Restoring Individual Files from Snapshots
Snapshots are mounted in a hidden directory under the parent dataset: <filename>.zfs/snapshot/<replaceable>snapshotname</replaceable></filename>. By default, these directories are not displayed even when a standard <command>ls -a</command> is issued. Although the directory is not displayed, it is there nevertheless and can be accessed like any normal directory. The property named <literal>snapdir</literal> controls whether these hidden directories show up in a directory listing. Setting the property to <literal>visible</literal> allows them to appear in the output of <command>ls</command> and other commands that deal with directory contents.
<prompt>#</prompt> <userinput>zfs get snapdir <replaceable>mypool/var/tmp</replaceable></userinput>
NAME            PROPERTY  VALUE   SOURCE
mypool/var/tmp  snapdir   hidden  default
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. passwd vi.recover
<prompt>#</prompt> <userinput>zfs set snapdir=visible <replaceable>mypool/var/tmp</replaceable></userinput>
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. .zfs passwd vi.recover
Individual files can easily be restored to a previous state by copying them from the snapshot back to the parent dataset. The directory structure below <filename>.zfs/snapshot</filename> contains a directory named exactly like each of the snapshots taken earlier, making them easy to identify. In the next example, a file is restored from the hidden <filename>.zfs</filename> directory by copying it from the snapshot that contains the latest version of the file:
<prompt>#</prompt> <userinput>rm /var/tmp/passwd</userinput>
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. .zfs vi.recover
<prompt>#</prompt> <userinput>ls /var/tmp/.zfs/snapshot</userinput>
after_cp my_recursive_snapshot
<prompt>#</prompt> <userinput>ls /var/tmp/.zfs/snapshot/<replaceable>after_cp</replaceable></userinput>
passwd vi.recover
<prompt>#</prompt> <userinput>cp /var/tmp/.zfs/snapshot/<replaceable>after_cp/passwd</replaceable> <replaceable>/var/tmp</replaceable></userinput>
Even when the <literal>snapdir</literal> property is set to hidden, running <command>ls .zfs/snapshot</command> will still list the contents of that directory. It is up to the administrator to decide whether these directories will be displayed; they can be made visible for certain datasets and hidden for others. Copying files or directories from the hidden <filename>.zfs/snapshot</filename> directory is simple enough. Trying it the other way around results in this error:
<prompt>#</prompt> <userinput>cp <replaceable>/etc/rc.conf</replaceable> /var/tmp/.zfs/snapshot/<replaceable>after_cp/</replaceable></userinput>
cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system
The error reminds the user that snapshots are read-only and cannot be changed after creation. Files cannot be copied into or removed from snapshot directories because that would change the state of the dataset they represent.
Snapshots consume space based on how much the parent file system has changed since the time of the snapshot. The <literal>written</literal> property of a snapshot tracks how much space is being used by the snapshot.
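For example, the space consumed by the <replaceable>after_cp</replaceable> snapshot taken earlier can be inspected like this (the values shown here are illustrative):
<prompt>#</prompt> <userinput>zfs get written <replaceable>mypool/var/tmp</replaceable>@<replaceable>after_cp</replaceable></userinput>
NAME                     PROPERTY  VALUE  SOURCE
mypool/var/tmp@after_cp  written   21K    -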
Snapshots are destroyed and the space reclaimed with <command>zfs destroy <replaceable>dataset</replaceable>@<replaceable>snapshot</replaceable></command>. Adding <option>-r</option> recursively removes all snapshots with the same name under the parent dataset. Adding <option>-n -v</option> to the command displays a list of the snapshots that would be deleted and an estimate of how much space would be reclaimed without performing the actual destroy operation.
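A hypothetical dry run over the snapshots from the earlier examples might look like this (the snapshot names and reclaimed space shown here are illustrative):
<prompt>#</prompt> <userinput>zfs destroy -rnv <replaceable>mypool</replaceable>@<replaceable>my_recursive_snapshot</replaceable></userinput>
would destroy mypool@my_recursive_snapshot
would destroy mypool/var/tmp@my_recursive_snapshot
would reclaim 262K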
Managing Clones
A clone is a copy of a snapshot that is treated more like a regular dataset. Unlike a snapshot, a clone is not read-only, is mounted, and can have its own properties. Once a clone has been created using <command>zfs clone</command>, the snapshot it was created from cannot be destroyed, as a later example demonstrates. The child/parent relationship between the clone and the snapshot can be reversed using <command>zfs promote</command>. After a clone has been promoted, the snapshot becomes a child of the clone, rather than of the original parent dataset. This changes how the space is accounted for, but does not actually change the amount of space consumed. The clone can be mounted at any point within the <acronym>ZFS</acronym> file system hierarchy, not just below the original location of the snapshot.
To demonstrate the clone feature, this example dataset is used:
<prompt>#</prompt> <userinput>zfs list -rt all <replaceable>camino/home/joe</replaceable></userinput>
NAME                    USED  AVAIL  REFER  MOUNTPOINT
camino/home/joe         108K   1.3G    87K  /usr/home/joe
camino/home/joe@plans    21K      -  85.5K  -
camino/home/joe@backup    0K      -    87K  -
A typical use for clones is to experiment with a specific dataset while keeping the snapshot around to fall back to in case something goes wrong. Since snapshots cannot be changed, a read/write clone of a snapshot is created. After the desired result is achieved in the clone, the clone can be promoted to a dataset and the old file system removed. This is not strictly necessary, as the clone and dataset can coexist without problems.
<prompt>#</prompt> <userinput>zfs clone <replaceable>camino/home/joe</replaceable>@<replaceable>backup</replaceable> <replaceable>camino/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>ls /usr/home/joe*</userinput>
/usr/home/joe:
backup.txz plans.txt

/usr/home/joenew:
backup.txz plans.txt
<prompt>#</prompt> <userinput>df -h /usr/home</userinput>
Filesystem       Size  Used  Avail  Capacity  Mounted on
usr/home/joe     1.3G   31k   1.3G        0%  /usr/home/joe
usr/home/joenew  1.3G   31k   1.3G        0%  /usr/home/joenew
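As mentioned above, the snapshot the clone was created from can no longer be destroyed while the clone depends on it (the exact error output may vary between <acronym>ZFS</acronym> versions):
<prompt>#</prompt> <userinput>zfs destroy <replaceable>camino/home/joe</replaceable>@<replaceable>backup</replaceable></userinput>
cannot destroy 'camino/home/joe@backup': snapshot has dependent clones
use '-R' to destroy the following datasets:
camino/home/joenew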
After a clone is created it is an exact copy of the state the dataset was in when the snapshot was taken. The clone can now be changed independently from its originating dataset. The only connection between the two is the snapshot. <acronym>ZFS</acronym> records this connection in the property <literal>origin</literal>. Once the dependency between the snapshot and the clone has been removed by promoting the clone using <command>zfs promote</command>, the <literal>origin</literal> of the clone is removed as it is now an independent dataset. This example demonstrates it:
<prompt>#</prompt> <userinput>zfs get origin <replaceable>camino/home/joenew</replaceable></userinput>
NAME                PROPERTY  VALUE                   SOURCE
camino/home/joenew  origin    camino/home/joe@backup  -
<prompt>#</prompt> <userinput>zfs promote <replaceable>camino/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs get origin <replaceable>camino/home/joenew</replaceable></userinput>
NAME                PROPERTY  VALUE  SOURCE
camino/home/joenew  origin    -      -
After making some changes, such as copying <filename>loader.conf</filename> to the promoted clone, the old dataset becomes obsolete and the promoted clone can replace it. This is achieved by two consecutive commands: <command>zfs destroy</command> on the old dataset and <command>zfs rename</command> on the clone to give it the name of the old dataset (it could also receive an entirely different name).
<prompt>#</prompt> <userinput>cp <replaceable>/boot/defaults/loader.conf</replaceable> <replaceable>/usr/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs destroy -f <replaceable>camino/home/joe</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs rename <replaceable>camino/home/joenew</replaceable> <replaceable>camino/home/joe</replaceable></userinput>
<prompt>#</prompt> <userinput>ls /usr/home/joe</userinput>
backup.txz loader.conf plans.txt
<prompt>#</prompt> <userinput>df -h <replaceable>/usr/home</replaceable></userinput>
Filesystem    Size  Used  Avail  Capacity  Mounted on
usr/home/joe  1.3G  128k   1.3G        0%  /usr/home/joe
The cloned snapshot is now handled like an ordinary dataset. It contains all the data from the original snapshot plus the files that were added to it, like <filename>loader.conf</filename>. Clones can be used in different scenarios to provide useful features to <acronym>ZFS</acronym> users. For example, jails could be provided as snapshots containing different sets of installed applications. Users can clone these snapshots and add their own applications as they see fit. Once they are satisfied with the changes, the clones can be promoted to full datasets and provided to end users, who can work with them as they would with any real dataset. This saves time and administrative overhead when providing these jails.
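A minimal sketch of this workflow, assuming a hypothetical template dataset named <replaceable>mypool/jails/template</replaceable>:
<prompt>#</prompt> <userinput>zfs snapshot <replaceable>mypool/jails/template</replaceable>@<replaceable>base</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs clone <replaceable>mypool/jails/template</replaceable>@<replaceable>base</replaceable> <replaceable>mypool/jails/user1</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs promote <replaceable>mypool/jails/user1</replaceable></userinput>
After the <command>zfs promote</command>, <replaceable>mypool/jails/user1</replaceable> is an independent dataset and the template snapshot becomes its child.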
Replication
Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. <acronym>ZFS</acronym> provides a built-in serialization feature that can send a stream representation of the data to standard output. Using this technique, it is possible to not only store the data on another pool connected to the local system, but also to send it over a network to another system. Snapshots are the basis for this replication (see the section on <link linkend="zfs-zfs-snapshot"><acronym>ZFS</acronym> snapshots</link>). The commands used for replicating data are <command>zfs send</command> and <command>zfs receive</command>.
These examples demonstrate <acronym>ZFS</acronym> replication with these two pools:
<prompt>#</prompt> <userinput>zpool list</userinput>
NAME    SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
backup  960M    77K  896M        -         -    0%   0%  1.00x  ONLINE  -
mypool  984M  43.7M  940M        -         -    0%   4%  1.00x  ONLINE  -
The pool named <replaceable>mypool</replaceable> is the primary pool where data is written to and read from on a regular basis. A second pool, <replaceable>backup</replaceable>, is used as a standby in case the primary pool becomes unavailable. Note that this fail-over is not done automatically by <acronym>ZFS</acronym>, but must be done manually by a system administrator when needed. A snapshot is used to provide a consistent version of the file system to be replicated. Once a snapshot of <replaceable>mypool</replaceable> has been created, it can be copied to the <replaceable>backup</replaceable> pool. Only snapshots can be replicated; changes made since the most recent snapshot will not be included.
<prompt>#</prompt> <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs list -t snapshot</userinput>
NAME            USED  AVAIL  REFER  MOUNTPOINT
mypool@backup1     0      -  43.6M  -
Now that a snapshot exists, <command>zfs send</command> can be used to create a stream representing the contents of the snapshot. This stream can be stored as a file or received by another pool. The stream is written to standard output, which must be redirected to a file or pipe; otherwise, an error is produced:
<prompt>#</prompt> <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable></userinput>
Error: Stream can not be written to a terminal.
You must redirect standard output.
To back up a dataset with <command>zfs send</command>, redirect to a file located on the mounted backup pool. Ensure that the pool has enough free space to accommodate the size of the snapshot being sent, which means all of the data contained in the snapshot, not just the changes from the previous snapshot.
<prompt>#</prompt> <userinput>zfs send <replaceable>mypool</replaceable>@<replaceable>backup1</replaceable> &gt; <replaceable>/backup/backup1</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool list</userinput>
NAME    SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
backup  960M  63.7M  896M        -         -    0%   6%  1.00x  ONLINE  -
mypool  984M  43.7M  940M        -         -    0%   4%  1.00x  ONLINE  -
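To restore such a stored stream, it can later be fed to <command>zfs receive</command>. A minimal sketch, assuming a target dataset named <replaceable>backup/mypool</replaceable> that does not yet exist:
<prompt>#</prompt> <userinput>zfs receive <replaceable>backup/mypool</replaceable> &lt; <replaceable>/backup/backup1</replaceable></userinput>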