At this point, the user realized that too many files were deleted and wants them back. <acronym>ZFS</acronym> provides an easy way to get them back using rollbacks, but only when snapshots of important data are performed on a regular basis. To get the files back and start over from the last snapshot, issue the command:
<prompt>#</prompt> <userinput>zfs rollback <replaceable>mypool/var/tmp@diff_snapshot</replaceable></userinput>
<prompt>#</prompt> <userinput>ls /var/tmp</userinput>
passwd passwd.copy vi.recover
The rollback operation restored the dataset to the state of the last snapshot. It is also possible to roll back to a snapshot that was taken much earlier and has other snapshots that were created after it. When trying to do this, <acronym>ZFS</acronym> will issue this warning:
<prompt>#</prompt> <userinput>zfs list -rt snapshot <replaceable>mypool/var/tmp</replaceable></userinput>
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp@my_recursive_snapshot 88K - 152K -
mypool/var/tmp@after_cp 53.5K - 118K -
mypool/var/tmp@diff_snapshot 0 - 120K -
<prompt>#</prompt> <userinput>zfs rollback <replaceable>mypool/var/tmp@my_recursive_snapshot</replaceable></userinput>
cannot rollback to 'mypool/var/tmp@my_recursive_snapshot': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
This warning means that snapshots exist between the current state of the dataset and the snapshot to which the user wants to roll back. To complete the rollback, these snapshots must be deleted. <acronym>ZFS</acronym> cannot track all the changes between different states of the dataset, because snapshots are read-only. <acronym>ZFS</acronym> will not delete the affected snapshots unless the user specifies <option>-r</option> to indicate that this is the desired action. If that is the intention, and the consequences of losing all intermediate snapshots are understood, the command can be issued:
<prompt>#</prompt> <userinput>zfs rollback -r <replaceable>mypool/var/tmp@my_recursive_snapshot</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs list -rt snapshot <replaceable>mypool/var/tmp</replaceable></userinput>
NAME USED AVAIL REFER MOUNTPOINT
mypool/var/tmp@my_recursive_snapshot 8K - 152K -
<prompt>#</prompt> <userinput>ls /var/tmp</userinput>
The output from <command>zfs list -t snapshot</command> confirms that the intermediate snapshots were removed as a result of <command>zfs rollback -r</command>.
Restoring Individual Files from Snapshots
Snapshots are mounted in a hidden directory under the parent dataset: <filename>.zfs/snapshot/<replaceable>snapshotname</replaceable></filename>. By default, these directories will not be displayed even when a standard <command>ls -a</command> is issued. Although the directory is not displayed, it is there nevertheless and can be accessed like any normal directory. The property named <literal>snapdir</literal> controls whether these hidden directories show up in a directory listing. Setting the property to <literal>visible</literal> allows them to appear in the output of <command>ls</command> and other commands that deal with directory contents.
<prompt>#</prompt> <userinput>zfs get snapdir <replaceable>mypool/var/tmp</replaceable></userinput>
NAME PROPERTY VALUE SOURCE
mypool/var/tmp snapdir hidden default
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. passwd vi.recover
<prompt>#</prompt> <userinput>zfs set snapdir=visible <replaceable>mypool/var/tmp</replaceable></userinput>
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. .zfs passwd vi.recover
Individual files can easily be restored to a previous state by copying them from the snapshot back to the parent dataset. The directory structure below <filename>.zfs/snapshot</filename> has a directory named exactly like the snapshots taken earlier to make it easier to identify them. In the next example, it is assumed that a file is to be restored from the hidden <filename>.zfs</filename> directory by copying it from the snapshot that contained the latest version of the file:
<prompt>#</prompt> <userinput>rm /var/tmp/passwd</userinput>
<prompt>#</prompt> <userinput>ls -a /var/tmp</userinput>
. .. .zfs vi.recover
<prompt>#</prompt> <userinput>ls /var/tmp/.zfs/snapshot</userinput>
after_cp my_recursive_snapshot
<prompt>#</prompt> <userinput>ls /var/tmp/.zfs/snapshot/<replaceable>after_cp</replaceable></userinput>
passwd vi.recover
<prompt>#</prompt> <userinput>cp /var/tmp/.zfs/snapshot/<replaceable>after_cp/passwd</replaceable> <replaceable>/var/tmp</replaceable></userinput>
Even when the <literal>snapdir</literal> property is set to <literal>hidden</literal>, running <command>ls .zfs/snapshot</command> will still list the contents of that directory. It is up to the administrator to decide whether these directories will be displayed, and the setting can differ per dataset: display them for certain datasets and prevent it for others. Copying files or directories out of the hidden <filename>.zfs/snapshot</filename> directory is simple enough. Trying it the other way around results in this error:
<prompt>#</prompt> <userinput>cp <replaceable>/etc/rc.conf</replaceable> /var/tmp/.zfs/snapshot/<replaceable>after_cp/</replaceable></userinput>
cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system
The error reminds the user that snapshots are read-only and cannot be changed after creation. Files cannot be copied into or removed from snapshot directories because that would change the state of the dataset they represent.
Snapshots consume space based on how much the parent file system has changed since the time of the snapshot. The <literal>written</literal> property of a snapshot tracks how much space is being used by the snapshot.
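For example, the property can be queried directly for one of the snapshots created earlier (the value shown here is illustrative; actual sizes depend on how much the dataset has changed):

<prompt>#</prompt> <userinput>zfs get written <replaceable>mypool/var/tmp@after_cp</replaceable></userinput>
NAME PROPERTY VALUE SOURCE
mypool/var/tmp@after_cp written 53.5K -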
Snapshots are destroyed and the space reclaimed with <command>zfs destroy <replaceable>dataset</replaceable>@<replaceable>snapshot</replaceable></command>. Adding <option>-r</option> recursively removes all snapshots with the same name under the parent dataset. Adding <option>-n -v</option> to the command displays a list of the snapshots that would be deleted and an estimate of how much space would be reclaimed without performing the actual destroy operation.
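A dry run of this kind, using a snapshot name from the earlier examples, shows what would be removed without removing anything (the reported sizes are illustrative):

<prompt>#</prompt> <userinput>zfs destroy -rnv <replaceable>mypool/var/tmp@my_recursive_snapshot</replaceable></userinput>
would destroy mypool/var/tmp@my_recursive_snapshot
would reclaim 8K

Running the same command without <option>-n -v</option> then performs the actual destroy operation.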
Managing Clones
A clone is a copy of a snapshot that is treated more like a regular dataset. Unlike a snapshot, a clone is not read-only, is mounted, and can have its own properties. Once a clone has been created using <command>zfs clone</command>, the snapshot it was created from cannot be destroyed. The child/parent relationship between the clone and the snapshot can be reversed using <command>zfs promote</command>. After a clone has been promoted, the snapshot becomes a child of the clone, rather than of the original parent dataset. This changes how the space is accounted, but does not actually change the amount of space consumed. The clone can be mounted at any point within the <acronym>ZFS</acronym> file system hierarchy, not just below the original location of the snapshot.
To demonstrate the clone feature, this example dataset is used:
<prompt>#</prompt> <userinput>zfs list -rt all <replaceable>camino/home/joe</replaceable></userinput>
NAME USED AVAIL REFER MOUNTPOINT
camino/home/joe 108K 1.3G 87K /usr/home/joe
camino/home/joe@plans 21K - 85.5K -
camino/home/joe@backup 0K - 87K -
A typical use for clones is to experiment with a specific dataset while keeping the snapshot around to fall back to in case something goes wrong. Since snapshots cannot be changed, a read/write clone of a snapshot is created. After the desired result is achieved in the clone, the clone can be promoted to a dataset and the old file system removed. This is not strictly necessary, as the clone and dataset can coexist without problems.
<prompt>#</prompt> <userinput>zfs clone <replaceable>camino/home/joe</replaceable>@<replaceable>backup</replaceable> <replaceable>camino/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>ls /usr/home/joe*</userinput>
/usr/home/joe:
backup.txz plans.txt

/usr/home/joenew:
backup.txz plans.txt
<prompt>#</prompt> <userinput>df -h /usr/home</userinput>
Filesystem Size Used Avail Capacity Mounted on
usr/home/joe 1.3G 31k 1.3G 0% /usr/home/joe
usr/home/joenew 1.3G 31k 1.3G 0% /usr/home/joenew
After a clone is created it is an exact copy of the state the dataset was in when the snapshot was taken. The clone can now be changed independently from its originating dataset. The only connection between the two is the snapshot. <acronym>ZFS</acronym> records this connection in the property <literal>origin</literal>. Once the dependency between the snapshot and the clone has been removed by promoting the clone using <command>zfs promote</command>, the <literal>origin</literal> of the clone is removed as it is now an independent dataset. This example demonstrates it:
<prompt>#</prompt> <userinput>zfs get origin <replaceable>camino/home/joenew</replaceable></userinput>
NAME PROPERTY VALUE SOURCE
camino/home/joenew origin camino/home/joe@backup -
<prompt>#</prompt> <userinput>zfs promote <replaceable>camino/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs get origin <replaceable>camino/home/joenew</replaceable></userinput>
NAME PROPERTY VALUE SOURCE
camino/home/joenew origin - -
After making some changes, like copying <filename>loader.conf</filename> to the promoted clone, the old dataset becomes obsolete and the promoted clone can replace it. This is achieved by two consecutive commands: <command>zfs destroy</command> on the old dataset and <command>zfs rename</command> on the clone to give it the name of the old dataset (it could also receive an entirely different name).
<prompt>#</prompt> <userinput>cp <replaceable>/boot/defaults/loader.conf</replaceable> <replaceable>/usr/home/joenew</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs destroy -f <replaceable>camino/home/joe</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs rename <replaceable>camino/home/joenew</replaceable> <replaceable>camino/home/joe</replaceable></userinput>
<prompt>#</prompt> <userinput>ls /usr/home/joe</userinput>
backup.txz loader.conf plans.txt
<prompt>#</prompt> <userinput>df -h <replaceable>/usr/home</replaceable></userinput>
Filesystem Size Used Avail Capacity Mounted on
usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe
The cloned snapshot is now handled like an ordinary dataset. It contains all the data from the original snapshot plus the files that were added to it, like <filename>loader.conf</filename>. Clones can be used in different scenarios to provide useful features to <acronym>ZFS</acronym> users. For example, jails could be provided as snapshots containing different sets of installed applications. Users can clone these snapshots and add their own applications as they see fit. Once they are satisfied with the changes, the clones can be promoted to full datasets and provided to end users to work with like they would with a real dataset. This saves time and administrative overhead when providing these jails.
Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. <acronym>ZFS</acronym> provides a built-in serialization feature that can send a stream representation of the data to standard output. Using this technique, it is possible to not only store the data on another pool connected to the local system, but also to send it over a network to another system. Snapshots are the basis for this replication (see the section on <link linkend="zfs-zfs-snapshot"><acronym>ZFS</acronym> snapshots</link>). The commands used for replicating data are <command>zfs send</command> and <command>zfs receive</command>.
These examples demonstrate <acronym>ZFS</acronym> replication with these two pools:
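In its simplest form, the procedure is to take a snapshot of the dataset to replicate and pipe the stream produced by <command>zfs send</command> into <command>zfs receive</command> on the target. As a minimal sketch, with <replaceable>mypool</replaceable> and <replaceable>backup</replaceable> standing in for the source and target pools:

<prompt>#</prompt> <userinput>zfs snapshot <replaceable>mypool@backup1</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs send <replaceable>mypool@backup1</replaceable> | zfs receive <replaceable>backup/mypool</replaceable></userinput>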