A second snapshot called <replaceable>replica2</replaceable> was created. This second snapshot contains only the changes that were made to the file system between now and the previous snapshot, <replaceable>replica1</replaceable>. Using <command>zfs send -i</command> and indicating the pair of snapshots generates an incremental replica stream containing only the data that has changed. This can only succeed if the initial snapshot already exists on the receiving side.
<prompt>#</prompt> <userinput>zfs send -v -i <replaceable>mypool</replaceable>@<replaceable>replica1</replaceable> <replaceable>mypool</replaceable>@<replaceable>replica2</replaceable> | zfs receive <replaceable>/backup/mypool</replaceable></userinput>
send from @replica1 to mypool@replica2 estimated size is 5.02M
total estimated size is 5.02M

<prompt>#</prompt> <userinput>zpool list</userinput>
NAME    SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
backup  960M  80.8M  879M        -         -    0%    8%  1.00x  ONLINE  -
mypool  960M  50.2M  910M        -         -    0%    5%  1.00x  ONLINE  -

<prompt>#</prompt> <userinput>zfs list</userinput>
NAME           USED  AVAIL  REFER  MOUNTPOINT
backup         55.4M  240G   152K  /backup
backup/mypool  55.3M  240G  55.2M  /backup/mypool
mypool         55.6M  11.6G 55.0M  /mypool

<prompt>#</prompt> <userinput>zfs list -t snapshot</userinput>
NAME                    USED  AVAIL  REFER  MOUNTPOINT
backup/mypool@replica1  104K      -  50.2M  -
backup/mypool@replica2     0      -  55.2M  -
mypool@replica1        29.9K      -  50.0M  -
mypool@replica2            0      -  55.0M  -
The incremental stream was successfully transferred. Only the data that had changed was replicated, rather than the entirety of <replaceable>replica1</replaceable>. Sending only the differences took much less time to transfer and saved disk space by not copying the complete pool each time. This is useful when relying on slow networks or when costs per transferred byte must be considered.
A new file system, <replaceable>backup/mypool</replaceable>, is available with all of the files and data from the pool <replaceable>mypool</replaceable>. If <option>-P</option> is specified, the properties of the dataset will be copied, including compression settings, quotas, and mount points. When <option>-R</option> is specified, all child datasets of the indicated dataset will be copied, along with all of their properties. Sending and receiving can be automated so that regular backups are created on the second pool.
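As a sketch of such automation, a scheduled job could create a new snapshot and send only the increment since the previous run. The snapshot names <replaceable>yesterday</replaceable> and <replaceable>today</replaceable> are placeholders, and this assumes the snapshot <replaceable>yesterday</replaceable> from the previous run still exists on both pools:
<prompt>#</prompt> <userinput>zfs snapshot <replaceable>mypool</replaceable>@<replaceable>today</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs send -i <replaceable>mypool</replaceable>@<replaceable>yesterday</replaceable> <replaceable>mypool</replaceable>@<replaceable>today</replaceable> | zfs receive <replaceable>backup/mypool</replaceable></userinput>
A real job would also rotate the snapshot names, for example by destroying <replaceable>yesterday</replaceable> and renaming <replaceable>today</replaceable> after a successful transfer.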
Sending Encrypted Backups over <application>SSH</application>
Sending streams over the network is a good way to keep a remote backup, but it does come with a drawback. Data sent over the network link is not encrypted, allowing anyone to intercept and transform the streams back into data without the knowledge of the sending user. This is undesirable, especially when sending the streams over the internet to a remote host. <application>SSH</application> can be used to securely encrypt data sent over a network connection. Since <acronym>ZFS</acronym> only requires the stream to be redirected from standard output, it is relatively easy to pipe it through <application>SSH</application>. To keep the contents of the file system encrypted in transit and on the remote system, consider using <link xlink:href="">PEFS</link>.
A few settings and security precautions must be completed first. Only the necessary steps required for the <command>zfs send</command> operation are shown here. For more information on <application>SSH</application>, see <xref linkend="openssh"/>.
This configuration is required:
Passwordless <application>SSH</application> access between sending and receiving host using <application>SSH</application> keys
Normally, the privileges of the <systemitem class="username">root</systemitem> user are needed to send and receive streams. This requires logging in to the receiving system as <systemitem class="username">root</systemitem>. However, logging in as <systemitem class="username">root</systemitem> is disabled by default for security reasons. The <link linkend="zfs-zfs-allow">ZFS Delegation</link> system can be used to allow a non-<systemitem class="username">root</systemitem> user on each system to perform the respective send and receive operations.
On the sending system:
<prompt>#</prompt> <userinput>zfs allow -u <replaceable>someuser</replaceable> send,snapshot <replaceable>mypool</replaceable></userinput>
To mount the pool, the unprivileged user must own the directory, and regular users must be allowed to mount file systems. On the receiving system:
<prompt>#</prompt> <userinput>sysctl vfs.usermount=1</userinput>
vfs.usermount: 0 -&gt; 1
<prompt>#</prompt> <userinput>echo vfs.usermount=1 &gt;&gt; /etc/sysctl.conf</userinput>
<prompt>#</prompt> <userinput>zfs create <replaceable>recvpool/backup</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs allow -u <replaceable>someuser</replaceable> create,mount,receive <replaceable>recvpool/backup</replaceable></userinput>
<prompt>#</prompt> <userinput>chown <replaceable>someuser</replaceable> <replaceable>/recvpool/backup</replaceable></userinput>
The unprivileged user now has the ability to receive and mount datasets, and the <replaceable>home</replaceable> dataset can be replicated to the remote system:
<prompt>%</prompt> <userinput>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable></userinput>
<prompt>%</prompt> <userinput>zfs send -R <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> | ssh <replaceable>someuser@backuphost</replaceable> zfs recv -dvu <replaceable>recvpool/backup</replaceable></userinput>
A recursive snapshot called <replaceable>monday</replaceable> is made of the file system dataset <replaceable>home</replaceable> that resides on the pool <replaceable>mypool</replaceable>. Then it is sent with <command>zfs send -R</command> to include the dataset, all child datasets, snapshots, clones, and settings in the stream. The output is piped to the waiting <command>zfs receive</command> on the remote host <replaceable>backuphost</replaceable> through <application>SSH</application>. Using a fully qualified domain name or IP address is recommended. The receiving machine writes the data to the <replaceable>backup</replaceable> dataset on the <replaceable>recvpool</replaceable> pool. Adding <option>-d</option> to <command>zfs recv</command> overwrites the name of the pool on the receiving side with the name of the snapshot. <option>-u</option> causes the file systems to not be mounted on the receiving side. When <option>-v</option> is included, more detail about the transfer is shown, including elapsed time and the amount of data transferred.
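Later backups can reuse the same pipeline with an incremental stream, so that only changes since the previous snapshot cross the network. As a sketch, assuming a newer recursive snapshot named <replaceable>tuesday</replaceable> (a placeholder name) and that <replaceable>monday</replaceable> already exists on the receiving pool:
<prompt>%</prompt> <userinput>zfs snapshot -r <replaceable>mypool/home</replaceable>@<replaceable>tuesday</replaceable></userinput>
<prompt>%</prompt> <userinput>zfs send -R -i <replaceable>mypool/home</replaceable>@<replaceable>monday</replaceable> <replaceable>mypool/home</replaceable>@<replaceable>tuesday</replaceable> | ssh <replaceable>someuser@backuphost</replaceable> zfs recv -dvu <replaceable>recvpool/backup</replaceable></userinput>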
Dataset, User, and Group Quotas
<link linkend="zfs-term-quota">Dataset quotas</link> are used to restrict the amount of space that can be consumed by a particular dataset. <link linkend="zfs-term-refquota">Reference Quotas</link> work in very much the same way, but only count the space used by the dataset itself, excluding snapshots and child datasets. Similarly, <link linkend="zfs-term-userquota">user</link> and <link linkend="zfs-term-groupquota">group</link> quotas can be used to prevent users or groups from using all of the space in the pool or dataset.
The following examples assume that the users already exist in the system. Before adding a user to the system, create their home dataset first and set the <option>mountpoint</option> to <literal>/home/<replaceable>bob</replaceable></literal>. Then, create the user and make the home directory point to the dataset's <option>mountpoint</option> location. This properly sets owner and group permissions without shadowing any pre-existing home directory path.
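As a sketch, for a hypothetical user <replaceable>bob</replaceable> on the pool <replaceable>storage</replaceable>, the dataset can be created with its mount point set in one step, and the account created afterwards with pw(8):
<prompt>#</prompt> <userinput>zfs create -o mountpoint=/home/<replaceable>bob</replaceable> <replaceable>storage/home/bob</replaceable></userinput>
<prompt>#</prompt> <userinput>pw useradd <replaceable>bob</replaceable> -d /home/<replaceable>bob</replaceable> -m</userinput>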
To enforce a dataset quota of 10 GB for <filename>storage/home/bob</filename>:
<prompt>#</prompt> <userinput>zfs set quota=10G storage/home/bob</userinput>
To enforce a reference quota of 10 GB for <filename>storage/home/bob</filename>:
<prompt>#</prompt> <userinput>zfs set refquota=10G storage/home/bob</userinput>
To remove a quota of 10 GB for <filename>storage/home/bob</filename>:
<prompt>#</prompt> <userinput>zfs set quota=none storage/home/bob</userinput>
To enforce a user quota, the general format is <literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>, and the user's name must be in one of these formats:
<acronym>POSIX</acronym> compatible name such as <replaceable>joe</replaceable>.
<acronym>POSIX</acronym> numeric ID such as <replaceable>789</replaceable>.
<acronym>SID</acronym> name such as <replaceable>joe.smith@mydomain</replaceable>.
<acronym>SID</acronym> numeric ID such as <replaceable>S-1-123-456-789</replaceable>.
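For example, to set a quota of 50 GB for a hypothetical user <replaceable>joe</replaceable>, or a group quota (format <literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>) for a group <replaceable>firstgroup</replaceable>:
<prompt>#</prompt> <userinput>zfs set userquota@<replaceable>joe</replaceable>=50G <replaceable>storage/home/joe</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs set groupquota@<replaceable>firstgroup</replaceable>=50G <replaceable>storage/home</replaceable></userinput>
Setting the value to <literal>none</literal> removes a user or group quota again.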

