<emphasis><varname>vfs.zfs.resilver_delay</varname></emphasis> - Number of milliseconds of delay inserted between each I/O during a <link linkend="zfs-term-resilver">resilver</link>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening the resilver will delay between each command. This value controls the limit of total <acronym>IOPS</acronym> (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 <acronym>IOPS</acronym>. Returning the pool to an <link linkend="zfs-term-online">Online</link> state may be more important if another device failing could <link linkend="zfs-term-faulted">Fault</link> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
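As a hedged illustration (the output values shown are the stated defaults), the effective limit can be inspected and the delay removed at runtime:
# sysctl kern.hz vfs.zfs.resilver_delay
kern.hz: 1000
vfs.zfs.resilver_delay: 2
# sysctl vfs.zfs.resilver_delay=0
vfs.zfs.resilver_delay: 2 -> 0
With <varname>kern.hz</varname> at 1000 and a delay of 2, the limit works out to 1000 / 2 = 500 <acronym>IOPS</acronym>; setting the delay to 0 removes the limit entirely.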
<emphasis><varname>vfs.zfs.scan_idle</varname></emphasis> - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle, the rate limiting for <link linkend="zfs-term-scrub"><command>scrub</command></link> and <link linkend="zfs-term-resilver">resilver</link> is disabled. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
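A sketch of adjusting it (the previous value shown is illustrative); appending the same setting to <filename>/etc/sysctl.conf</filename> makes it persist across reboots:
# sysctl vfs.zfs.scan_idle=100
vfs.zfs.scan_idle: 50 -> 100
# echo 'vfs.zfs.scan_idle=100' >> /etc/sysctl.conf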
<emphasis><varname>vfs.zfs.txg.timeout</varname></emphasis> - Maximum number of seconds between <link linkend="zfs-term-txg">transaction group</link>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
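For example (a sketch; the change takes effect immediately), doubling the interval to favor fewer, larger transaction group commits:
# sysctl vfs.zfs.txg.timeout=10
vfs.zfs.txg.timeout: 5 -> 10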
<acronym>ZFS</acronym> on i386
Some of the features provided by <acronym>ZFS</acronym> are memory intensive, and may require tuning for maximum efficiency on systems with limited <acronym>RAM</acronym>.
As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended amount of RAM.
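By these rules of thumb, for example, a 4 TB pool would call for roughly 4 GB of RAM, and deduplicating that same 4 TB would call for roughly 4 × 5 GB = 20 GB.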
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB.
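A sketch of the rebuild-and-reboot step, assuming the option has been added to a custom configuration file named ZFSKERNEL (the name is only an example) in the usual location, <filename>/usr/src/sys/i386/conf</filename>:
# cd /usr/src
# make buildkernel KERNCONF=ZFSKERNEL
# make installkernel KERNCONF=ZFSKERNEL
# shutdown -r now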
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
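A sketch of such <filename>/boot/loader.conf</filename> entries, with illustrative values sized for roughly 1 GB of RAM (adjust the figures for the hardware at hand):
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"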
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href=""/>.
Additional Resources
<link xlink:href="">OpenZFS</link>
<link xlink:href="">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>
<link xlink:href="">Calomel Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity and Integrity</link>
<acronym>ZFS</acronym> Features and Terminology
<acronym>ZFS</acronym> is a fundamentally different file system because it is more than just a file system. <acronym>ZFS</acronym> combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and making the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, <acronym>ZFS</acronym> is able to overcome previous limitations that prevented <acronym>RAID</acronym> groups from being able to grow. Each top level device in a pool is called a <emphasis>vdev</emphasis>, which can be a simple disk or a <acronym>RAID</acronym> transformation such as a mirror or <acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file systems (called <emphasis>datasets</emphasis>) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmented across the partitions.
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool.
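A brief sketch of the relationship (the pool, device, and dataset names are only examples): a pool is created from one or more vdevs, and every dataset created in it draws from the same free space:
# zpool create mypool ada1 ada2
# zfs create mypool/home
# zfs create mypool/var-log
# zfs list -r mypool
The AVAIL column reported for every dataset is the same, because all datasets allocate from the shared pool rather than from fixed-size partitions.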
vdev Types
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, you should not use an entire disk as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time, and because there is nowhere to put boot code.
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size.
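A sketch of a throwaway file-backed pool for experimentation (the file and pool names are arbitrary); the backing file must be at least 128 MB:
# truncate -s 256M /tmp/zfsfile
# zpool create testpool /tmp/zfsfile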
<emphasis>Mirror</emphasis> - When creating a mirror, specify the <literal>mirror</literal> keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices; all data is written to all member devices. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.
A regular single disk vdev can be upgraded to a mirror vdev at any time with <command>zpool <link linkend="zfs-zpool-attach">attach</link></command>.
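For example (pool and device names are illustrative), the first command creates a new two-way mirror, while the second attaches a new device to the single existing disk of another pool, turning that vdev into a mirror:
# zpool create mpool mirror ada1 ada2
# zpool attach otherpool ada3 ada4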
<emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard <acronym>RAID-5</acronym> that offers better distribution of parity and eliminates the <quote><acronym>RAID-5</acronym> write hole</quote> in which the data and parity information become inconsistent after an unexpected restart. <acronym>ZFS</acronym> supports three levels of <acronym>RAID-Z</acronym> which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational.
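As an illustration (device and pool names assumed), a <acronym>RAID-Z2</acronym> vdev built from six disks keeps the pool operational through the failure of any two of them:
# zpool create storage raidz2 ada1 ada2 ada3 ada4 ada5 ada6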