<emphasis><varname>vfs.zfs.l2arc_write_max</varname></emphasis> - Limit the amount of data written to the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link> per second. This tunable is designed to extend the longevity of <acronym>SSD</acronym>s by limiting the amount of data written to the device. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
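As a minimal sketch of adjusting this tunable at runtime, and assuming the value is given in bytes per second, the figure below (16 MB per second) is an example only, not a recommendation:
# sysctl vfs.zfs.l2arc_write_max=16777216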
<emphasis><varname>vfs.zfs.l2arc_write_boost</varname></emphasis> - The value of this tunable is added to <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> and increases the write speed to the <acronym>SSD</acronym> until the first block is evicted from the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. This <quote>Turbo Warmup Phase</quote> is designed to reduce the performance loss from an empty <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link> after a reboot. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
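If persistence across reboots is desired, both tunables can also be set in <filename>/etc/sysctl.conf</filename>; the values below are assumed examples only:
vfs.zfs.l2arc_write_max=16777216
vfs.zfs.l2arc_write_boost=33554432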
<emphasis><varname>vfs.zfs.scrub_delay</varname></emphasis> - Number of ticks to delay between each I/O during a <link linkend="zfs-term-scrub"><command>scrub</command></link>. To ensure that a <command>scrub</command> does not interfere with the normal operation of the pool, if any other <acronym>I/O</acronym> is happening, the <command>scrub</command> will delay between each command. This value controls the limit on the total <acronym>IOPS</acronym> (I/Os Per Second) generated by the <command>scrub</command>. The granularity of the setting is determined by the value of <varname>kern.hz</varname>, which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is <literal>4</literal>, resulting in a limit of: 1000 ticks/sec / 4 = 250 <acronym>IOPS</acronym>. Using a value of <replaceable>20</replaceable> would give a limit of: 1000 ticks/sec / 20 = 50 <acronym>IOPS</acronym>. The speed of <command>scrub</command> is only limited when there has been recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
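For example, to apply the 50 <acronym>IOPS</acronym> limit described above, assuming the default <varname>kern.hz</varname> of 1000 ticks per second, the delay could be raised at runtime:
# sysctl vfs.zfs.scrub_delay=20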
<emphasis><varname>vfs.zfs.resilver_delay</varname></emphasis> - Number of ticks to delay between each I/O during a <link linkend="zfs-term-resilver">resilver</link>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening, the resilver will delay between each command. This value controls the limit on the total <acronym>IOPS</acronym> (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of <varname>kern.hz</varname>, which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 <acronym>IOPS</acronym>. Returning the pool to an <link linkend="zfs-term-online">Online</link> state may be more important if another device failing could <link linkend="zfs-term-faulted">Fault</link> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
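For example, to give a resilver the same priority as other operations while a pool is at risk of losing another device, the delay could be set to 0 at runtime, as described above:
# sysctl vfs.zfs.resilver_delay=0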
<emphasis><varname>vfs.zfs.scan_idle</varname></emphasis> - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle the rate limiting for <link linkend="zfs-term-scrub"><command>scrub</command></link> and <link linkend="zfs-term-resilver">resilver</link> are disabled. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.txg.timeout</varname></emphasis> - Maximum number of seconds between <link linkend="zfs-term-txg">transaction group</link>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
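As a sketch, a longer interval such as 10 seconds (an assumed example value, not a recommendation) could be set at runtime:
# sysctl vfs.zfs.txg.timeout=10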
<acronym>ZFS</acronym> on i386
Some of the features provided by <acronym>ZFS</acronym> are memory intensive, and may require tuning for maximum efficiency on systems with limited <acronym>RAM</acronym>.
Memory
As a bare minimum, the total system memory should be at least one gigabyte. The recommended amount of <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended amount of RAM.
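As a worked example of these rules of thumb, a hypothetical 8 TB pool would call for roughly:
8 TB x 1 GB/TB = 8 GB of RAM (without deduplication)
8 TB x 5 GB/TB = 40 GB of RAM (with deduplication)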
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB (2048 MB / 4 = 512).
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide"/>.
Additional Resources
<link xlink:href="http://open-zfs.org">OpenZFS</link>
<link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>
<link xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity and Integrity</link>
<acronym>ZFS</acronym> Features and Terminology
<acronym>ZFS</acronym> is a fundamentally different file system because it is more than just a file system. <acronym>ZFS</acronym> combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system, with the new space immediately available on all of the existing file systems in that pool. By combining the traditionally separate roles, <acronym>ZFS</acronym> is able to overcome previous limitations that prevented <acronym>RAID</acronym> groups from growing. Each top-level device in a pool is called a <emphasis>vdev</emphasis>, which can be a simple disk or a <acronym>RAID</acronym> transformation such as a mirror or <acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file systems (called <emphasis>datasets</emphasis>) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall of extensive partitioning, where free space becomes fragmented across the partitions.
pool
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool.
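As a minimal sketch, a pool is created from one or more vdevs with <command>zpool create</command> and can then be inspected with <command>zpool list</command>; the pool name and disk devices below are hypothetical:
# zpool create mypool mirror /dev/ada1 /dev/ada2
# zpool list mypool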
vdev Types
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, an entire disk should not be used as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time and because there is no place to put boot code.
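As a sketch of the partition-based approach recommended above, a pool could be created on a single existing partition; the device and pool name below are hypothetical:
# zpool create datapool /dev/ada0p3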
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size.
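As a sketch of a file-backed pool for testing, assuming a scratch file of at least 128 MB, the file path and pool name below are hypothetical:
# truncate -s 256m /tmp/zfsfile0
# zpool create testpool /tmp/zfsfile0
# zpool status testpool
# zpool destroy testpool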
