<emphasis><varname>vfs.zfs.vdev.cache.size</varname></emphasis> - A preallocated amount of memory reserved as a cache for each device in the pool. The total amount of memory used will be this value multiplied by the number of devices. This value can only be adjusted at boot time, and is set in <filename>/boot/loader.conf</filename>.
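Because this tunable is read only at boot, it belongs in <filename>/boot/loader.conf</filename> rather than being set with <command>sysctl</command>. A minimal sketch, where the 5 MB value is an illustrative assumption, not a recommendation:

```shell
# /boot/loader.conf
# Reserve 5 MB of read cache per device; a 4-disk pool would then
# consume 4 x 5 MB = 20 MB total. Value shown is only an example.
vfs.zfs.vdev.cache.size="5M"
```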
<emphasis><varname>vfs.zfs.min_auto_ashift</varname></emphasis> - Minimum <varname>ashift</varname> (sector size) that will be used automatically at pool creation time. The value is a power of two. The default value of <literal>9</literal> represents <literal>2^9 = 512</literal>, a sector size of 512 bytes. To avoid <emphasis>write amplification</emphasis> and get the best performance, set this value to the largest sector size used by a device in the pool.
Many drives have 4 KB sectors. Using the default <varname>ashift</varname> of <literal>9</literal> with these drives results in write amplification on these devices. Data that could be contained in a single 4 KB write must instead be written in eight 512-byte writes. <acronym>ZFS</acronym> tries to read the native sector size from all devices when creating a pool, but many drives with 4 KB sectors report that their sectors are 512 bytes for compatibility. Setting <varname>vfs.zfs.min_auto_ashift</varname> to <literal>12</literal> (<literal>2^12 = 4096</literal>) before creating a pool forces <acronym>ZFS</acronym> to use 4 KB blocks for best performance on these drives.
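The sequence below sketches this procedure. The pool and device names are placeholder examples, not values from this document:

```shell
# Force a minimum sector size of 4 KB (2^12) before pool creation.
sysctl vfs.zfs.min_auto_ashift=12

# Create the pool; ZFS now uses ashift=12 even if the drives
# report 512-byte sectors. Names below are examples only.
zpool create mypool mirror /dev/ada0 /dev/ada1

# Verify the ashift that was actually chosen for the vdev.
zdb -C mypool | grep ashift
```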
Forcing 4 KB blocks is also useful on pools where disk upgrades are planned. Future disks are likely to use 4 KB sectors, and <varname>ashift</varname> values cannot be changed after a pool is created.
In some specific cases, the smaller 512-byte block size might be preferable. When used with 512-byte disks for databases, or as storage for virtual machines, less data is transferred during small random reads. This can provide better performance, especially when using a smaller <acronym>ZFS</acronym> record size.
<emphasis><varname>vfs.zfs.prefetch_disable</varname></emphasis> - Disable prefetch. A value of <literal>0</literal> is enabled and <literal>1</literal> is disabled. The default is <literal>0</literal>, unless the system has less than 4 GB of <acronym>RAM</acronym>. Prefetch works by reading larger blocks than were requested into the <link linkend="zfs-term-arc"><acronym>ARC</acronym></link> in hopes that the data will be needed soon. If the workload has a large number of random reads, disabling prefetch may actually improve performance by reducing unnecessary reads. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.vdev.trim_on_init</varname></emphasis> - Control whether new devices added to the pool have the <literal>TRIM</literal> command run on them. This ensures the best performance and longevity for <acronym>SSD</acronym>s, but takes extra time. If the device has already been secure erased, disabling this setting will make the addition of the new device faster. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.vdev.max_pending</varname></emphasis> - Limit the number of pending I/O requests per device. A higher value will keep the device command queue full and may give higher throughput. A lower value will reduce latency. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.top_maxinflight</varname></emphasis> - Maximum number of outstanding I/Os per top-level <link linkend="zfs-term-vdev">vdev</link>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <link linkend="zfs-term-vdev-mirror">mirror</link>, <link linkend="zfs-term-vdev-raidz">RAID-Z</link>, or other vdev independently. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.l2arc_write_max</varname></emphasis> - Limit the amount of data written to the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link> per second. This tunable is designed to extend the longevity of <acronym>SSD</acronym>s by limiting the amount of data written to the device. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.l2arc_write_boost</varname></emphasis> - The value of this tunable is added to <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> and increases the write speed to the <acronym>SSD</acronym> until the first block is evicted from the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. This <quote>Turbo Warmup Phase</quote> is designed to reduce the performance loss from an empty <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link> after a reboot. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.scrub_delay</varname></emphasis> - Number of ticks to delay between each I/O during a <link linkend="zfs-term-scrub"><command>scrub</command></link>. To ensure that a <command>scrub</command> does not interfere with the normal operation of the pool, if any other <acronym>I/O</acronym> is happening the <command>scrub</command> will delay between each command. This value controls the limit on the total <acronym>IOPS</acronym> (I/Os Per Second) generated by the <command>scrub</command>. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is <literal>4</literal>, resulting in a limit of: 1000 ticks/sec / 4 = 250 <acronym>IOPS</acronym>. Using a value of <replaceable>20</replaceable> would give a limit of: 1000 ticks/sec / 20 = 50 <acronym>IOPS</acronym>. The speed of <command>scrub</command> is only limited when there has been recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
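The arithmetic behind the effective limit can be checked directly, since the limit is simply <varname>kern.hz</varname> divided by the delay in ticks:

```shell
# Effective scrub IOPS limit = kern.hz / vfs.zfs.scrub_delay.
hz=1000            # kern.hz default: 1000 ticks per second

scrub_delay=4      # default delay
echo $((hz / scrub_delay))    # prints 250 (IOPS)

scrub_delay=20     # a more conservative delay
echo $((hz / scrub_delay))    # prints 50 (IOPS)
```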
<emphasis><varname>vfs.zfs.resilver_delay</varname></emphasis> - Number of milliseconds of delay inserted between each I/O during a <link linkend="zfs-term-resilver">resilver</link>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening the resilver will delay between each command. This value controls the limit of total <acronym>IOPS</acronym> (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 <acronym>IOPS</acronym>. Returning the pool to an <link linkend="zfs-term-online">Online</link> state may be more important if another device failing could <link linkend="zfs-term-faulted">Fault</link> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.scan_idle</varname></emphasis> - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle the rate limiting for <link linkend="zfs-term-scrub"><command>scrub</command></link> and <link linkend="zfs-term-resilver">resilver</link> are disabled. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.txg.timeout</varname></emphasis> - Maximum number of seconds between <link linkend="zfs-term-txg">transaction group</link>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<acronym>ZFS</acronym> on i386
Some of the features provided by <acronym>ZFS</acronym> are memory intensive, and may require tuning for maximum efficiency on systems with limited <acronym>RAM</acronym>.
Memory
As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended RAM requirements.
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB.
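The divide-by-four rule can be verified with simple arithmetic; 2 GB is 2048 MB:

```shell
# KVA_PAGES = desired kernel address space in MB, divided by 4.
desired_mb=2048                 # 2 GB of kernel address space
echo $((desired_mb / 4))        # prints 512, the KVA_PAGES value above
```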
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
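The exact option list from the test is not reproduced in this excerpt. As a sketch only, the tunables typically lowered on such a memory-constrained system look like this; every value here is an illustrative assumption, not the tested configuration:

```shell
# /boot/loader.conf -- illustrative values only, not the tested set
vm.kmem_size="330M"             # cap kmem address space
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"           # keep the ARC small
vfs.zfs.vdev.cache.size="5M"    # per-device read cache
```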
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href=""/>.
Additional Resources
<link xlink:href="">OpenZFS</link> <link xlink:href="">OpenZFS</link>
<link xlink:href="">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link> <link xlink:href="">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link> <link xlink:href="">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>