<emphasis><varname>vfs.zfs.l2arc_write_boost</varname></emphasis> - The value of this tunable is added to <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> and increases the write speed to the <acronym>SSD</acronym> until the first block is evicted from the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. This <quote>Turbo Warmup Phase</quote> is designed to reduce the performance loss from an empty <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link> after a reboot. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
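For example, the current limits can be read, and the warmup boost raised, with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>. The 64 MB figure (values are in bytes) is purely illustrative, not a recommendation:

# sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost
# sysctl vfs.zfs.l2arc_write_boost=67108864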
<emphasis><varname>vfs.zfs.scrub_delay</varname></emphasis> - Number of ticks to delay between each I/O during a <link linkend="zfs-term-scrub"><command>scrub</command></link>. To ensure that a <command>scrub</command> does not interfere with the normal operation of the pool, if any other <acronym>I/O</acronym> is happening the <command>scrub</command> will delay between each command. This value controls the limit on the total <acronym>IOPS</acronym> (I/Os Per Second) generated by the <command>scrub</command>. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is <literal>4</literal>, resulting in a limit of: 1000 ticks/sec / 4 = 250 <acronym>IOPS</acronym>. Using a value of <replaceable>20</replaceable> would give a limit of: 1000 ticks/sec / 20 = 50 <acronym>IOPS</acronym>. The speed of <command>scrub</command> is only limited when there has been recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
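For example, to apply the 50 <acronym>IOPS</acronym> limit described above on a system with the default <varname>kern.hz</varname> of 1000:

# sysctl vfs.zfs.scrub_delay=20
vfs.zfs.scrub_delay: 4 -> 20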
<emphasis><varname>vfs.zfs.resilver_delay</varname></emphasis> - Number of ticks to delay between each I/O during a <link linkend="zfs-term-resilver">resilver</link>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening the resilver will delay between each command. This value controls the limit on the total <acronym>IOPS</acronym> (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 <acronym>IOPS</acronym>. Returning the pool to an <link linkend="zfs-term-online">Online</link> state may be more important if another device failing could <link linkend="zfs-term-faulted">Fault</link> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
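For example, when the failure of another device would fault the pool, the resilver can be given full priority, then returned to the default once the pool is healthy again:

# sysctl vfs.zfs.resilver_delay=0
vfs.zfs.resilver_delay: 2 -> 0
# sysctl vfs.zfs.resilver_delay=2
vfs.zfs.resilver_delay: 0 -> 2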
<emphasis><varname>vfs.zfs.scan_idle</varname></emphasis> - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle, the rate limiting for <link linkend="zfs-term-scrub"><command>scrub</command></link> and <link linkend="zfs-term-resilver">resilver</link> is disabled. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.txg.timeout</varname></emphasis> - Maximum number of seconds between <link linkend="zfs-term-txg">transaction group</link>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
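As with the other tunables above, the value can be changed at runtime and, if desired, made persistent across reboots via <filename>/etc/sysctl.conf</filename>. The 10-second figure here is illustrative only:

# sysctl vfs.zfs.txg.timeout=10
vfs.zfs.txg.timeout: 5 -> 10
# echo 'vfs.zfs.txg.timeout=10' >> /etc/sysctl.conf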
<acronym>ZFS</acronym> on i386
Some of the features provided by <acronym>ZFS</acronym> are memory intensive, and may require tuning for maximum efficiency on systems with limited <acronym>RAM</acronym>.
Memory
As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended amount of RAM.
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB.
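The same arithmetic applies to other targets. As an illustrative sketch, a 1.5 GB kernel address space would use 1536 MB / 4 = 384:

options KVA_PAGES=384	# 1536 MB / 4 = 384, for 1.5 GB of kernel address space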
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide"/>.
Additional Resources
<link xlink:href="http://open-zfs.org">OpenZFS</link> <link xlink:href="http://open-zfs.org">OpenZFS</link>
<link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link> <link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link> <link xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>
<link xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity and Integrity</link> <link xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel Blog - <acronym>ZFS</acronym> Raidz 的性能、容量和完整性</link>
<acronym>ZFS</acronym> Features and Terminology
<acronym>ZFS</acronym> is a fundamentally different file system because it is more than just a file system. <acronym>ZFS</acronym> combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and having the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, <acronym>ZFS</acronym> is able to overcome previous limitations that prevented <acronym>RAID</acronym> groups from growing. Each top level device in a pool is called a <emphasis>vdev</emphasis>, which can be a simple disk or a <acronym>RAID</acronym> transformation such as a mirror or <acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file systems (called <emphasis>datasets</emphasis>) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmented across the partitions.
pool
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool.
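As a minimal sketch with a hypothetical disk and pool name, a pool is created with <command>zpool create</command>, and its identifying properties can then be read back:

# zpool create mypool /dev/ada1
# zpool get guid,version mypool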
vdev Types
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, you should not use an entire disk as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time and because there is no place to put in boot code.
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size.
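A minimal sketch of a file-backed test pool, using illustrative paths and honoring the 128 MB minimum vdev size:

# truncate -s 256m /tmp/zfsfile0
# zpool create testpool /tmp/zfsfile0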
<emphasis>Mirror</emphasis> - When creating a mirror, specify the <literal>mirror</literal> keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices; all data will be written to all member devices. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.
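For example, a two-way mirror over two hypothetical disks, which survives the loss of either member:

# zpool create tank mirror /dev/ada1 /dev/ada2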
