(itstool) path: sect1/title
Additional Resources
<emphasis><varname>vfs.zfs.resilver_delay</varname></emphasis> - Number of milliseconds of delay inserted between each I/O during a <link linkend="zfs-term-resilver">resilver</link>. To ensure that a resilver does not interfere with the normal operation of the pool, if any other I/O is happening the resilver will delay between each command. This value controls the limit of total <acronym>IOPS</acronym> (I/Os Per Second) generated by the resilver. The granularity of the setting is determined by the value of <varname>kern.hz</varname> which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective <acronym>IOPS</acronym> limit. The default value is 2, resulting in a limit of: 1000 ticks/sec / 2 = 500 <acronym>IOPS</acronym>. Returning the pool to an <link linkend="zfs-term-online">Online</link> state may be more important if another device failing could <link linkend="zfs-term-faulted">Fault</link> the pool, causing data loss. A value of 0 will give the resilver operation the same priority as other operations, speeding the healing process. The speed of resilver is only limited when there has been other recent activity on the pool, as determined by <link linkend="zfs-advanced-tuning-scan_idle"><varname>vfs.zfs.scan_idle</varname></link>. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
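The arithmetic above can be checked directly; a minimal sketch using the default values quoted in the text:

```shell
# Effective resilver IOPS limit = kern.hz / vfs.zfs.resilver_delay
kern_hz=1000        # default kernel tick rate (kern.hz)
resilver_delay=2    # default vfs.zfs.resilver_delay
echo "$((kern_hz / resilver_delay)) IOPS"
# On a live FreeBSD system the delay itself is changed with, e.g.:
#   sysctl vfs.zfs.resilver_delay=0
```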
<emphasis><varname>vfs.zfs.scan_idle</varname></emphasis> - Number of milliseconds since the last operation before the pool is considered idle. When the pool is idle, the rate limiting for <link linkend="zfs-term-scrub"><command>scrub</command></link> and <link linkend="zfs-term-resilver">resilver</link> is disabled. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
<emphasis><varname>vfs.zfs.txg.timeout</varname></emphasis> - Maximum number of seconds between <link linkend="zfs-term-txg">transaction group</link>s. The current transaction group will be written to the pool and a fresh transaction group started if this amount of time has elapsed since the previous transaction group. A transaction group may be triggered earlier if enough data is written. The default value is 5 seconds. A larger value may improve read performance by delaying asynchronous writes, but this may cause uneven performance when the transaction group is written. This value can be adjusted at any time with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
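As with the other <varname>vfs.zfs</varname> sysctls, the timeout can be changed at runtime or persisted across reboots; a sketch (the value 10 is illustrative, not a recommendation):

```shell
# Runtime change (as root on the FreeBSD host):
#   sysctl vfs.zfs.txg.timeout=10
# Persistent across reboots, in /etc/sysctl.conf:
#   vfs.zfs.txg.timeout=10
```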
<acronym>ZFS</acronym> on i386
Some of the features provided by <acronym>ZFS</acronym> are memory intensive, and may require tuning for maximum efficiency on systems with limited <acronym>RAM</acronym>.
Memory
As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended amount of RAM.
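The rules of thumb above can be turned into a rough sizing calculation; a sketch with illustrative pool sizes:

```shell
# 1 GB of RAM per TB of storage; 5 GB of RAM per TB of deduplicated storage.
plain_tb=8     # storage without deduplication (illustrative)
dedup_tb=2     # storage to be deduplicated (illustrative)
echo "$((plain_tb * 1 + dedup_tb * 5)) GB RAM recommended"
```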
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB.
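The division described above is easy to verify; a sketch for the 2 GB example:

```shell
# KVA_PAGES = desired kernel address space (in MB) / 4
desired_mb=2048    # 2 GB of kernel address space
echo "options KVA_PAGES=$((desired_mb / 4))"
```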
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
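The specific option values are not reproduced in this excerpt. The tunables involved are of this general shape; the names are real loader tunables, but the values below are illustrative placeholders, not the tested settings:

```
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
```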
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href=""/>.
Additional Resources
<link xlink:href="">OpenZFS</link> <link xlink:href="">OpenZFS</link>
<link xlink:href="">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link> <link xlink:href="">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link> <link xlink:href="">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>
<link xlink:href="">Calomel Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity and Integrity</link> <link xlink:href="">Calomel Blog - <acronym>ZFS</acronym> Raidz 的性能、容量和完整性</link>
<acronym>ZFS</acronym> Features and Terminology
<acronym>ZFS</acronym> is a fundamentally different file system because it is more than just a file system. <acronym>ZFS</acronym> combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and having the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, <acronym>ZFS</acronym> is able to overcome previous limitations that prevented <acronym>RAID</acronym> groups being able to grow. Each top level device in a pool is called a <emphasis>vdev</emphasis>, which can be a simple disk or a <acronym>RAID</acronym> transformation such as a mirror or <acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file systems (called <emphasis>datasets</emphasis>) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmented across the partitions.
pool
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool.
vdev Types
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, you should not use an entire disk as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time and because there is no place to put boot code.
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size.
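A file-backed pool of the kind described can be sketched as follows; the paths are illustrative, and the <command>zpool</command> steps must be run on a FreeBSD host with ZFS:

```shell
# Create two backing files of the minimum vdev size (128 MB each)
truncate -s 128M /tmp/zfile0 /tmp/zfile1
# Build a throwaway pool from them, then tear it down when finished:
#   zpool create testpool /tmp/zfile0 /tmp/zfile1
#   zpool destroy testpool
rm /tmp/zfile0 /tmp/zfile1
```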
<emphasis>Mirror</emphasis> - When creating a mirror, specify the <literal>mirror</literal> keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices; all data will be written to all member devices. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.
A regular single disk vdev can be upgraded to a mirror vdev at any time with <command>zpool <link linkend="zfs-zpool-attach">attach</link></command>.
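The upgrade path just described looks like this in outline; the pool and device names are illustrative, and the commands are run on the FreeBSD host:

```shell
# Start with a single-disk pool:
#   zpool create mypool /dev/ada1
# Later, attach a second disk to turn the vdev into a two-way mirror:
#   zpool attach mypool /dev/ada1 /dev/ada2
# ZFS then resilvers the new member in the background.
```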
<emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard <acronym>RAID-5</acronym> that offers better distribution of parity and eliminates the <quote><acronym>RAID-5</acronym> write hole</quote> in which the data and parity information become inconsistent after an unexpected restart. <acronym>ZFS</acronym> supports three levels of <acronym>RAID-Z</acronym> which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational.
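Creating the three variants described is a one-keyword difference in each case; pool and device names are illustrative, and each variant needs at least one more disk than its parity count:

```shell
#   zpool create mypool raidz1 /dev/ada1 /dev/ada2 /dev/ada3              # 1 parity disk
#   zpool create mypool raidz2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4   # 2 parity disks
#   zpool create mypool raidz3 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5  # 3 parity disks
```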