A pool that is no longer needed can be destroyed so that the disks can be reused. Destroying a pool involves first unmounting all of the datasets in that pool. If the datasets are in use, the unmount operation will fail and the pool will not be destroyed. The destruction of the pool can be forced with <option>-f</option>, but this can cause undefined behavior in applications which had open files on those datasets. 不再需要的存储池可以销毁,以便磁盘能够重新利用。销毁存储池需要先卸载该存储池中的所有数据集。若数据集正在使用中,卸载操作会失败,存储池也不会被销毁。可以使用 <option>-f</option> 选项强制销毁存储池,但这可能导致在这些数据集上打开了文件的应用程序出现未定义的行为。
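A minimal session illustrating the above (the pool name <replaceable>mypool</replaceable> is illustrative; <option>-f</option> is only needed when datasets are still in use):

<prompt>#</prompt> <userinput>zpool destroy <replaceable>mypool</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool destroy -f <replaceable>mypool</replaceable></userinput>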
Adding and Removing Devices 添加和移除设备
There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with <command>zpool attach</command>, or adding vdevs to the pool with <command>zpool add</command>. Only some <link linkend="zfs-term-vdev">vdev types</link> allow disks to be added to the vdev after creation. 添加磁盘到 zpool 有两种情况:用 <command>zpool attach</command> 将磁盘加入现有的 vdev,或用 <command>zpool add</command> 向存储池添加新的 vdev。只有部分 <link linkend="zfs-term-vdev">vdev 类型</link>允许在 vdev 建立之后加入磁盘。
A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <link linkend="zfs-term-copies">copies</link> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or <acronym>RAID-Z</acronym>. Starting with a pool consisting of a single disk vdev, <command>zpool attach</command> can be used to add an additional disk to the vdev, creating a mirror. <command>zpool attach</command> can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second, <command>gpart backup</command> and <command>gpart restore</command> can be used to make this process easier. 由单个磁盘建立的存储池缺乏冗余,可以检测到数据损坏但无法修复,因为数据没有其他副本。<link linkend="zfs-term-copies">copies</link> 属性也许能从较小的故障(如磁盘坏道)中恢复,但无法提供与镜像或 <acronym>RAID-Z</acronym> 同等级别的保护。对于由单个磁盘 vdev 组成的存储池,可以用 <command>zpool attach</command> 向该 vdev 加入一个磁盘,组成镜像。<command>zpool attach</command> 也可用来向镜像组加入额外的磁盘,以提高冗余和读取性能。若存储池使用的磁盘已有分区,应将第一个磁盘的分区布局复制到第二个磁盘上,使用 <command>gpart backup</command> 与 <command>gpart restore</command> 可以让这个过程更简单。
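The partition layout can be cloned from one disk to another along these lines (a sketch; the device names <replaceable>ada0</replaceable> and <replaceable>ada1</replaceable> are illustrative, and <option>-F</option> destroys any existing partition scheme on the target):

<prompt>#</prompt> <userinput>gpart backup <replaceable>ada0</replaceable> | gpart restore -F <replaceable>ada1</replaceable></userinput>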
Upgrade the single disk (stripe) vdev <replaceable>ada0p3</replaceable> to a mirror by attaching <replaceable>ada1p3</replaceable>: 通过加入 <replaceable>ada1p3</replaceable>,将单一磁盘(stripe)vdev <replaceable>ada0p3</replaceable> 升级为镜像:
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
ada0p3 ONLINE 0 0 0

errors: No known data errors
<prompt>#</prompt> <userinput>zpool attach <replaceable>mypool</replaceable> <replaceable>ada0p3</replaceable> <replaceable>ada1p3</replaceable></userinput>
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'mypool', you may need to update
boot code on newly attached disk 'ada1p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
<prompt>#</prompt> <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada1</replaceable></userinput>
bootcode written to ada1
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Fri May 30 08:19:19 2014
527M scanned out of 781M at 47.9M/s, 0h0m to go
527M resilvered, 67.53% done
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0 (resilvering)

errors: No known data errors
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:15:58 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0

errors: No known data errors
When adding disks to the existing vdev is not an option, as for <acronym>RAID-Z</acronym>, an alternative method is to add another vdev to the pool. Additional vdevs provide higher performance, distributing writes across the vdevs. Each vdev is responsible for providing its own redundancy. It is possible, but discouraged, to mix vdev types, like <literal>mirror</literal> and <literal>RAID-Z</literal>. Adding a non-redundant vdev to a pool containing mirror or <acronym>RAID-Z</acronym> vdevs risks the data on the entire pool. Writes are distributed, so the failure of the non-redundant disk will result in the loss of a fraction of every block that has been written to the pool. 若无法向现有的 vdev 加入磁盘(例如 <acronym>RAID-Z</acronym>),可以选择另一种方式:向存储池加入另一个 vdev。额外的 vdev 可以提供更高的性能,写入会分散到各个 vdev 上。每个 vdev 负责提供自己的冗余。可以混用不同的 vdev 类型,例如 <literal>mirror</literal> 和 <literal>RAID-Z</literal>,但不建议这么做。向含有 mirror 或 <acronym>RAID-Z</acronym> vdev 的存储池加入一个无冗余的 vdev,会使整个存储池的数据面临风险。由于写入是分散的,该无冗余磁盘发生故障,将导致写入存储池的每个块都遗失一部分。
Data is striped across each of the vdevs. For example, with two mirror vdevs, this is effectively a <acronym>RAID</acronym> 10 that stripes writes across two sets of mirrors. Space is allocated so that each vdev reaches 100% full at the same time. There is a performance penalty if the vdevs have different amounts of free space, as a disproportionate amount of the data is written to the less full vdev. 数据会以条带方式分布在各个 vdev 上。例如,有两个 mirror vdev 时,实际上相当于 <acronym>RAID</acronym> 10,写入会条带化地分布在两组镜像之间。空间分配会使各个 vdev 同时达到 100% 的用量。若各 vdev 的可用空间量不同,则会有性能损失,因为会有不成比例的数据写入使用量较少的 vdev。
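Per-vdev capacity and allocation can be inspected with the <option>-v</option> flag of <command>zpool list</command>, which is one way to spot vdevs with uneven free space (output columns vary between versions; pool name illustrative):

<prompt>#</prompt> <userinput>zpool list -v <replaceable>mypool</replaceable></userinput>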
When attaching additional devices to a boot pool, remember to update the bootcode. 向可引导的存储池连接额外设备时,记得更新 Bootcode。
Attach a second mirror group (<filename>ada2p3</filename> and <filename>ada3p3</filename>) to the existing mirror: 连接第二个mirror群组(<filename>ada2p3</filename> 和 <filename>ada3p3</filename>)到现有的mirror:
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Fri May 30 08:19:35 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0

errors: No known data errors
<prompt>#</prompt> <userinput>zpool add <replaceable>mypool</replaceable> mirror <replaceable>ada2p3</replaceable> <replaceable>ada3p3</replaceable></userinput>
<prompt>#</prompt> <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada2</replaceable></userinput>
bootcode written to ada2
<prompt>#</prompt> <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada3</replaceable></userinput>
bootcode written to ada3
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0

errors: No known data errors
Currently, vdevs cannot be removed from a pool, and disks can only be removed from a mirror if there is enough remaining redundancy. If only one disk in a mirror group remains, it ceases to be a mirror and reverts to being a stripe, risking the entire pool if that remaining disk fails. 目前无法从存储池移除 vdev,且只有在剩余冗余足够的情况下,才能从 mirror 中移除磁盘。若 mirror 群组中只剩下一个磁盘,它便不再是 mirror,而还原为 stripe;若剩下的那个磁盘故障,整个存储池都会受到影响。
Remove a disk from a three-way mirror group: 从一个三方 mirror 群组移除一个磁盘:
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0

errors: No known data errors
<prompt>#</prompt> <userinput>zpool detach <replaceable>mypool</replaceable> <replaceable>ada2p3</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Fri May 30 08:29:51 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0

errors: No known data errors
Checking the Status of a Pool 检查存储池状态
Pool status is important. If a drive goes offline or a read, write, or checksum error is detected, the corresponding error count increases. The <command>status</command> output shows the configuration and status of each device in the pool and the status of the entire pool. Actions that need to be taken and details about the last <link linkend="zfs-zpool-scrub"><command>scrub</command></link> are also shown. 存储池的状态很重要。若有磁盘离线,或侦测到读取、写入或校验和(Checksum)错误,对应的错误计数便会增加。<command>status</command> 的输出会显示存储池中每个设备的配置与状态,以及整个存储池的状态。需要采取的措施,以及有关最近一次清理(<link linkend="zfs-zpool-scrub"><command>scrub</command></link>)的详细信息也会一并显示。
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: scrub repaired 0 in 2h25m with 0 errors on Sat Sep 14 04:25:50 2013
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0
ada3p3 ONLINE 0 0 0
ada4p3 ONLINE 0 0 0
ada5p3 ONLINE 0 0 0

errors: No known data errors
Clearing Errors 排除错误
When an error is detected, the read, write, or checksum counts are incremented. The error message can be cleared and the counts reset with <command>zpool clear <replaceable>mypool</replaceable></command>. Clearing the error state can be important for automated scripts that alert the administrator when the pool encounters an error. Further errors may not be reported if the old errors are not cleared. 当侦测到错误时,读取、写入或校验和(Checksum)计数便会增加。使用 <command>zpool clear <replaceable>mypool</replaceable></command> 可以清除错误讯息并重置计数。对于在存储池发生错误时通过自动化脚本向管理员告警的场景,清除错误状态非常重要:旧的错误若未清除,后续的错误可能不会被报告。
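For example, after a transient fault such as a loose cable has been fixed, the counters might be reset pool-wide or for a single device (pool and device names are illustrative):

<prompt>#</prompt> <userinput>zpool clear <replaceable>mypool</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool clear <replaceable>mypool</replaceable> <replaceable>ada1p3</replaceable></userinput>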
Replacing a Functioning Device 更换正在运行的设备
There are a number of situations where it may be desirable to replace one disk with a different disk. When replacing a working disk, the process keeps the old disk online during the replacement. The pool never enters a <link linkend="zfs-term-degraded">degraded</link> state, reducing the risk of data loss. <command>zpool replace</command> copies all of the data from the old disk to the new one. After the operation completes, the old disk is disconnected from the vdev. If the new disk is larger than the old disk, it may be possible to grow the zpool, using the new space. See <link linkend="zfs-zpool-online">Growing a Pool</link>. 在许多情况下,可能需要将一个磁盘更换为另一个磁盘。更换运作中的磁盘时,此过程会在更换期间保持旧磁盘在线,存储池不会进入降级(<link linkend="zfs-term-degraded">degraded</link>)状态,从而降低数据遗失的风险。<command>zpool replace</command> 会把旧磁盘的所有数据复制到新磁盘,操作完成之后旧磁盘便会与 vdev 断开连接。若新磁盘容量比旧磁盘大,也许可以扩展存储池来使用新增的空间,请参考 <link linkend="zfs-zpool-online">扩展存储池</link>。
Replace a functioning device in the pool: 更换存储池中正在运行的设备:
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0

errors: No known data errors
<prompt>#</prompt> <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1p3</replaceable> <replaceable>ada2p3</replaceable></userinput>
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk 'ada2p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
<prompt>#</prompt> <userinput>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <replaceable>ada2</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 2 14:21:35 2014
604M scanned out of 781M at 46.5M/s, 0h0m to go
604M resilvered, 77.39% done
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
replacing-1 ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0 (resilvering)

errors: No known data errors
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:21:52 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0

errors: No known data errors
Dealing with Failed Devices 处理故障设备
When a disk in a pool fails, the vdev to which the disk belongs enters the <link linkend="zfs-term-degraded">degraded</link> state. All of the data is still available, but performance may be reduced because missing data must be calculated from the available redundancy. To restore the vdev to a fully functional state, the failed physical device must be replaced. <acronym>ZFS</acronym> is then instructed to begin the <link linkend="zfs-term-resilver">resilver</link> operation. Data that was on the failed device is recalculated from available redundancy and written to the replacement device. After completion, the vdev returns to <link linkend="zfs-term-online">online</link> status. 当存储池中的磁盘故障时,该磁盘所属的 vdev 便会进入降级(<link linkend="zfs-term-degraded">degraded</link>)状态。所有数据仍然可用,但性能可能会降低,因为遗失的数据必须从可用的冗余数据计算得出。要将 vdev 恢复到完整运作的状态,必须更换故障的物理设备,然后指示 <acronym>ZFS</acronym> 开始修复(<link linkend="zfs-term-resilver">resilver</link>)操作:故障设备上的数据会从可用的冗余数据重新计算,并写入替换设备。完成后,vdev 便会返回在线(<link linkend="zfs-term-online">online</link>)状态。
If the vdev does not have any redundancy, or if multiple devices have failed and there is not enough redundancy to compensate, the pool enters the <link linkend="zfs-term-faulted">faulted</link> state. If a sufficient number of devices cannot be reconnected to the pool, the pool becomes inoperative and data must be restored from backups. 若 vdev 没有任何冗余,或有多个设备故障且没有足够的冗余可以补偿,存储池便会进入故障(<link linkend="zfs-term-faulted">faulted</link>)状态。若无法将足够数量的设备重新连接到存储池,存储池将无法运作,必须从备份恢复数据。
When replacing a failed disk, the name of the failed disk is replaced with the <acronym>GUID</acronym> of the device. A new device name parameter for <command>zpool replace</command> is not required if the replacement device has the same device name. 更换故障的磁盘时,故障磁盘的名称会被设备的 <acronym>GUID</acronym> 取代。若替换设备使用相同的设备名称,则 <command>zpool replace</command> 不需要加上新设备名称参数。
Replace a failed disk using <command>zpool replace</command>: 使用 <command>zpool replace</command> 更换故障的磁盘:
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: none requested
config:

NAME STATE READ WRITE CKSUM
mypool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
ada0p3 ONLINE 0 0 0
316502962686821739 UNAVAIL 0 0 0 was /dev/ada1p3

errors: No known data errors
<prompt>#</prompt> <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>316502962686821739</replaceable> <replaceable>ada2p3</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 2 14:52:21 2014
641M scanned out of 781M at 49.3M/s, 0h0m to go
640M resilvered, 82.04% done
config:

NAME STATE READ WRITE CKSUM
mypool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
ada0p3 ONLINE 0 0 0
replacing-1 UNAVAIL 0 0 0
15732067398082357289 UNAVAIL 0 0 0 was /dev/ada1p3/old
ada2p3 ONLINE 0 0 0 (resilvering)

errors: No known data errors
<prompt>#</prompt> <userinput>zpool status</userinput>
pool: mypool
state: ONLINE
scan: resilvered 781M in 0h0m with 0 errors on Mon Jun 2 14:52:38 2014
config:

NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
ada2p3 ONLINE 0 0 0

errors: No known data errors
Scrubbing a Pool 清理存储池
It is recommended that pools be <link linkend="zfs-term-scrub">scrubbed</link> regularly, ideally at least once every month. The <command>scrub</command> operation is very disk-intensive and will reduce performance while running. Avoid high-demand periods when scheduling <command>scrub</command> or use <link linkend="zfs-advanced-tuning-scrub_delay"><varname>vfs.zfs.scrub_delay</varname></link> to adjust the relative priority of the <command>scrub</command> to prevent it interfering with other workloads. 建议定期清理(<link linkend="zfs-term-scrub">scrub</link>)存储池,最好至少每月一次。<command>scrub</command> 操作对磁盘 I/O 非常密集,执行时会降低性能。安排 <command>scrub</command> 时应避开使用高峰期,或使用 <link linkend="zfs-advanced-tuning-scrub_delay"><varname>vfs.zfs.scrub_delay</varname></link> 调整 <command>scrub</command> 的相对优先级,避免影响其他工作。
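A scrub is started manually and its progress followed with <command>zpool status</command>; a running scrub can be stopped with <option>-s</option> (pool name illustrative):

<prompt>#</prompt> <userinput>zpool scrub <replaceable>mypool</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool status <replaceable>mypool</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool scrub -s <replaceable>mypool</replaceable></userinput>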
