
Translation

Context English Chinese (Simplified) (zh_CN) State
pool 存储池(Pool)
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool. 存储池(<emphasis>pool</emphasis>)是 <acronym>ZFS</acronym> 最基础的组成部分。存储池由一个或多个 vdev 组成,vdev 是实际存储数据的底层设备。在存储池之上可以创建一个或多个文件系统(数据集,dataset)或块设备(卷,volume),这些数据集和卷共享存储池中剩余的可用空间。每个存储池都由名称和 <acronym>GUID</acronym> 唯一标识。存储池可用的功能由存储池自身的 <acronym>ZFS</acronym> 版本号决定。
vdev Types vdev 类型(vdev Types)
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation. <emphasis>磁盘(Disk)</emphasis> - 最基本的 vdev 类型是标准的块设备,可以是一整个磁盘(例如 <filename><replaceable>/dev/ada0</replaceable></filename> 或 <filename><replaceable>/dev/da0</replaceable></filename>)或一个分区(<filename><replaceable>/dev/ada0p3</replaceable></filename>)。在 FreeBSD 上,使用分区代替整个磁盘没有性能损失,这一点与 Solaris 文档中的建议不同。
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, you should not use an entire disk as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time, and because there is no place to put boot code. 强烈建议不要将整个磁盘用作可引导存储池的一部分,因为这可能会使存储池无法启动。同样,不应将整个磁盘用作镜像或<acronym>RAID-Z</acronym> vdev 的一部分。这是因为在引导时无法可靠地确定未分区磁盘的大小,并且无法放入引导代码。
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size. <emphasis>文件(File)</emphasis> - 除了磁盘外,<acronym>ZFS</acronym> 存储池还可以使用普通文件作为后备存储,这在测试与实验时特别有用。在 <command>zpool create</command> 时使用文件的完整路径作为设备路径。所有 vdev 必须至少有 128 MB 的大小。
<emphasis>Mirror</emphasis> - When creating a mirror, specify the <literal>mirror</literal> keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices; all data will be written to all member devices. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data. <emphasis>镜像(Mirror)</emphasis> - 要创建镜像,需使用 <literal>mirror</literal> 关键字,后面跟上组成该镜像的成员设备列表。一个镜像由两个或多个设备组成,所有数据都会写入到所有成员设备。镜像 vdev 能保存的数据量以其最小的成员设备为限。镜像 vdev 只要还有一个成员设备正常,即使其余成员全部故障也不会丢失任何数据。
A regular single disk vdev can be upgraded to a mirror vdev at any time with <command>zpool <link linkend="zfs-zpool-attach">attach</link></command>. 普通的单磁盘 vdev 可以随时使用 <command>zpool <link linkend="zfs-zpool-attach">attach</link></command> 升级为镜像 vdev。
<emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard <acronym>RAID-5</acronym> that offers better distribution of parity and eliminates the <quote><acronym>RAID-5</acronym> write hole</quote> in which the data and parity information become inconsistent after an unexpected restart. <acronym>ZFS</acronym> supports three levels of <acronym>RAID-Z</acronym> which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational. <emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> 实现了 <acronym>RAID-Z</acronym>,它是标准 <acronym>RAID-5</acronym> 的变体,能更好地分布奇偶校验(Parity)数据,并消除了 <quote><acronym>RAID-5</acronym> write hole</quote> 问题(即意外重启后数据与奇偶校验信息不一致)。<acronym>ZFS</acronym> 支持三个级别的 <acronym>RAID-Z</acronym>,以不同程度的可用空间换取不同程度的冗余。这些类型按阵列中奇偶校验设备的数量,即存储池在保持可用的前提下能容许故障的磁盘数量,命名为 <acronym>RAID-Z1</acronym> 到 <acronym>RAID-Z3</acronym>。
In a <acronym>RAID-Z1</acronym> configuration with four disks, each 1 TB, usable storage is 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an additional disk goes offline before the faulted disk is replaced and resilvered, all data in the pool can be lost. 在由 4 个 1 TB 磁盘组成的 <acronym>RAID-Z1</acronym> 配置中,可用存储空间为 3 TB;当其中一个磁盘故障时,存储池仍可以降级(Degraded)模式运行。但若在故障磁盘被更换并完成重新同步(Resilver)之前又有一个磁盘离线,存储池中的所有数据都可能丢失。
In a <acronym>RAID-Z3</acronym> configuration with eight disks of 1 TB, the volume will provide 5 TB of usable space and still be able to operate with three faulted disks. <trademark>Sun</trademark> recommends no more than nine disks in a single vdev. If the configuration has more disks, it is recommended to divide them into separate vdevs and the pool data will be striped across them. 在由 8 个 1 TB 磁盘组成的 <acronym>RAID-Z3</acronym> 配置中,卷将提供 5 TB 的可用空间,且在 3 个磁盘故障的情况下仍可运行。<trademark>Sun</trademark> 建议单个 vdev 不要使用超过 9 个磁盘。若配置需要使用更多磁盘,建议将它们分成多个 vdev,存储池的数据会以条带方式分布到这些 vdev 上。
A configuration of two <acronym>RAID-Z2</acronym> vdevs consisting of 8 disks each would create something similar to a <acronym>RAID-60</acronym> array. A <acronym>RAID-Z</acronym> group's storage capacity is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in <acronym>RAID-Z1</acronym> have an effective size of approximately 3 TB, and an array of eight 1 TB disks in <acronym>RAID-Z3</acronym> will yield 5 TB of usable space. 由两个 <acronym>RAID-Z2</acronym> vdev(每个 vdev 由 8 个磁盘组成)构成的配置,类似于一个 <acronym>RAID-60</acronym> 阵列。<acronym>RAID-Z</acronym> 组的存储容量约等于最小磁盘的容量乘以非奇偶校验磁盘的数量:在 <acronym>RAID-Z1</acronym> 中,4 个 1 TB 磁盘的有效容量约为 3 TB;而在 <acronym>RAID-Z3</acronym> 中,8 个 1 TB 磁盘组成的阵列可提供 5 TB 的可用空间。
<emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must manually be configured to replace the failed device using <command>zfs replace</command>. <emphasis>热备(Spare)</emphasis> - <acronym>ZFS</acronym> 有一种特殊的伪 vdev 类型,用于跟踪可用的热备设备(Hot spare)。注意,加入的热备设备不会自动投入使用;需要手动使用 <command>zfs replace</command> 配置其替换故障的设备。
<emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also known as <acronym>ZFS</acronym> Intent Log (<link linkend="zfs-term-zil"><acronym>ZIL</acronym></link>) move the intent log from the regular pool devices to a dedicated device, typically an <acronym>SSD</acronym>. Having a dedicated log device can significantly improve the performance of applications with a high volume of synchronous writes, especially databases. Log devices can be mirrored, but <acronym>RAID-Z</acronym> is not supported. If multiple log devices are used, writes will be load balanced across them. <emphasis>日志(Log)</emphasis> - <acronym>ZFS</acronym> 日志设备,也称作 <acronym>ZFS</acronym> 意图日志(ZFS Intent Log,<link linkend="zfs-term-zil"><acronym>ZIL</acronym></link>),可将意图日志从常规存储池设备移到专用设备上,通常是一个 <acronym>SSD</acronym>。专用日志设备可以显著提升有大量同步写入的应用程序(特别是数据库)的性能。日志设备可以做成镜像,但不支持 <acronym>RAID-Z</acronym>;若使用多个日志设备,写入操作会在这些设备间进行负载均衡。
<emphasis>Cache</emphasis> - Adding a cache vdev to a pool will add the storage of the cache to the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss. <emphasis>缓存(Cache)</emphasis> - 向存储池加入缓存 vdev,会把该缓存的存储空间加入到 <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>。缓存设备无法做镜像;由于缓存设备只存储现有数据的额外副本,因此没有数据丢失的风险。
A pool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in the case of a <acronym>RAID</acronym> transform. When multiple vdevs are used, <acronym>ZFS</acronym> spreads data across the vdevs to increase performance and maximize usable space. <_:itemizedlist-1/> 存储池由一个或多个 vdev 组成,vdev 本身可以是单个磁盘,也可以是构成 <acronym>RAID</acronym> 变换的一组磁盘。当使用多个 vdev 时,<acronym>ZFS</acronym> 会把数据分散到各个 vdev 上,以提高性能并最大化可用空间。 <_:itemizedlist-1/>
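To make the vdev layouts described above concrete, the following is a minimal sketch of pool creation commands. The pool name mypool and the device names ada0 through ada4 are illustrative assumptions, not taken from the original text:

    # Create a pool from a single-disk vdev:
    zpool create mypool /dev/ada0
    # Create a pool from a two-way mirror vdev:
    zpool create mypool mirror /dev/ada0 /dev/ada1
    # Create a pool from a single-parity RAID-Z vdev:
    zpool create mypool raidz1 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
    # Upgrade an existing single-disk vdev to a mirror:
    zpool attach mypool /dev/ada0 /dev/ada1
    # Register a hot spare with the pool:
    zpool add mypool spare /dev/ada4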
Transaction Group (<acronym>TXG</acronym>) 交易群组(Transaction Group, <acronym>TXG</acronym>)
<emphasis>Open</emphasis> - When a new transaction group is created, it is in the open state, and accepts new writes. There is always a transaction group in the open state, however the transaction group may refuse new writes if it has reached a limit. Once the open transaction group has reached a limit, or the <link linkend="zfs-advanced-tuning-txg-timeout"><varname>vfs.zfs.txg.timeout</varname></link> has been reached, the transaction group advances to the next state. <emphasis>开放(Open)</emphasis> - 新的交易群组建立之后便处于开放状态,可以接受新的写入操作。任何时候都会有一个处于开放状态的交易群组,但交易群组可能会因到达上限而拒绝新的写入操作。一旦开放的交易群组到达上限,或到达 <link linkend="zfs-advanced-tuning-txg-timeout"><varname>vfs.zfs.txg.timeout</varname></link> 所设的时限,交易群组便会进入下一个状态。
<emphasis>Quiescing</emphasis> - A short state that allows any pending operations to finish while not blocking the creation of a new open transaction group. Once all of the transactions in the group have completed, the transaction group advances to the final state. <emphasis>静置中(Quiescing)</emphasis> - 一个短暂的状态,会等候任何未完成的操作完成,不会阻挡新开放的交易群组建立。一旦所有在群组中的交易完成,交易群组便会进入到最终状态。
<emphasis>Syncing</emphasis> - All of the data in the transaction group is written to stable storage. This process will in turn modify other data, such as metadata and space maps, that will also need to be written to stable storage. The process of syncing involves multiple passes. The first, all of the changed data blocks, is the biggest, followed by the metadata, which may take multiple passes to complete. Since allocating space for the data blocks generates new metadata, the syncing state cannot finish until a pass completes that does not allocate any additional space. The syncing state is also where <emphasis>synctasks</emphasis> are completed. Synctasks are administrative operations that modify the uberblock, such as creating or destroying snapshots and datasets. Once the sync state is complete, the transaction group in the quiescing state is advanced to the syncing state. <emphasis>同步中(Syncing)</emphasis> - 交易群组中的所有数据会被写入到稳定的存储空间。这个过程还会修改其他同样需要写入稳定存储空间的数据,如 Metadata 与空间映射表。同步的过程会经过多个循环:首先是所有更改的数据区块,这也是最大的部分;接着是 Metadata,这可能需要多个循环才能完成。由于为数据区块分配空间会产生新的 Metadata,同步中状态必须等到某个循环完成后不再分配任何额外空间才能结束。同步中状态也是完成 <emphasis>synctask</emphasis> 的阶段;synctask 是指会修改 uberblock 的管理操作,如建立或销毁快照与数据集。同步状态完成后,处于静置中状态的交易群组便会进入同步中状态。
Transaction Groups are the way changed blocks are grouped together and eventually written to the pool. Transaction groups are the atomic unit that <acronym>ZFS</acronym> uses to assert consistency. Each transaction group is assigned a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states: <_:itemizedlist-1/> All administrative functions, such as <link linkend="zfs-term-snapshot"><command>snapshot</command></link>, are written as part of the transaction group. When a synctask is created, it is added to the currently open transaction group, and that group is advanced as quickly as possible to the syncing state to reduce the latency of administrative commands. 交易群组是将更改的数据区块打包成组、最终一次性写入存储池的方式。交易群组是 <acronym>ZFS</acronym> 用来保证一致性的原子单位。每个交易群组会被分配一个唯一的 64 位连续编号。同一时间最多可以有三个活动中的交易群组,分别处于这三种状态之一:<_:itemizedlist-1/> 所有管理功能,如快照(<link linkend="zfs-term-snapshot"><command>snapshot</command></link>),都会作为交易群组的一部份写入。当 synctask 建立后,它会被加入到当前开放的交易群组,该群组会尽快进入同步中状态,以减少管理命令的延迟。
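As a hedged illustration of the timeout mentioned above, the open-state limit can be inspected and adjusted through sysctl(8); the value shown is an example, not a recommendation:

    # Read the current transaction group timeout, in seconds:
    sysctl vfs.zfs.txg.timeout
    # Raise it to 10 seconds (example value; larger groups sync less often):
    sysctl vfs.zfs.txg.timeout=10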
Adaptive Replacement Cache (<acronym>ARC</acronym>) 自适应替换缓存(<acronym>ARC</acronym>)
<acronym>ZFS</acronym> uses an Adaptive Replacement Cache (<acronym>ARC</acronym>), rather than a more traditional Least Recently Used (<acronym>LRU</acronym>) cache. An <acronym>LRU</acronym> cache is a simple list of items in the cache, sorted by when each object was most recently used. New items are added to the top of the list. When the cache is full, items from the bottom of the list are evicted to make room for more active objects. An <acronym>ARC</acronym> consists of four lists; the Most Recently Used (<acronym>MRU</acronym>) and Most Frequently Used (<acronym>MFU</acronym>) objects, plus a ghost list for each. These ghost lists track recently evicted objects to prevent them from being added back to the cache. This increases the cache hit ratio by avoiding objects that have a history of only being used occasionally. Another advantage of using both an <acronym>MRU</acronym> and <acronym>MFU</acronym> is that scanning an entire file system would normally evict all data from an <acronym>MRU</acronym> or <acronym>LRU</acronym> cache in favor of this freshly accessed content. With <acronym>ZFS</acronym>, there is also an <acronym>MFU</acronym> that only tracks the most frequently used objects, and the cache of the most commonly accessed blocks remains. <acronym>ZFS</acronym> 使用自适应替换缓存(Adaptive Replacement Cache,<acronym>ARC</acronym>),而不是传统的最近最少使用(Least Recently Used,<acronym>LRU</acronym>)缓存。<acronym>LRU</acronym> 缓存是缓存中一个简单的项目列表,按每个对象最近被使用的时间排序:新项目加到列表顶端;当缓存满了,便淘汰列表底端的项目,为更活跃的对象腾出空间。<acronym>ARC</acronym> 由四个列表组成:最近使用(Most Recently Used,<acronym>MRU</acronym>)与最常使用(Most Frequently Used,<acronym>MFU</acronym>)对象的列表,以及两者各自的幽灵列表(Ghost list)。幽灵列表跟踪最近被淘汰的对象,防止它们被重新加回缓存;通过避开那些历史上只被偶尔使用的对象,这可以提高缓存命中率。同时使用 <acronym>MRU</acronym> 与 <acronym>MFU</acronym> 的另一个优点是:扫描整个文件系统通常会把 <acronym>MRU</acronym> 或 <acronym>LRU</acronym> 缓存中的所有数据淘汰掉,以容纳这些新近访问的内容;而 <acronym>ZFS</acronym> 还有只跟踪最常使用对象的 <acronym>MFU</acronym>,最常被访问的数据区块的缓存因此得以保留。
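A small sketch of observing the ARC in practice on FreeBSD; the kstat sysctl names below are assumptions based on common FreeBSD ZFS builds and may differ by version:

    # Current ARC size, in bytes:
    sysctl kstat.zfs.misc.arcstats.size
    # Hit and miss counters, from which a hit ratio can be computed:
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses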
<acronym>L2ARC</acronym> <acronym>L2ARC</acronym>
<acronym>L2ARC</acronym> is the second level of the <acronym>ZFS</acronym> caching system. The primary <acronym>ARC</acronym> is stored in <acronym>RAM</acronym>. Since the amount of available <acronym>RAM</acronym> is often limited, <acronym>ZFS</acronym> can also use <link linkend="zfs-term-vdev-cache">cache vdevs</link>. Solid State Disks (<acronym>SSD</acronym>s) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning disks. <acronym>L2ARC</acronym> is entirely optional, but having one will significantly increase read speeds for files that are cached on the <acronym>SSD</acronym> instead of having to be read from the regular disks. <acronym>L2ARC</acronym> can also speed up <link linkend="zfs-term-deduplication">deduplication</link> because a <acronym>DDT</acronym> that does not fit in <acronym>RAM</acronym> but does fit in the <acronym>L2ARC</acronym> will be much faster than a <acronym>DDT</acronym> that must be read from disk. The rate at which data is added to the cache devices is limited to prevent prematurely wearing out <acronym>SSD</acronym>s with too many writes. Until the cache is full (the first block has been evicted to make room), writing to the <acronym>L2ARC</acronym> is limited to the sum of the write limit and the boost limit, and afterwards limited to the write limit. A pair of <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry> values control these rate limits. <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> controls how many bytes are written to the cache per second, while <link linkend="zfs-advanced-tuning-l2arc_write_boost"><varname>vfs.zfs.l2arc_write_boost</varname></link> adds to this limit during the <quote>Turbo Warmup Phase</quote> (Write Boost). <acronym>L2ARC</acronym> 是 <acronym>ZFS</acronym> 缓存系统的第二层。主 <acronym>ARC</acronym> 储存在 <acronym>RAM</acronym> 中,但 <acronym>RAM</acronym> 的可用容量通常有限,因此 <acronym>ZFS</acronym> 还可以使用缓存 vdev(<link linkend="zfs-term-vdev-cache">cache vdevs</link>)。固态硬盘(Solid State Disk,<acronym>SSD</acronym>)因为比传统机械磁盘速度更快、延迟更低,常被用作这类缓存设备。<acronym>L2ARC</acronym> 完全是可选的,但配备之后,已缓存在 <acronym>SSD</acronym> 上的文件无须再从常规磁盘读取,读取速度会显著提升。<acronym>L2ARC</acronym> 也可以加速去重(<link linkend="zfs-term-deduplication">deduplication</link>):放不进 <acronym>RAM</acronym> 但放得进 <acronym>L2ARC</acronym> 的 <acronym>DDT</acronym>,比必须从磁盘读取的 <acronym>DDT</acronym> 快得多。为了避免 <acronym>SSD</acronym> 因写入次数过多而过早磨损,数据加入缓存设备的速率会受到限制:在缓存写满(第一个数据区块被淘汰以腾出空间)之前,写入 <acronym>L2ARC</acronym> 的速率限制为写入限制(Write limit)与加速限制(Boost limit)之和;之后则只限制为写入限制。这两个速率限制由一对 <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry> 值控制:<link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> 控制每秒可写入缓存的字节数,而 <link linkend="zfs-advanced-tuning-l2arc_write_boost"><varname>vfs.zfs.l2arc_write_boost</varname></link> 会在<quote>涡轮预热阶段</quote>(Turbo Warmup Phase,即写入加速)期间在此限制上增加额度。
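Tying the pieces together, a hedged sketch of attaching a cache vdev and tuning the two rate limits named above; the pool and device names and the byte values are illustrative only:

    # Attach an SSD as a cache vdev backing the L2ARC:
    zpool add mypool cache /dev/nvd0
    # Limit L2ARC writes to 16 MB per second (value in bytes):
    sysctl vfs.zfs.l2arc_write_max=16777216
    # Additional allowance during the Turbo Warmup Phase:
    sysctl vfs.zfs.l2arc_write_boost=33554432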
<acronym>ZIL</acronym> <acronym>ZIL</acronym>
<acronym>ZIL</acronym> accelerates synchronous transactions by using storage devices like <acronym>SSD</acronym>s that are faster than those used in the main storage pool. When an application requests a synchronous write (a guarantee that the data has been safely stored to disk rather than merely cached to be written later), the data is written to the faster <acronym>ZIL</acronym> storage, then later flushed out to the regular disks. This greatly reduces latency and improves performance. Only synchronous workloads like databases will benefit from a <acronym>ZIL</acronym>. Regular asynchronous writes such as copying files will not use the <acronym>ZIL</acronym> at all. <acronym>ZIL</acronym> 会使用比主存储池更快的存储设备(如 <acronym>SSD</acronym>)来加速同步事务(Synchronous transaction)。当应用程序请求同步写入时(保证数据已安全存储到磁盘,而不是先缓存稍后再写入),数据会先写入速度较快的 <acronym>ZIL</acronym> 存储空间,之后再一并刷写到常规磁盘。这可以大幅减少延迟并提升性能。只有数据库这类同步工作负载才会从 <acronym>ZIL</acronym> 受益;一般的异步写入(如复制文件)完全不会用到 <acronym>ZIL</acronym>。
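A minimal sketch of moving the ZIL to a dedicated log device; because the text notes that log devices may be mirrored but not arranged in RAID-Z, the second form mirrors them. Device names are hypothetical:

    # Add a single dedicated log device:
    zpool add mypool log /dev/nvd0
    # Add a mirrored pair of log devices instead:
    zpool add mypool log mirror /dev/nvd0 /dev/nvd1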
Copy-On-Write 写入时复制(Copy-On-Write)
Unlike a traditional file system, when data is overwritten on <acronym>ZFS</acronym>, the new data is written to a different block rather than overwriting the old data in place. Only when this write is complete is the metadata then updated to point to the new location. In the event of a shorn write (a system crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is discarded. This also means that <acronym>ZFS</acronym> does not require a <citerefentry><refentrytitle>fsck</refentrytitle><manvolnum>8</manvolnum></citerefentry> after an unexpected shutdown. 不像传统的文件系统,在 <acronym>ZFS</acronym> 中,当数据被覆写时,不会直接覆写旧数据所在的位置,而是把新数据写入到另一个数据区块,只有在写入完成后才会更新 Metadata 指向新的位置。因此,在发生写入中断(写入文件的过程中系统崩溃或断电)时,文件原来的完整内容仍然可用,未完成的写入则会被丢弃。这也意味着 <acronym>ZFS</acronym> 在意外关机后不需要执行 <citerefentry><refentrytitle>fsck</refentrytitle><manvolnum>8</manvolnum></citerefentry>。
Dataset 数据集(Dataset)
<emphasis>Dataset</emphasis> is the generic term for a <acronym>ZFS</acronym> file system, volume, snapshot or clone. Each dataset has a unique name in the format <replaceable>poolname/path@snapshot</replaceable>. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories. For example, <replaceable>mypool/home</replaceable>, the home dataset, is a child of <replaceable>mypool</replaceable> and inherits properties from it. This can be expanded further by creating <replaceable>mypool/home/user</replaceable>. This grandchild dataset will inherit properties from the parent and grandparent. Properties on a child can be set to override the defaults inherited from the parents and grandparents. Administration of datasets and their children can be <link linkend="zfs-zfs-allow">delegated</link>. <emphasis>数据集(Dataset)</emphasis> 是 <acronym>ZFS</acronym> 文件系统、卷、快照或克隆的通用术语。每个数据集都有一个唯一的名称,格式为 <replaceable>poolname/path@snapshot</replaceable>。严格来说,存储池的根也是一个数据集。子数据集像目录一样按层级命名:例如 <replaceable>mypool/home</replaceable> 这个 home 数据集是 <replaceable>mypool</replaceable> 的子数据集,并继承其属性。还可以进一步创建 <replaceable>mypool/home/user</replaceable>,这个孙数据集会继承父数据集与祖父数据集的属性。子数据集上的属性可以设置为覆盖从父辈和祖辈继承的默认值。数据集及其子数据集的管理可以委托(<link linkend="zfs-zfs-allow">delegate</link>)给他人。
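A short sketch of the naming hierarchy and property inheritance described above, reusing the mypool/home example from the text; the compression property is chosen only for illustration:

    # Create a child dataset and a grandchild dataset:
    zfs create mypool/home
    zfs create mypool/home/user
    # Set a property on the parent; children inherit it by default:
    zfs set compression=on mypool/home
    zfs get -r compression mypool/home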


Source information

Source string comment: (itstool) path: row/entry
Source string location: book.translate.xml:42901
String age: a year ago
Source string age: a year ago
Translation file: books/zh_CN/handbook.po, string 7025