(itstool) path: row/entry (itstool) id: book.translate.xml#zfs-term-l2arc
<emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard <acronym>RAID-5</acronym> that offers better distribution of parity and eliminates the <quote><acronym>RAID-5</acronym> write hole</quote> in which the data and parity information become inconsistent after an unexpected restart. <acronym>ZFS</acronym> supports three levels of <acronym>RAID-Z</acronym> which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational.
In a <acronym>RAID-Z1</acronym> configuration with four disks, each 1 TB, usable storage is 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an additional disk goes offline before the faulted disk is replaced and resilvered, all data in the pool can be lost.
In a <acronym>RAID-Z3</acronym> configuration with eight disks of 1 TB, the volume will provide 5 TB of usable space and still be able to operate with three faulted disks. <trademark>Sun</trademark> recommends no more than nine disks in a single vdev. If the configuration has more disks, it is recommended to divide them into separate vdevs and the pool data will be striped across them.
A configuration of two <acronym>RAID-Z2</acronym> vdevs consisting of 8 disks each would create something similar to a <acronym>RAID-60</acronym> array. A <acronym>RAID-Z</acronym> group's storage capacity is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in <acronym>RAID-Z1</acronym> has an effective size of approximately 3 TB, and an array of eight 1 TB disks in <acronym>RAID-Z3</acronym> will yield 5 TB of usable space.
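The capacity rule above can be checked with plain shell arithmetic. The numbers here simply restate the examples from the text; they are not measured figures:

```shell
# Usable capacity ~= smallest disk size * (total disks - parity disks).
smallest_tb=1
raidz1_disks=4; raidz1_parity=1
raidz3_disks=8; raidz3_parity=3
echo "RAID-Z1: $(( smallest_tb * (raidz1_disks - raidz1_parity) )) TB"
echo "RAID-Z3: $(( smallest_tb * (raidz3_disks - raidz3_parity) )) TB"
```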
<emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must be configured manually to replace the failed device using <command>zpool replace</command>.
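As a minimal sketch (the pool name <literal>mypool</literal> and device names such as <filename>ada1</filename> are placeholders, and these commands require a live pool), a spare is registered with the pool and later swapped in by hand:

```shell
# Register ada3 as a hot spare in the pool "mypool".
zpool add mypool spare ada3
# After ada1 faults, manually replace it with the spare:
zpool replace mypool ada1 ada3
# Show the spare in use while the pool resilvers:
zpool status mypool
```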
<emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also known as <acronym>ZFS</acronym> Intent Log (<link linkend="zfs-term-zil"><acronym>ZIL</acronym></link>) move the intent log from the regular pool devices to a dedicated device, typically an <acronym>SSD</acronym>. Having a dedicated log device can significantly improve the performance of applications with a high volume of synchronous writes, especially databases. Log devices can be mirrored, but <acronym>RAID-Z</acronym> is not supported. If multiple log devices are used, writes will be load balanced across them.
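A sketch of attaching a mirrored log device (pool and device names are hypothetical; run against a real pool only):

```shell
# Attach a mirrored pair of SSDs as a dedicated intent-log device.
# RAID-Z is not accepted here; only single devices or mirrors.
zpool add mypool log mirror ada1 ada2
```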
<emphasis>Cache</emphasis> - Adding a cache vdev to a pool will add the storage of the cache to the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss.
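A sketch of adding and removing a cache device (names are placeholders). Because the cache holds only copies of pool data, it can be removed at any time:

```shell
# Add an SSD as an L2ARC cache device (cache devices cannot be mirrored).
zpool add mypool cache ada4
# Remove it later; no data is lost since the cache held only copies:
zpool remove mypool ada4
```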
A pool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in the case of a <acronym>RAID</acronym> transform. When multiple vdevs are used, <acronym>ZFS</acronym> spreads data across the vdevs to increase performance and maximize usable space. <_:itemizedlist-1/>
Transaction Group (<acronym>TXG</acronym>)
<emphasis>Open</emphasis> - When a new transaction group is created, it is in the open state and accepts new writes. There is always a transaction group in the open state, but the transaction group may refuse new writes if it has reached a limit. Once the open transaction group has reached a limit, or the <link linkend="zfs-advanced-tuning-txg-timeout"><varname>vfs.zfs.txg.timeout</varname></link> interval has elapsed, the transaction group advances to the next state.
<emphasis>Quiescing</emphasis> - A short state that allows any pending operations to finish while not blocking the creation of a new open transaction group. Once all of the transactions in the group have completed, the transaction group advances to the final state.
<emphasis>Syncing</emphasis> - All of the data in the transaction group is written to stable storage. This process in turn modifies other data, such as metadata and space maps, that will also need to be written to stable storage. The process of syncing involves multiple passes. The first pass, writing all of the changed data blocks, is the biggest, followed by the metadata, which may take multiple passes to complete. Since allocating space for the data blocks generates new metadata, the syncing state cannot finish until a pass completes that does not allocate any additional space. The syncing state is also where <emphasis>synctasks</emphasis> are completed. Synctasks are administrative operations, such as creating or destroying snapshots and datasets, that modify the uberblock. Once the syncing state is complete, the transaction group in the quiescing state is advanced to the syncing state.
Transaction Groups are the way changed blocks are grouped together and eventually written to the pool. Transaction groups are the atomic unit that <acronym>ZFS</acronym> uses to assert consistency. Each transaction group is assigned a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states: <_:itemizedlist-1/> All administrative functions, such as <link linkend="zfs-term-snapshot"><command>snapshot</command></link> are written as part of the transaction group. When a synctask is created, it is added to the currently open transaction group, and that group is advanced as quickly as possible to the syncing state to reduce the latency of administrative commands.
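On FreeBSD the timeout that forces an open transaction group to advance is a tunable. A quick sketch (the default shown is typical but may differ between releases):

```shell
# Inspect the maximum time, in seconds, a transaction group stays open.
sysctl vfs.zfs.txg.timeout
# Allow the open group to accumulate writes for up to 10 seconds instead:
sysctl vfs.zfs.txg.timeout=10
```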
Adaptive Replacement Cache (<acronym>ARC</acronym>)
<acronym>ZFS</acronym> uses an Adaptive Replacement Cache (<acronym>ARC</acronym>), rather than a more traditional Least Recently Used (<acronym>LRU</acronym>) cache. An <acronym>LRU</acronym> cache is a simple list of items in the cache, sorted by when each object was most recently used. New items are added to the top of the list. When the cache is full, items from the bottom of the list are evicted to make room for more active objects. An <acronym>ARC</acronym> consists of four lists; the Most Recently Used (<acronym>MRU</acronym>) and Most Frequently Used (<acronym>MFU</acronym>) objects, plus a ghost list for each. These ghost lists track recently evicted objects to prevent them from being added back to the cache. This increases the cache hit ratio by avoiding objects that have a history of only being used occasionally. Another advantage of using both an <acronym>MRU</acronym> and <acronym>MFU</acronym> is that scanning an entire file system would normally evict all data from an <acronym>MRU</acronym> or <acronym>LRU</acronym> cache in favor of this freshly accessed content. With <acronym>ZFS</acronym>, there is also an <acronym>MFU</acronym> that only tracks the most frequently used objects, and the cache of the most commonly accessed blocks remains.
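ARC behavior can be observed from userland. As a sketch, these are the sysctl names found on FreeBSD at the time of writing; newer OpenZFS releases may spell some of them differently (for example <varname>vfs.zfs.arc.max</varname>):

```shell
# Hit/miss counters for the ARC, exported as kernel statistics:
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
# The upper bound on ARC size, in bytes (0 means auto-sized):
sysctl vfs.zfs.arc_max
```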
<acronym>L2ARC</acronym>
<acronym>L2ARC</acronym> is the second level of the <acronym>ZFS</acronym> caching system. The primary <acronym>ARC</acronym> is stored in <acronym>RAM</acronym>. Since the amount of available <acronym>RAM</acronym> is often limited, <acronym>ZFS</acronym> can also use <link linkend="zfs-term-vdev-cache">cache vdevs</link>. Solid State Disks (<acronym>SSD</acronym>s) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning disks. <acronym>L2ARC</acronym> is entirely optional, but having one will significantly increase read speeds for files that are cached on the <acronym>SSD</acronym> instead of having to be read from the regular disks. <acronym>L2ARC</acronym> can also speed up <link linkend="zfs-term-deduplication">deduplication</link> because a <acronym>DDT</acronym> that does not fit in <acronym>RAM</acronym> but does fit in the <acronym>L2ARC</acronym> will be much faster than a <acronym>DDT</acronym> that must be read from disk. The rate at which data is added to the cache devices is limited to prevent prematurely wearing out <acronym>SSD</acronym>s with too many writes. Until the cache is full (the first block has been evicted to make room), writing to the <acronym>L2ARC</acronym> is limited to the sum of the write limit and the boost limit, and afterwards limited to the write limit. A pair of <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry> values control these rate limits. <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> controls how many bytes are written to the cache per second, while <link linkend="zfs-advanced-tuning-l2arc_write_boost"><varname>vfs.zfs.l2arc_write_boost</varname></link> adds to this limit during the <quote>Turbo Warmup Phase</quote> (Write Boost).
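The two rate limits named above can be inspected or raised directly; a sketch (values are in bytes per second, and sensible settings depend on the endurance of the cache <acronym>SSD</acronym>):

```shell
# Steady-state fill rate of the L2ARC, bytes written per second:
sysctl vfs.zfs.l2arc_write_max
# Extra allowance during the Turbo Warmup Phase, added to the above:
sysctl vfs.zfs.l2arc_write_boost
```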
<acronym>ZIL</acronym>
<acronym>ZIL</acronym> accelerates synchronous transactions by using storage devices like <acronym>SSD</acronym>s that are faster than those used in the main storage pool. When an application requests a synchronous write (a guarantee that the data has been safely stored to disk rather than merely cached to be written later), the data is written to the faster <acronym>ZIL</acronym> storage, then later flushed out to the regular disks. This greatly reduces latency and improves performance. Only synchronous workloads like databases will benefit from a <acronym>ZIL</acronym>. Regular asynchronous writes such as copying files will not use the <acronym>ZIL</acronym> at all.
Copy-On-Write
Unlike a traditional file system, when data is overwritten on <acronym>ZFS</acronym>, the new data is written to a different block rather than overwriting the old data in place. Only when this write is complete is the metadata then updated to point to the new location. In the event of a shorn write (a system crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is discarded. This also means that <acronym>ZFS</acronym> does not require a <citerefentry><refentrytitle>fsck</refentrytitle><manvolnum>8</manvolnum></citerefentry> after an unexpected shutdown.
Dataset
<emphasis>Dataset</emphasis> is the generic term for a <acronym>ZFS</acronym> file system, volume, snapshot or clone. Each dataset has a unique name in the format <replaceable>poolname/path@snapshot</replaceable>. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories. For example, <replaceable>mypool/home</replaceable>, the home dataset, is a child of <replaceable>mypool</replaceable> and inherits properties from it. This can be expanded further by creating <replaceable>mypool/home/user</replaceable>. This grandchild dataset will inherit properties from the parent and grandparent. Properties on a child can be set to override the defaults inherited from the parents and grandparents. Administration of datasets and their children can be <link linkend="zfs-zfs-allow">delegated</link>.
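Property inheritance down the hierarchy can be seen directly. A sketch using the hypothetical <literal>mypool/home</literal> layout from the text:

```shell
# Create a parent dataset and set a property on it:
zfs create mypool/home
zfs set compression=lz4 mypool/home
# The grandchild inherits compression=lz4 from mypool/home:
zfs create mypool/home/user
zfs get -o name,value,source compression mypool/home/user
```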
File system
A <acronym>ZFS</acronym> dataset is most often used as a file system. Like most other file systems, a <acronym>ZFS</acronym> file system is mounted somewhere in the system's directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
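Mounting is driven by the dataset's <literal>mountpoint</literal> property rather than <filename>/etc/fstab</filename>. A sketch (dataset name and path are examples):

```shell
# Set where the file system appears in the directory hierarchy:
zfs set mountpoint=/usr/home mypool/home
zfs mount mypool/home
# Confirm the mountpoint:
zfs list -o name,mountpoint mypool/home
```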
Volume
In addition to regular file system datasets, <acronym>ZFS</acronym> can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of <acronym>ZFS</acronym>, such as <acronym>UFS</acronym> virtualization, or exporting <acronym>iSCSI</acronym> extents.
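A sketch of the <acronym>UFS</acronym>-on-<acronym>ZFS</acronym> case (the volume name is hypothetical; on FreeBSD the volume appears under <filename>/dev/zvol/</filename>):

```shell
# Create an 8 GB block-device volume inside the pool:
zfs create -V 8G mypool/ufsvol
# Format it with UFS and mount it like any other disk:
newfs /dev/zvol/mypool/ufsvol
mount /dev/zvol/mypool/ufsvol /mnt
```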
Snapshot
The <link linkend="zfs-term-cow">copy-on-write</link> (<acronym>COW</acronym>) design of <acronym>ZFS</acronym> allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as the blocks are no longer used in the live file system, but only in the snapshot. These snapshots can be mounted read-only to allow for the recovery of previous versions of files. It is also possible to <link linkend="zfs-zfs-snapshot">rollback</link> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <link linkend="zfs-zfs-snapshot">hold</link>. When a snapshot is held, any attempt to destroy it will return an <literal>EBUSY</literal> error. Each snapshot can have multiple holds, each with a unique name. The <link linkend="zfs-zfs-snapshot">release</link> command removes the hold so the snapshot can be deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.
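The snapshot, hold, release, and rollback operations fit together as in this sketch (dataset and tag names are examples):

```shell
# Take a snapshot, then protect it with a named hold:
zfs snapshot mypool/home@backup1
zfs hold keepme mypool/home@backup1
# Destroying a held snapshot fails with EBUSY:
zfs destroy mypool/home@backup1
# Release the hold, after which the snapshot can be destroyed,
# or the live file system rolled back to it:
zfs release keepme mypool/home@backup1
zfs rollback mypool/home@backup1
```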
Clone
Snapshots can also be cloned. A clone is a writable version of a snapshot, allowing the file system to be forked as a new dataset. As with a snapshot, a clone initially consumes no additional space. As new data is written to a clone and new blocks are allocated, the apparent size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block is decremented. The snapshot upon which a clone is based cannot be deleted because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be <emphasis>promoted</emphasis>, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no additional space. Because the amount of space used by the parent and child is reversed, existing quotas and reservations might be affected.
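The clone-then-promote workflow reads as in this sketch (names are hypothetical):

```shell
# Fork a writable file system from an existing snapshot:
zfs clone mypool/home@backup1 mypool/newhome
# Reverse the dependency: mypool/newhome becomes the parent,
# and the snapshot's original file system becomes its child:
zfs promote mypool/newhome
```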

