In a <acronym>RAID-Z3</acronym> configuration with eight 1 TB disks, the volume will provide 5 TB of usable space and still be able to operate with three faulted disks. <trademark>Sun</trademark> recommends no more than nine disks in a single vdev. If the configuration has more disks, it is recommended to divide them into separate vdevs so the pool data will be striped across them.
A configuration of two <acronym>RAID-Z2</acronym> vdevs consisting of 8 disks each would create something similar to a <acronym>RAID-60</acronym> array. A <acronym>RAID-Z</acronym> group's storage capacity is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in <acronym>RAID-Z1</acronym> have an effective size of approximately 3 TB, and an array of eight 1 TB disks in <acronym>RAID-Z3</acronym> will yield 5 TB of usable space.
<emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must be manually configured to replace the failed device using <command>zpool replace</command>.
<emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also known as the <acronym>ZFS</acronym> Intent Log (<link linkend="zfs-term-zil"><acronym>ZIL</acronym></link>), move the intent log from the regular pool devices to a dedicated device, typically an <acronym>SSD</acronym>. Having a dedicated log device can significantly improve the performance of applications with a high volume of synchronous writes, especially databases. Log devices can be mirrored, but <acronym>RAID-Z</acronym> is not supported. If multiple log devices are used, writes will be load-balanced across them.
<emphasis>Cache</emphasis> - Adding a cache vdev to a pool will add the storage of the cache to the <link linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss.
A pool is made up of one or more vdevs, which themselves are either a single disk or a group of disks combined into a <acronym>RAID</acronym> configuration. When multiple vdevs are used, <acronym>ZFS</acronym> spreads data across the vdevs to increase performance and maximize usable space. <_:itemizedlist-1/> An example of assembling a pool from several of these vdev types is shown below.
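The commands below are only a sketch of how such a pool might be assembled; the pool name <replaceable>mypool</replaceable> and the <filename>da</filename> and <filename>ada</filename> device names are placeholders chosen for illustration, not a recommendation. A pool is created from two <acronym>RAID-Z2</acronym> vdevs of eight disks each, then spare, log, and cache vdevs are attached, and finally the hot spare is manually swapped in for a failed disk with <command>zpool replace</command>.
<screen>&prompt.root; <userinput>zpool create mypool raidz2 da0 da1 da2 da3 da4 da5 da6 da7 raidz2 da8 da9 da10 da11 da12 da13 da14 da15</userinput>
&prompt.root; <userinput>zpool add mypool spare da16</userinput>
&prompt.root; <userinput>zpool add mypool log ada0</userinput>
&prompt.root; <userinput>zpool add mypool cache ada1</userinput>
&prompt.root; <userinput>zpool replace mypool da5 da16</userinput>
&prompt.root; <userinput>zpool status mypool</userinput></screen>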
Transaction Group (<acronym>TXG</acronym>)
<emphasis>Open</emphasis> - When a new transaction group is created, it is in the open state and accepts new writes. There is always a transaction group in the open state; however, the transaction group may refuse new writes if it has reached a limit. Once the open transaction group has reached a limit, or the <link linkend="zfs-advanced-tuning-txg-timeout"><varname>vfs.zfs.txg.timeout</varname></link> has been reached, the transaction group advances to the next state.
<emphasis>Quiescing</emphasis> - A short state that allows any pending operations to finish while not blocking the creation of a new open transaction group. Once all of the transactions in the group have completed, the transaction group advances to the final state.
<emphasis>Syncing</emphasis> - All of the data in the transaction group is written to stable storage. This process in turn modifies other data, such as metadata and space maps, which also need to be written to stable storage. The process of syncing involves multiple passes. The first and biggest pass writes all of the changed data blocks; it is followed by the metadata, which may take multiple passes to complete. Since allocating space for the data blocks generates new metadata, the syncing state cannot finish until a pass completes that does not allocate any additional space. The syncing state is also where <emphasis>synctasks</emphasis> are completed. Synctasks are administrative operations that modify the uberblock, such as creating or destroying snapshots and datasets. Once the syncing state is complete, the transaction group in the quiescing state is advanced to the syncing state.
Transaction groups are the way changed blocks are grouped together and eventually written to the pool. Transaction groups are the atomic unit that <acronym>ZFS</acronym> uses to assert consistency. Each transaction group is assigned a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states: <_:itemizedlist-1/> All administrative functions, such as <link linkend="zfs-term-snapshot"><command>snapshot</command></link>, are written as part of the transaction group. When a synctask is created, it is added to the currently open transaction group, and that group is advanced as quickly as possible to the syncing state to reduce the latency of administrative commands.
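The transaction group timeout referenced in the open state above can be inspected, and adjusted at run time, with <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>. The value of 5 seconds shown here is only an illustration of the syntax, not tuning advice.
<screen>&prompt.root; <userinput>sysctl vfs.zfs.txg.timeout</userinput>
&prompt.root; <userinput>sysctl vfs.zfs.txg.timeout=5</userinput></screen>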
Adaptive Replacement Cache (<acronym>ARC</acronym>)
<acronym>ZFS</acronym> uses an Adaptive Replacement Cache (<acronym>ARC</acronym>), rather than a more traditional Least Recently Used (<acronym>LRU</acronym>) cache. An <acronym>LRU</acronym> cache is a simple list of items in the cache, sorted by when each object was most recently used. New items are added to the top of the list. When the cache is full, items from the bottom of the list are evicted to make room for more active objects. An <acronym>ARC</acronym> consists of four lists: the Most Recently Used (<acronym>MRU</acronym>) and Most Frequently Used (<acronym>MFU</acronym>) objects, plus a ghost list for each. These ghost lists track recently evicted objects to prevent them from being added back to the cache. This increases the cache hit ratio by avoiding objects that have a history of only being used occasionally. Another advantage of using both an <acronym>MRU</acronym> and <acronym>MFU</acronym> is that scanning an entire file system would normally evict all data from an <acronym>MRU</acronym> or <acronym>LRU</acronym> cache in favor of this freshly accessed content. With <acronym>ZFS</acronym>, there is also an <acronym>MFU</acronym> that only tracks the most frequently used objects, and the cache of the most commonly accessed blocks remains.
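As a rough illustration, the current size and hit rate of the <acronym>ARC</acronym> can be observed through the <literal>kstat.zfs.misc.arcstats</literal> sysctl tree. The name of the tunable that caps the <acronym>ARC</acronym> size (<varname>vfs.zfs.arc_max</varname> here) can differ between <acronym>ZFS</acronym> versions, so treat this as a sketch rather than a definitive recipe.
<screen>&prompt.root; <userinput>sysctl kstat.zfs.misc.arcstats.size</userinput>
&prompt.root; <userinput>sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses</userinput>
&prompt.root; <userinput>sysctl vfs.zfs.arc_max</userinput></screen>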
<acronym>L2ARC</acronym>
<acronym>L2ARC</acronym> is the second level of the <acronym>ZFS</acronym> caching system. The primary <acronym>ARC</acronym> is stored in <acronym>RAM</acronym>. Since the amount of available <acronym>RAM</acronym> is often limited, <acronym>ZFS</acronym> can also use <link linkend="zfs-term-vdev-cache">cache vdevs</link>. Solid State Disks (<acronym>SSD</acronym>s) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning disks. <acronym>L2ARC</acronym> is entirely optional, but having one will significantly increase read speeds for files that are cached on the <acronym>SSD</acronym> instead of having to be read from the regular disks. <acronym>L2ARC</acronym> can also speed up <link linkend="zfs-term-deduplication">deduplication</link> because a <acronym>DDT</acronym> that does not fit in <acronym>RAM</acronym> but does fit in the <acronym>L2ARC</acronym> will be much faster than a <acronym>DDT</acronym> that must be read from disk. The rate at which data is added to the cache devices is limited to prevent prematurely wearing out <acronym>SSD</acronym>s with too many writes. Until the cache is full (the first block has been evicted to make room), writing to the <acronym>L2ARC</acronym> is limited to the sum of the write limit and the boost limit, and afterwards limited to the write limit. A pair of <citerefentry><refentrytitle>sysctl</refentrytitle><manvolnum>8</manvolnum></citerefentry> values control these rate limits. <link linkend="zfs-advanced-tuning-l2arc_write_max"><varname>vfs.zfs.l2arc_write_max</varname></link> controls how many bytes are written to the cache per second, while <link linkend="zfs-advanced-tuning-l2arc_write_boost"><varname>vfs.zfs.l2arc_write_boost</varname></link> adds to this limit during the <quote>Turbo Warmup Phase</quote> (Write Boost).
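As a brief example (the pool and device names are placeholders), a cache device is added to an existing pool and the two rate-limit values described above are inspected:
<screen>&prompt.root; <userinput>zpool add mypool cache ada2</userinput>
&prompt.root; <userinput>sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost</userinput></screen>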
<acronym>ZIL</acronym>
<acronym>ZIL</acronym> accelerates synchronous transactions by using storage devices like <acronym>SSD</acronym>s that are faster than those used in the main storage pool. When an application requests a synchronous write (a guarantee that the data has been safely stored to disk rather than merely cached to be written later), the data is written to the faster <acronym>ZIL</acronym> storage, then later flushed out to the regular disks. This greatly reduces latency and improves performance. Only synchronous workloads like databases will benefit from a <acronym>ZIL</acronym>. Regular asynchronous writes such as copying files will not use the <acronym>ZIL</acronym> at all.
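For illustration, the sketch below adds a mirrored log vdev to a pool and then forces every write to a hypothetical <replaceable>mypool/db</replaceable> dataset to be synchronous with the <literal>sync</literal> property, ensuring those writes pass through the <acronym>ZIL</acronym>; the default is <literal>sync=standard</literal>, which honors only the synchronous writes requested by applications.
<screen>&prompt.root; <userinput>zpool add mypool log mirror ada0 ada1</userinput>
&prompt.root; <userinput>zfs set sync=always mypool/db</userinput></screen>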
Copy-On-Write
Unlike a traditional file system, when data is overwritten on <acronym>ZFS</acronym>, the new data is written to a different block rather than overwriting the old data in place. Only when this write is complete is the metadata then updated to point to the new location. In the event of a shorn write (a system crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is discarded. This also means that <acronym>ZFS</acronym> does not require a <citerefentry><refentrytitle>fsck</refentrytitle><manvolnum>8</manvolnum></citerefentry> after an unexpected shutdown.
Dataset
<emphasis>Dataset</emphasis> is the generic term for a <acronym>ZFS</acronym> file system, volume, snapshot or clone. Each dataset has a unique name in the format <replaceable>poolname/path@snapshot</replaceable>. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories. For example, <replaceable>mypool/home</replaceable>, the home dataset, is a child of <replaceable>mypool</replaceable> and inherits properties from it. This can be expanded further by creating <replaceable>mypool/home/user</replaceable>. This grandchild dataset will inherit properties from the parent and grandparent. Properties on a child can be set to override the defaults inherited from the parents and grandparents. Administration of datasets and their children can be <link linkend="zfs-zfs-allow">delegated</link>.
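The dataset hierarchy described above can be built, and its property inheritance inspected, with a few commands; <literal>compression=lz4</literal> is simply an example property.
<screen>&prompt.root; <userinput>zfs create mypool/home</userinput>
&prompt.root; <userinput>zfs create mypool/home/user</userinput>
&prompt.root; <userinput>zfs set compression=lz4 mypool/home</userinput>
&prompt.root; <userinput>zfs get -r compression mypool/home</userinput></screen>
The <literal>SOURCE</literal> column of the <command>zfs get</command> output shows whether each value is set locally, inherited from a parent dataset, or a default.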
File system
A <acronym>ZFS</acronym> dataset is most often used as a file system. Like most other file systems, a <acronym>ZFS</acronym> file system is mounted somewhere in the system's directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
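For example (the mount point is arbitrary), the <literal>mountpoint</literal> property controls where a file system dataset appears in the directory tree, and <command>zfs mount</command> with no arguments lists the <acronym>ZFS</acronym> file systems that are currently mounted:
<screen>&prompt.root; <userinput>zfs set mountpoint=/usr/home mypool/home</userinput>
&prompt.root; <userinput>zfs mount</userinput></screen>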
Volume
In addition to regular file system datasets, <acronym>ZFS</acronym> can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of <acronym>ZFS</acronym>, such as <acronym>UFS</acronym> virtualization, or exporting <acronym>iSCSI</acronym> extents.
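A minimal sketch of the <acronym>UFS</acronym>-on-volume case, with placeholder names and an arbitrary 4 GB size: the volume is created, formatted with <citerefentry><refentrytitle>newfs</refentrytitle><manvolnum>8</manvolnum></citerefentry>, and mounted through its block device node under <filename>/dev/zvol</filename>.
<screen>&prompt.root; <userinput>zfs create -V 4G mypool/vol0</userinput>
&prompt.root; <userinput>newfs /dev/zvol/mypool/vol0</userinput>
&prompt.root; <userinput>mount /dev/zvol/mypool/vol0 /mnt</userinput></screen>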
Snapshot
The <link linkend="zfs-term-cow">copy-on-write</link> (<acronym>COW</acronym>) design of <acronym>ZFS</acronym> allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as blocks are no longer used in the live file system and are only referenced by the snapshot. These snapshots can be mounted read-only to allow for the recovery of previous versions of files. It is also possible to <link linkend="zfs-zfs-snapshot">roll back</link> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <link linkend="zfs-zfs-snapshot">hold</link>. When a snapshot is held, any attempt to destroy it will return an <literal>EBUSY</literal> error. Each snapshot can have multiple holds, each with a unique name. The <link linkend="zfs-zfs-snapshot">release</link> command removes the hold so the snapshot can be deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.
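The snapshot life cycle described above maps onto a handful of commands; the snapshot name <replaceable>@backup1</replaceable> and the hold tag <replaceable>mykeep</replaceable> are arbitrary examples.
<screen>&prompt.root; <userinput>zfs snapshot mypool/home@backup1</userinput>
&prompt.root; <userinput>zfs hold mykeep mypool/home@backup1</userinput>
&prompt.root; <userinput>zfs rollback mypool/home@backup1</userinput>
&prompt.root; <userinput>zfs release mykeep mypool/home@backup1</userinput>
&prompt.root; <userinput>zfs destroy mypool/home@backup1</userinput></screen>
Attempting the <command>zfs destroy</command> while the hold is still in place would fail with <literal>EBUSY</literal>.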
Clone
Snapshots can also be cloned. A clone is a writable version of a snapshot, allowing the file system to be forked as a new dataset. As with a snapshot, a clone initially consumes no additional space. As new data is written to a clone and new blocks are allocated, the apparent size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block is decremented. The snapshot upon which a clone is based cannot be deleted because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be <emphasis>promoted</emphasis>, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no additional space. Because the amount of space used by the parent and child is reversed, existing quotas and reservations might be affected.
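Continuing the hypothetical <replaceable>mypool/home@backup1</replaceable> snapshot from above, a clone is created from it and then promoted, reversing the dependency as described:
<screen>&prompt.root; <userinput>zfs clone mypool/home@backup1 mypool/home-clone</userinput>
&prompt.root; <userinput>zfs promote mypool/home-clone</userinput></screen>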
Checksum
<literal>fletcher2</literal>
