Memory
As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended <acronym>RAM</acronym> depends upon the size of the pool and which <acronym>ZFS</acronym> features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>, systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended amount of <acronym>RAM</acronym>.
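As a rough check against these guidelines, the installed memory and the current <acronym>ARC</acronym> limits can be inspected with <command>sysctl</command>. The <acronym>OID</acronym> names below are the usual ones on FreeBSD, but they can vary between releases:
# Inspect physical memory and ZFS ARC figures; output is system-specific.
sysctl hw.physmem
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size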
Kernel Configuration
Due to the address space limitations of the <trademark>i386</trademark> platform, <acronym>ZFS</acronym> users on the <trademark>i386</trademark> architecture must add this option to a custom kernel configuration file, rebuild the kernel, and reboot:
options KVA_PAGES=512
This expands the kernel address space, allowing the <varname>vm.kvm_size</varname> tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for <acronym>PAE</acronym>. To find the most suitable value for this option, divide the desired address space in megabytes by four. In this example, it is <literal>512</literal> for 2 GB.
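A minimal sketch of the rebuild procedure, assuming the option has been added to a custom configuration file named <filename>ZFSKERNEL</filename> (the name is only a placeholder):
# ZFSKERNEL is a placeholder name for the custom kernel configuration.
cd /usr/src
make buildkernel KERNCONF=ZFSKERNEL
make installkernel KERNCONF=ZFSKERNEL
shutdown -r now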
Loader Tunables
The <filename>kmem</filename> address space can be increased on all FreeBSD architectures. On a test system with 1 GB of physical memory, success was achieved with these options added to <filename>/boot/loader.conf</filename>, and the system restarted:
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
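After the reboot, the values actually in effect can be confirmed with <command>sysctl</command>, for example:
# Confirm that the loader tunables took effect after the reboot.
sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max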
For a more detailed list of recommendations for <acronym>ZFS</acronym>-related tuning, see <link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide"/>.
Additional Resources
<link xlink:href="http://open-zfs.org">OpenZFS</link>
<link xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD Wiki - <acronym>ZFS</acronym> Tuning</link>
<link xlink:href="http://docs.oracle.com/cd/E19253-01/819-5461/index.html">Oracle Solaris <acronym>ZFS</acronym> Administration Guide</link>
<link xlink:href="https://calomel.org/zfs_raid_speed_capacity.html">Calomel Blog - <acronym>ZFS</acronym> Raidz Performance, Capacity and Integrity</link>
<acronym>ZFS</acronym> Features and Terminology
<acronym>ZFS</acronym> is a fundamentally different file system because it is more than just a file system. <acronym>ZFS</acronym> combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and making the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, <acronym>ZFS</acronym> is able to overcome previous limitations that prevented <acronym>RAID</acronym> groups from growing. Each top level device in a pool is called a <emphasis>vdev</emphasis>, which can be a simple disk or a <acronym>RAID</acronym> transformation such as a mirror or <acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file systems (called <emphasis>datasets</emphasis>) each have access to the combined free space of the entire pool. As blocks are allocated from the pool, the space available to each file system decreases. This approach avoids the common pitfall of extensive partitioning, where free space becomes fragmented across the partitions.
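As a brief illustration (the pool name <literal>storage</literal> and the device names are placeholders), a pool can be created and then grown while in use, with the new space immediately visible to every dataset in the pool:
# "storage" and the ada devices are placeholder names.
zpool create storage mirror ada1 ada2
zpool add storage mirror ada3 ada4   # add a second vdev to the live pool
zfs list storage                     # the additional space is available at once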
pool
A storage <emphasis>pool</emphasis> is the most basic building block of <acronym>ZFS</acronym>. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a <acronym>GUID</acronym>. The features available are determined by the <acronym>ZFS</acronym> version number on the pool.
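For example (the names are placeholders), a single pool can back several datasets and a volume, all drawing on the same free space:
# "mypool" and the device names are placeholders.
zpool create mypool mirror ada1 ada2
zfs create mypool/home        # a file system (dataset)
zfs create -V 4G mypool/vol0  # a 4 GB block device (volume)
zpool get guid mypool         # show the pool's unique GUID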
vdev Types
<emphasis>Disk</emphasis> - The most basic type of vdev is a standard block device. This can be an entire disk (such as <filename><replaceable>/dev/ada0</replaceable></filename> or <filename><replaceable>/dev/da0</replaceable></filename>) or a partition (<filename><replaceable>/dev/ada0p3</replaceable></filename>). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
Using an entire disk as part of a bootable pool is strongly discouraged, as this may render the pool unbootable. Likewise, do not use an entire disk as part of a mirror or <acronym>RAID-Z</acronym> vdev. This is because it is impossible to reliably determine the size of an unpartitioned disk at boot time and because there is no place to put boot code. A minimal partitioning sketch with <command>gpart</command> follows.
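In this sketch, the device name and label are placeholders, and the boot partitions a bootable pool would also need are omitted:
# ada1 and the label "disk1" are placeholders; boot partitions are omitted.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -a 1m -l disk1 ada1
zpool create tank /dev/gpt/disk1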
<emphasis>File</emphasis> - In addition to disks, <acronym>ZFS</acronym> pools can be backed by regular files; this is especially useful for testing and experimentation. Use the full path to the file as the device path in <command>zpool create</command>. All vdevs must be at least 128 MB in size.
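A throwaway test pool backed by files might be created like this (the paths and pool name are arbitrary):
# File-backed pool for experimentation only; paths and names are arbitrary.
truncate -s 256M /tmp/zfsfile0 /tmp/zfsfile1
zpool create testpool mirror /tmp/zfsfile0 /tmp/zfsfile1
zpool destroy testpool   # clean up when finished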
<emphasis>Mirror</emphasis> - When creating a mirror, specify the <literal>mirror</literal> keyword followed by the list of member devices for the mirror. A mirror consists of two or more devices, and all data is written to every member device. A mirror vdev will only hold as much data as its smallest member. A mirror vdev can withstand the failure of all but one of its members without losing any data.
A regular single disk vdev can be upgraded to a mirror vdev at any time with <command>zpool <link linkend="zfs-zpool-attach">attach</link></command>.
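For example (pool and device names are placeholders), the first command creates a two-way mirror, and the same <command>attach</command> syntax either adds a further member or converts a single-disk vdev into a mirror:
# "mpool" and the ada devices are placeholder names.
zpool create mpool mirror ada1 ada2
zpool attach mpool ada1 ada3   # ada3 becomes an additional mirror member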
<emphasis><acronym>RAID-Z</acronym></emphasis> - <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard <acronym>RAID-5</acronym> that offers better distribution of parity and eliminates the <quote><acronym>RAID-5</acronym> write hole</quote> in which the data and parity information become inconsistent after an unexpected restart. <acronym>ZFS</acronym> supports three levels of <acronym>RAID-Z</acronym> which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number of parity devices in the array and the number of disks which can fail while the pool remains operational.
In a <acronym>RAID-Z1</acronym> configuration with four disks, each 1 TB, usable storage is 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an additional disk goes offline before the faulted disk is replaced and resilvered, all data in the pool can be lost.
In a <acronym>RAID-Z3</acronym> configuration with eight disks of 1 TB, the pool will provide 5 TB of usable space and still be able to operate with three faulted disks. <trademark>Sun</trademark> recommends no more than nine disks in a single vdev. If the configuration has more disks, it is recommended to divide them into separate vdevs, and the pool data will be striped across them.
A configuration of two <acronym>RAID-Z2</acronym> vdevs consisting of 8 disks each would create something similar to a <acronym>RAID-60</acronym> array. A <acronym>RAID-Z</acronym> group's storage capacity is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in <acronym>RAID-Z1</acronym> has an effective size of approximately 3 TB, and an array of eight 1 TB disks in <acronym>RAID-Z3</acronym> will yield 5 TB of usable space.
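Possible commands for the layouts described above, with placeholder pool and device names:
# A single four-disk RAID-Z1 vdev; names are placeholders.
zpool create rzpool raidz1 ada1 ada2 ada3 ada4
# Two eight-disk RAID-Z2 vdevs, striped, similar to RAID-60.
zpool create bigpool \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15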
<emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must be manually configured to replace the failed device using <command>zpool replace</command>.
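For example, continuing with the placeholder <literal>rzpool</literal> above, a hot spare can be registered and later swapped in by hand for a failed disk:
# ada5 stands in for a spare disk added to the example rzpool.
zpool add rzpool spare ada5
zpool replace rzpool ada2 ada5   # manually replace the failed ada2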
<emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also known as the <acronym>ZFS</acronym> Intent Log (<link linkend="zfs-term-zil"><acronym>ZIL</acronym></link>), move the intent log from the regular pool devices to a dedicated device, typically an <acronym>SSD</acronym>. Having a dedicated log device can significantly improve the performance of applications with a high volume of synchronous writes, especially databases. Log devices can be mirrored, but <acronym>RAID-Z</acronym> is not supported. If multiple log devices are used, writes will be load balanced across them.
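A mirrored log device could be added to the placeholder pool like this, assuming two otherwise unused <acronym>SSD</acronym>s:
# ada6 and ada7 stand in for two SSDs used as a mirrored log device.
zpool add rzpool log mirror ada6 ada7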
