UFS Journaling Through <acronym>GEOM</acronym>
<primary>Journaling</primary>
Support for journals on <acronym>UFS</acronym> file systems is available on FreeBSD. The implementation is provided through the <acronym>GEOM</acronym> subsystem and is configured using <command>gjournal</command>. Unlike other file system journaling implementations, the <command>gjournal</command> method is block based and not implemented as part of the file system. It is a <acronym>GEOM</acronym> extension.
Journaling stores a log of file system transactions, such as changes that make up a complete disk write operation, before meta-data and file writes are committed to the disk. This transaction log can later be replayed to redo file system transactions, preventing file system inconsistencies.
This method provides another mechanism to protect against data loss and file system inconsistencies. Unlike Soft Updates, which tracks and enforces meta-data updates, and snapshots, which create an image of the file system, the journal is stored in disk space specifically reserved for this task. For better performance, the journal may be stored on another disk. In this configuration, the journal provider or storage device is listed after the device on which journaling is to be enabled.
The <filename>GENERIC</filename> kernel provides support for <command>gjournal</command>. To automatically load the <filename>geom_journal.ko</filename> kernel module at boot time, add the following line to <filename>/boot/loader.conf</filename>:
geom_journal_load="YES"
If a custom kernel is used, ensure the following line is in the kernel configuration file:
options GEOM_JOURNAL
Once the module is loaded, a journal can be created on a new file system using the following steps. In this example, <filename>da4</filename> is a new <acronym>SCSI</acronym> disk:
<prompt>#</prompt> <userinput>gjournal load</userinput>
<prompt>#</prompt> <userinput>gjournal label /dev/<replaceable>da4</replaceable></userinput>
This will load the module and create a <filename>/dev/da4.journal</filename> device node on <filename>/dev/da4</filename>.
A <acronym>UFS</acronym> file system may now be created on the journaled device, then mounted on an existing mount point:
<prompt>#</prompt> <userinput>newfs -O 2 -J /dev/<replaceable>da4</replaceable>.journal</userinput>
<prompt>#</prompt> <userinput>mount /dev/<replaceable>da4</replaceable>.journal <replaceable>/mnt</replaceable></userinput>
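To mount the journaled file system automatically at boot, an entry can be added to <filename>/etc/fstab</filename>. The line below is a hedged sketch that reuses the example device and mount point from above; the <option>async</option> mount option and the pass numbers are illustrative choices, not requirements:
/dev/<replaceable>da4</replaceable>.journal	<replaceable>/mnt</replaceable>	ufs	rw,async	2	2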
In the case of several slices, a journal will be created for each individual slice. For instance, if <filename>ad4s1</filename> and <filename>ad4s2</filename> are both slices, then <command>gjournal</command> will create <filename>ad4s1.journal</filename> and <filename>ad4s2.journal</filename>.
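For example, labeling each slice individually (a hedged sketch reusing the slice names above) would look like this:
<prompt>#</prompt> <userinput>gjournal label /dev/<replaceable>ad4s1</replaceable></userinput>
<prompt>#</prompt> <userinput>gjournal label /dev/<replaceable>ad4s2</replaceable></userinput>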
Journaling may also be enabled on an existing file system by using <command>tunefs</command>. However, <emphasis>always</emphasis> make a backup before attempting to alter an existing file system. In most cases, <command>gjournal</command> will fail if it is unable to create the journal, but this does not protect against data loss incurred as a result of misusing <command>tunefs</command>. Refer to <citerefentry><refentrytitle>gjournal</refentrytitle><manvolnum>8</manvolnum></citerefentry> and <citerefentry><refentrytitle>tunefs</refentrytitle><manvolnum>8</manvolnum></citerefentry> for more information about these commands.
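As a hedged sketch of that procedure, assuming the journaled provider <filename>/dev/da4.journal</filename> from the earlier example already exists and its file system is unmounted, the gjournal flag is set with <command>tunefs</command> and the file system is then remounted; <option>-n disable</option> turns off Soft Updates, which is commonly done when <command>gjournal</command> is in use:
<prompt>#</prompt> <userinput>umount <replaceable>/mnt</replaceable></userinput>
<prompt>#</prompt> <userinput>tunefs -J enable -n disable /dev/<replaceable>da4</replaceable>.journal</userinput>
<prompt>#</prompt> <userinput>mount -o async /dev/<replaceable>da4</replaceable>.journal <replaceable>/mnt</replaceable></userinput>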
It is possible to journal the boot disk of a FreeBSD system. Refer to the article <link xlink:href="@@URL_RELPREFIX@@/doc/en_US.ISO8859-1/articles/gjournal-desktop">Implementing UFS Journaling on a Desktop PC</link> for detailed instructions.
The Z File System (<acronym>ZFS</acronym>)
<personname> <firstname>Allan</firstname> <surname>Jude</surname> </personname> <contrib>Written by </contrib>
<personname> <firstname>Benedict</firstname> <surname>Reuschling</surname> </personname> <contrib>Written by </contrib>
<personname> <firstname>Warren</firstname> <surname>Block</surname> </personname> <contrib>Written by </contrib>
The <emphasis>Z File System</emphasis>, or <acronym>ZFS</acronym>, is an advanced file system designed to overcome many of the major problems found in previous designs.
Originally developed at <trademark>Sun</trademark>, ongoing open source <acronym>ZFS</acronym> development has moved to the <link xlink:href="http://open-zfs.org">OpenZFS Project</link>.
<acronym>ZFS</acronym> has three major design goals:
Data integrity: All data includes a <link linkend="zfs-term-checksum">checksum</link> of the data. When data is written, the checksum is calculated and written along with it. When that data is later read back, the checksum is calculated again. If the checksums do not match, a data error has been detected. <acronym>ZFS</acronym> will attempt to automatically correct errors when data redundancy is available.
Pooled storage: physical storage devices are added to a pool, and storage space is allocated from that shared pool. Space is available to all file systems, and can be increased by adding new storage devices to the pool.
Performance: multiple caching mechanisms provide increased performance. <link linkend="zfs-term-arc">ARC</link> is an advanced memory-based read cache. A second level of disk-based read cache can be added with <link linkend="zfs-term-l2arc">L2ARC</link>, and disk-based synchronous write cache is available with <link linkend="zfs-term-zil">ZIL</link>.
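As an example of the data-integrity goal in practice, every block in a pool can be re-read and verified against its checksum on demand by scrubbing the pool and then inspecting the result; the pool name <replaceable>mypool</replaceable> below is illustrative:
<prompt>#</prompt> <userinput>zpool scrub <replaceable>mypool</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool status <replaceable>mypool</replaceable></userinput>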
A complete list of features and terminology is shown in <xref linkend="zfs-term"/>.
What Makes <acronym>ZFS</acronym> Different
<acronym>ZFS</acronym> is significantly different from any previous file system because it is more than just a file system. Combining the traditionally separate roles of volume manager and file system provides <acronym>ZFS</acronym> with unique advantages. The file system is now aware of the underlying structure of the disks. Traditional file systems could only be created on a single disk at a time. If there were two disks then two separate file systems would have to be created. In a traditional hardware <acronym>RAID</acronym> configuration, this problem was avoided by presenting the operating system with a single logical disk made up of the space provided by a number of physical disks, on top of which the operating system placed a file system. Even in the case of software <acronym>RAID</acronym> solutions like those provided by <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system living on top of the <acronym>RAID</acronym> transform believed that it was dealing with a single device. <acronym>ZFS</acronym>'s combination of the volume manager and the file system solves this and allows the creation of many file systems all sharing a pool of available storage. One of the biggest advantages to <acronym>ZFS</acronym>'s awareness of the physical layout of the disks is that existing file systems can be grown automatically when additional disks are added to the pool. This new space is then made available to all of the file systems. <acronym>ZFS</acronym> also has a number of different properties that can be applied to each file system, giving many advantages to creating a number of different file systems and datasets rather than a single monolithic file system.
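As a brief, hedged sketch of this pooled model (the pool name and device names below are hypothetical), a single pool can be created from several disks, carved into independent file systems, and later grown by adding more devices, with the new space becoming available to every file system in the pool:
<prompt>#</prompt> <userinput>zpool create <replaceable>mypool</replaceable> mirror <replaceable>ada1</replaceable> <replaceable>ada2</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs create <replaceable>mypool/home</replaceable></userinput>
<prompt>#</prompt> <userinput>zfs create <replaceable>mypool/projects</replaceable></userinput>
<prompt>#</prompt> <userinput>zpool add <replaceable>mypool</replaceable> mirror <replaceable>ada3</replaceable> <replaceable>ada4</replaceable></userinput>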
Quick Start Guide
