<prompt>#</prompt> <userinput>kldload geom_stripe</userinput>
Ensure that a suitable mount point exists. If this volume will become a root partition, then temporarily use another mount point such as <filename>/mnt</filename>.
Determine the device names for the disks which will be striped, and create the new stripe device. For example, to stripe two unused and unpartitioned <acronym>ATA</acronym> disks with device names of <filename>/dev/ad2</filename> and <filename>/dev/ad3</filename>:
<prompt>#</prompt> <userinput>gstripe label -v st0 /dev/ad2 /dev/ad3</userinput>
Metadata value stored on /dev/ad2.
Metadata value stored on /dev/ad3.
Write a standard label, also known as a partition table, on the new volume and install the default bootstrap code:
<prompt>#</prompt> <userinput>bsdlabel -wB /dev/stripe/st0</userinput>
This process should create two other devices in <filename>/dev/stripe</filename> in addition to <filename>st0</filename>. Those include <filename>st0a</filename> and <filename>st0c</filename>. At this point, a <acronym>UFS</acronym> file system can be created on <filename>st0a</filename> using <command>newfs</command>:
<prompt>#</prompt> <userinput>newfs -U /dev/stripe/st0a</userinput>
Many numbers will glide across the screen, and after a few seconds, the process will be complete. The volume has been created and is ready to be mounted.
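The state of the stripe can be inspected at any point. As a quick sanity check, assuming the <filename>st0</filename> device created above, the standard <command>status</command> subcommand of the GEOM utilities lists each active stripe and its consumer disks:
<prompt>#</prompt> <userinput>gstripe status</userinput>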
To manually mount the created disk stripe:
<prompt>#</prompt> <userinput>mount /dev/stripe/st0a /mnt</userinput>
To mount this striped file system automatically during the boot process, place the volume information in <filename>/etc/fstab</filename>. In this example, a permanent mount point, named <filename>stripe</filename>, is created:
<prompt>#</prompt> <userinput>mkdir /stripe</userinput>
<prompt>#</prompt> <userinput>echo "/dev/stripe/st0a /stripe ufs rw 2 2" \</userinput>
<userinput>&gt;&gt; /etc/fstab</userinput>
The <filename>geom_stripe.ko</filename> module must also be loaded automatically during system initialization by adding a line to <filename>/boot/loader.conf</filename>:
<prompt>#</prompt> <userinput>echo 'geom_stripe_load="YES"' &gt;&gt; /boot/loader.conf</userinput>
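After the next reboot, whether the module was in fact loaded can be confirmed with <citerefentry><refentrytitle>kldstat</refentrytitle><manvolnum>8</manvolnum></citerefentry>; this verification step is not part of the procedure itself:
<prompt>#</prompt> <userinput>kldstat | grep geom_stripe</userinput>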
RAID1 - Mirroring
<primary>Disk Mirroring</primary>
<acronym>RAID1</acronym>, or <emphasis>mirroring</emphasis>, is the technique of writing the same data to more than one disk drive. Mirrors are usually used to guard against data loss due to drive failure. Each drive in a mirror contains an identical copy of the data. When an individual drive fails, the mirror continues to work, providing data from the drives that are still functioning. The computer keeps running, and the administrator has time to replace the failed drive without user interruption.
Two common situations are illustrated in these examples. The first creates a mirror out of two new drives and uses it as a replacement for an existing single drive. The second example creates a mirror on a single new drive, copies the old drive's data to it, then inserts the old drive into the mirror. While this procedure is slightly more complicated, it only requires one new drive.
Traditionally, the two drives in a mirror are identical in model and capacity, but <citerefentry><refentrytitle>gmirror</refentrytitle><manvolnum>8</manvolnum></citerefentry> does not require that. Mirrors created with dissimilar drives will have a capacity equal to that of the smallest drive in the mirror. Extra space on larger drives will be unused. Drives inserted into the mirror later must have at least as much capacity as the smallest drive already in the mirror.
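For example, assuming two otherwise unused disks with the placeholder device names <filename>/dev/ada1</filename> and <filename>/dev/ada2</filename>, a mirror labeled <filename>gm0</filename> could be created with:
<prompt>#</prompt> <userinput>gmirror label -v gm0 /dev/ada1 /dev/ada2</userinput>
If the two disks differ in size, the resulting mirror will have the capacity of the smaller disk.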
The mirroring procedures shown here are non-destructive, but as with any major disk operation, make a full backup first.
While <citerefentry><refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum></citerefentry> is used in these procedures to copy file systems, it does not work on file systems with soft updates journaling. See <citerefentry><refentrytitle>tunefs</refentrytitle><manvolnum>8</manvolnum></citerefentry> for information on detecting and disabling soft updates journaling.
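As a sketch, the current flags of a file system, including soft updates journaling, can be printed with <command>tunefs -p</command>, and journaling can be turned off with <command>tunefs -j disable</command>. The device name <filename>/dev/ada0p2</filename> here is a placeholder for the file system to be copied:
<prompt>#</prompt> <userinput>tunefs -p /dev/ada0p2</userinput>
<prompt>#</prompt> <userinput>tunefs -j disable /dev/ada0p2</userinput>
Changing these flags generally requires the file system to be unmounted or mounted read-only.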
Metadata Issues
Many disk systems store metadata at the end of each disk. Old metadata should be erased before reusing the disk for a mirror. Most problems are caused by two particular types of leftover metadata: <acronym>GPT</acronym> partition tables and old metadata from a previous mirror.
<acronym>GPT</acronym> metadata can be erased with <citerefentry><refentrytitle>gpart</refentrytitle><manvolnum>8</manvolnum></citerefentry>. This example erases both primary and backup <acronym>GPT</acronym> partition tables from disk <filename>ada8</filename>:
<prompt>#</prompt> <userinput>gpart destroy -F ada8</userinput>
A disk can be removed from an active mirror and the metadata erased in one step using <citerefentry><refentrytitle>gmirror</refentrytitle><manvolnum>8</manvolnum></citerefentry>. Here, the example disk <filename>ada8</filename> is removed from the active mirror <filename>gm4</filename>:
<prompt>#</prompt> <userinput>gmirror remove gm4 ada8</userinput>
If the mirror is not running, but old mirror metadata is still on the disk, use <command>gmirror clear</command> to remove it:
<prompt>#</prompt> <userinput>gmirror clear ada8</userinput>
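Whether any old mirror metadata remains on a disk can be checked with the <command>dump</command> subcommand, which prints the metadata stored on the given provider; <filename>ada8</filename> is the same example disk as above. On a clean disk, no valid metadata will be found:
<prompt>#</prompt> <userinput>gmirror dump ada8</userinput>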

