The <citerefentry><refentrytitle>hastd</refentrytitle><manvolnum>8</manvolnum></citerefentry> daemon which provides data synchronization. When this daemon is started, it will automatically load <varname>geom_gate.ko</varname>.
The userland management utility, <citerefentry><refentrytitle>hastctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>.
The <citerefentry><refentrytitle>hast.conf</refentrytitle><manvolnum>5</manvolnum></citerefentry> configuration file. This file must exist before starting <application>hastd</application>.
Users who prefer to statically build <literal>GEOM_GATE</literal> support into the kernel should add this line to the custom kernel configuration file, then rebuild the kernel using the instructions in <xref linkend="kernelconfig"/>:
options GEOM_GATE
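If rebuilding the kernel is not desired, the module can also be preloaded at boot time. Since <application>hastd</application> loads <varname>geom_gate.ko</varname> on demand, this is optional, but a line like the following in <filename>/boot/loader.conf</filename> should achieve the same result:
geom_gate_load="YES"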
The following example describes how to configure two nodes in master-slave/primary-secondary operation using <acronym>HAST</acronym> to replicate the data between the two. The nodes will be called <literal>hasta</literal>, with an <acronym>IP</acronym> address of <literal>172.16.0.1</literal>, and <literal>hastb</literal>, with an <acronym>IP</acronym> address of <literal>172.16.0.2</literal>. Both nodes will have a dedicated hard drive <filename>/dev/ad6</filename> of the same size for <acronym>HAST</acronym> operation. The <acronym>HAST</acronym> pool, sometimes referred to as a resource or the <acronym>GEOM</acronym> provider in <filename>/dev/hast/</filename>, will be called <literal>test</literal>.
Configuration of <acronym>HAST</acronym> is done using <filename>/etc/hast.conf</filename>. This file should be identical on both nodes. The simplest configuration is:
resource <replaceable>test</replaceable> {
  on <replaceable>hasta</replaceable> {
    local <replaceable>/dev/ad6</replaceable>
    remote <replaceable>172.16.0.2</replaceable>
  }
  on <replaceable>hastb</replaceable> {
    local <replaceable>/dev/ad6</replaceable>
    remote <replaceable>172.16.0.1</replaceable>
  }
}
For more advanced configuration, refer to <citerefentry><refentrytitle>hast.conf</refentrytitle><manvolnum>5</manvolnum></citerefentry>.
It is also possible to use host names in the <literal>remote</literal> statements if the hosts are resolvable and defined either in <filename>/etc/hosts</filename> or in the local <acronym>DNS</acronym>.
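For example, with entries like these in <filename>/etc/hosts</filename> on both nodes, the <literal>remote</literal> lines above could instead read <literal>remote hastb</literal> on <literal>hasta</literal> and <literal>remote hasta</literal> on <literal>hastb</literal>:
172.16.0.1	hasta
172.16.0.2	hastb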
Once the configuration exists on both nodes, the <acronym>HAST</acronym> pool can be created. Run these commands on both nodes to place the initial metadata onto the local disk and to start <citerefentry><refentrytitle>hastd</refentrytitle><manvolnum>8</manvolnum></citerefentry>:
<prompt>#</prompt> <userinput>hastctl create <replaceable>test</replaceable></userinput>
<prompt>#</prompt> <userinput>service hastd onestart</userinput>
It is <emphasis>not</emphasis> possible to use <acronym>GEOM</acronym> providers with an existing file system or to convert existing storage to a <acronym>HAST</acronym>-managed pool, because this procedure needs to store some metadata on the provider and an existing provider will not have enough free space for it.
A <acronym>HAST</acronym> node's <literal>primary</literal> or <literal>secondary</literal> role is selected by an administrator, or by software like <application>Heartbeat</application>, using <citerefentry><refentrytitle>hastctl</refentrytitle><manvolnum>8</manvolnum></citerefentry>. On the primary node, <literal>hasta</literal>, issue this command:
<prompt>#</prompt> <userinput>hastctl role primary <replaceable>test</replaceable></userinput>
Run this command on the secondary node, <literal>hastb</literal>:
<prompt>#</prompt> <userinput>hastctl role secondary <replaceable>test</replaceable></userinput>
Verify the result by running <command>hastctl</command> on each node:
<prompt>#</prompt> <userinput>hastctl status <replaceable>test</replaceable></userinput>
Check the <literal>status</literal> line in the output. If it says <literal>degraded</literal>, something is wrong with the configuration file. It should say <literal>complete</literal> on each node, meaning that the synchronization between the nodes has started. The synchronization completes when <command>hastctl status</command> reports 0 bytes of <literal>dirty</literal> extents.
The next step is to create a file system on the <acronym>GEOM</acronym> provider and mount it. This must be done on the <literal>primary</literal> node. Creating the file system can take a few minutes, depending on the size of the hard drive. This example creates a <acronym>UFS</acronym> file system on <filename>/dev/hast/test</filename>:
<prompt>#</prompt> <userinput>newfs -U /dev/hast/<replaceable>test</replaceable></userinput>
<prompt>#</prompt> <userinput>mkdir /hast/<replaceable>test</replaceable></userinput>
<prompt>#</prompt> <userinput>mount /dev/hast/<replaceable>test</replaceable> <replaceable>/hast/test</replaceable></userinput>
Once the <acronym>HAST</acronym> framework is configured properly, the final step is to make sure that <acronym>HAST</acronym> is started automatically during system boot. Add this line to <filename>/etc/rc.conf</filename>:
hastd_enable="YES"
Failover Configuration
The goal of this example is to build a robust storage system which is resistant to the failure of any given node. If the primary node fails, the secondary node is there to take over seamlessly, check and mount the file system, and continue to work without missing a single bit of data.
To accomplish this task, the Common Address Redundancy Protocol (<acronym>CARP</acronym>) is used to provide for automatic failover at the <acronym>IP</acronym> layer. <acronym>CARP</acronym> allows multiple hosts on the same network segment to share an <acronym>IP</acronym> address. Set up <acronym>CARP</acronym> on both nodes of the cluster according to the documentation available in <xref linkend="carp"/>. In this example, each node will have its own management <acronym>IP</acronym> address and a shared <acronym>IP</acronym> address of <replaceable>172.16.0.254</replaceable>. The primary <acronym>HAST</acronym> node of the cluster must be the master <acronym>CARP</acronym> node.
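As a rough sketch only, <filename>/etc/rc.conf</filename> entries along these lines would create the shared address on <literal>hasta</literal> using the legacy <literal>carp0</literal> interface referenced later in this example; the <literal>em0</literal> interface name, the <acronym>VHID</acronym> of 1, and the <literal>testpass</literal> password are placeholders to be adapted according to <xref linkend="carp"/>:
ifconfig_em0="inet 172.16.0.1 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass testpass 172.16.0.254/24"
The entries on <literal>hastb</literal> would be identical except for the management address, <literal>172.16.0.2</literal>, and, typically, a higher <literal>advskew</literal> value so that <literal>hasta</literal> is preferred as the <acronym>CARP</acronym> master.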
The <acronym>HAST</acronym> pool created in the previous section is now ready to be exported to the other hosts on the network. This can be accomplished by exporting it through <acronym>NFS</acronym> or <application>Samba</application>, using the shared <acronym>IP</acronym> address <replaceable>172.16.0.254</replaceable>. The only remaining problem is automatic failover should the primary node fail.
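As an illustrative sketch, the <acronym>NFS</acronym> side of such a setup could consist of enabling the server in <filename>/etc/rc.conf</filename>:
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
and exporting the mounted pool to the example network in <filename>/etc/exports</filename>:
/hast/test -network 172.16.0.0 -mask 255.255.255.0
Clients then mount the share from the shared <acronym>IP</acronym> address <replaceable>172.16.0.254</replaceable>, so they do not need to be reconfigured when the exporting node changes.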
When a <acronym>CARP</acronym> interface goes up or down, the FreeBSD operating system generates a <citerefentry><refentrytitle>devd</refentrytitle><manvolnum>8</manvolnum></citerefentry> event, making it possible to watch for state changes on the <acronym>CARP</acronym> interfaces. A state change on a <acronym>CARP</acronym> interface is an indication that one of the nodes failed or came back online. These state change events make it possible to run a script which will automatically handle the <acronym>HAST</acronym> failover.
To catch state changes on the <acronym>CARP</acronym> interfaces, add this configuration to <filename>/etc/devd.conf</filename> on each node:
notify 30 {
  match "system" "IFNET";
  match "subsystem" "carp0";
  match "type" "LINK_UP";
  action "/usr/local/sbin/carp-hast-switch master";
};

notify 30 {
  match "system" "IFNET";
  match "subsystem" "carp0";
  match "type" "LINK_DOWN";
  action "/usr/local/sbin/carp-hast-switch slave";
};
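For the new rules to take effect, restart <citerefentry><refentrytitle>devd</refentrytitle><manvolnum>8</manvolnum></citerefentry> on both nodes:
<prompt>#</prompt> <userinput>service devd restart</userinput>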
