


Which Method Is Used To Change The Value Of The Preferred Node In A Cluster?

Applies to SUSE Linux Enterprise High Availability Extension 12 SP4

8 Configuring and Managing Cluster Resources (Command Line) #

Abstract#

To configure and manage cluster resources, either use the crm shell (crmsh) command line utility or HA Web Konsole (Hawk2), a Web-based user interface.

This chapter introduces crm, the command line tool, and covers an overview of this tool, how to use templates, and mainly configuring and managing cluster resources: creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover nodes and failback nodes, configuring resource monitoring, starting, cleaning up or removing resources, and migrating resources manually.

Note: User Privileges

Sufficient privileges are necessary to manage a cluster. The crm command and its subcommands need to be run either as root user or as the CRM owner user (typically the user hacluster).

However, the user option allows you to run crm and its subcommands as a regular (unprivileged) user and to change its ID using sudo whenever necessary. For example, with the following command crm will use hacluster as the privileged user ID:

root # crm options user hacluster

Note that you need to set up /etc/sudoers so that sudo does not ask for a password.
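A minimal sketch of such an entry in /etc/sudoers, assuming the unprivileged user is tux (the user name is only an example; adjust the rule to your security policy):

    # allow tux to run commands as hacluster without a password (example only)
    tux ALL=(hacluster) NOPASSWD: ALL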

The crm command has several subcommands which manage resources, CIBs, nodes, resource agents, and others. It offers a thorough help system with embedded examples. All examples follow a naming convention described in Appendix B.

Tip: Interactive crm Prompt

By using crm without arguments (or with only one sublevel as argument), the crm shell enters the interactive mode. This mode is indicated by the following prompt:
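In its simplest form (host name omitted, as explained below), the prompt looks like this:

    crm(live)#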

For readability reasons, we omit the host name in the interactive crm prompts in our documentation. We only include the host name if you need to run the interactive shell on a specific node, like alice for example:

Help can be accessed in several ways:

  • To output the usage of crm and its command line options:

  • To give a list of all available commands:

  • To access other help sections, not just the command reference:

  • To view the extensive help text of the configure subcommand:

    root # crm configure help
  • To print the syntax, its usage, and examples of the group subcommand of configure:

    root # crm configure help group

    This is the same as:

    root # crm help configure group
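For the first three items above, the corresponding commands would likely be the following (assuming standard crmsh usage):

    root # crm --help
    root # crm help
    root # crm help topics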

Almost all output of the help subcommand (do not mix it up with the --help option) opens a text viewer. This text viewer allows you to scroll up or down and read the help text more comfortably. To leave the text viewer, press the Q key.

Tip: Use Tab Completion in Bash and Interactive Shell

The crmsh supports full tab completion in Bash directly, not only for the interactive shell. For example, typing crm help config →| will complete the word just like in the interactive shell.

The crm command itself can be used in the following ways:

  • Directly: Concatenate all subcommands to crm, press Enter and you see the output immediately. For example, enter crm help ra to get information about the ra subcommand (resource agents).

    It is possible to abbreviate subcommands as long as they are unique. For example, you can shorten status as st and crmsh will know what you meant.

    Another feature is to shorten parameters. Usually, you add parameters through the params keyword. You can leave out the params section if it is the first and only section. For example, this line:

    root # crm primitive ipaddr IPaddr2 params ip=192.168.0.55

    is equivalent to this line:

    root # crm primitive ipaddr IPaddr2 ip=192.168.0.55
  • As crm Shell Script: crm shell scripts contain subcommands of crm. For more information, see Section 8.1.4, "Using crmsh's Shell Scripts".

  • As crmsh Cluster Scripts: These are a collection of metadata, references to RPM packages, configuration files, and crmsh subcommands bundled under a single, yet descriptive name. They are managed through the crm script command.

    Do not confuse them with crmsh shell scripts: although both share some common objectives, the crm shell scripts only contain subcommands, whereas cluster scripts incorporate much more than a simple enumeration of commands. For more information, see Section 8.1.5, "Using crmsh's Cluster Scripts".

  • Interactive as Internal Shell: Type crm to enter the internal shell. The prompt changes to crm(live). With help you can get an overview of the available subcommands. As the internal shell has different levels of subcommands, you can "enter" one by typing this subcommand and pressing Enter.

    For example, if you type resource you enter the resource management level. Your prompt changes to crm(live)resource#. If you want to leave the internal shell, use the commands quit, bye, or exit. If you need to go one level back, use back, up, end, or cd.

    You can enter the level directly by typing crm and the respective subcommand(s) without any options and pressing Enter.

    The internal shell also supports tab completion for subcommands and resources. Type the beginning of a command, press →| and crm completes the respective object.

In addition to the previously explained methods, crmsh also supports synchronous command execution. Use the -w option to activate it. If you have started crm without -w, you can enable it later with the user preference wait set to yes (options wait yes). If this option is enabled, crm waits until the transition is finished. Whenever a transaction is started, dots are printed to indicate progress. Synchronous command execution is only applicable for commands like resource start.
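For example, a start request issued with -w blocks until the resource transition has finished (myIP is only a placeholder resource name):

    root # crm -w resource start myIP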

Note: Differentiate Between Management and Configuration Subcommands

The crm tool has management capability (the subcommands resource and node) and can be used for configuration (cib, configure).

The following subsections give you an overview of some important aspects of the crm tool.

8.1.3 Displaying Information about OCF Resource Agents #

As you need to deal with resource agents in your cluster configuration all the time, the crm tool contains the ra command. Use it to show information about resource agents and to manage them (for additional information, see also Section 6.3.2, "Supported Resource Agent Classes"):

root # crm ra
crm(live)ra#

The command classes lists all classes and providers:

crm(live)ra# classes
 lsb
 ocf / heartbeat linbit lvm2 ocfs2 pacemaker
 service
 stonith
 systemd

To get an overview of all available resource agents for a class (and provider) use the list command:

crm(live)ra# list ocf
AoEtarget           AudibleAlarm        CTDB                ClusterMon
Delay               Dummy               EvmsSCC             Evmsd
Filesystem          HealthCPU           HealthSMART         ICP
IPaddr              IPaddr2             IPsrcaddr           IPv6addr
LVM                 LinuxSCSI           MailTo              ManageRAID
ManageVE            Pure-FTPd           Raid1               Route
SAPDatabase         SAPInstance         SendArp             ServeRAID
...

An overview of a resource agent can be viewed with info:

crm(live)ra# info ocf:linbit:drbd
This resource agent manages a DRBD* resource
as a master/slave resource. DRBD is a shared-nothing replicated storage
device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name
    The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf
    Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

    start         timeout=240
    promote       timeout=90
    demote        timeout=90
    notify        timeout=90
    stop          timeout=100
    monitor_Slave_0 interval=20 timeout=20 start-delay=1m
    monitor_Master_0 interval=10 timeout=20 start-delay=1m

Leave the viewer by pressing Q.

Tip: Use crm Directly

In the former example we used the internal shell of the crm command. However, you do not necessarily need to use it. You get the same results if you add the respective subcommands to crm. For example, you can list all the OCF resource agents by entering crm ra list ocf in your shell.

The crmsh shell scripts provide a convenient way to enumerate crmsh subcommands into a file. This makes it easy to comment specific lines or to replay them later. Keep in mind that a crmsh shell script can contain only crmsh subcommands. Any other commands are not allowed.

Before you can use a crmsh shell script, create a file with specific commands. For example, the following file prints the status of the cluster and gives a list of all nodes:

Example 8.1: A Simple crmsh Shell Script #

# A small example file with some crm subcommands
status
node list

Any line starting with the hash symbol (#) is a comment and is ignored. If a line is too long, insert a backslash (\) at the end and continue in the next line. It is recommended to indent lines that belong to a certain subcommand to improve readability.

To use this script, use one of the following methods:

root # crm -f example.cli
root # crm < example.cli
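A slightly larger sketch illustrating comments, line continuation, and indentation (the IP resource shown is hypothetical and not part of Example 8.1):

    # check the cluster, then add a test IP address (example only)
    status
    configure primitive test-ip IPaddr2 \
        params ip=10.0.0.99
    configure show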

Collecting information from all cluster nodes and deploying any changes is a key cluster administration task. Instead of performing the same procedures manually on different nodes (which is error-prone), you can use the crmsh cluster scripts.

Do not confuse them with the crmsh shell scripts, which are explained in Section 8.1.4, "Using crmsh's Shell Scripts".

In contrast to crmsh shell scripts, cluster scripts perform additional tasks like:

  • Installing software that is required for a specific task.

  • Creating or modifying any configuration files.

  • Collecting information and reporting potential problems with the cluster.

  • Deploying the changes to all nodes.

crmsh cluster scripts do not replace other tools for managing clusters; they provide an integrated way to perform the above tasks across the cluster. Find detailed information at http://crmsh.github.io/scripts/.

To get a list of all available cluster scripts, run:
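Assuming the standard crm script interface used throughout this section, the command is likely:

    root # crm script list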

To view the components of a script, use the show command and the name of the cluster script, for example:

root # crm script show mailto
mailto (Basic)
MailTo

 This is a resource agent for MailTo. It sends email to a sysadmin
whenever a takeover occurs.

1. Notifies recipients by email in the event of resource takeover

  id (required)  (unique)
      Identifier for the cluster resource
  email (required)
      Email address
  subject
      Subject

The output of show contains a title, a short description, and a procedure. Each procedure is divided into a series of steps, performed in the given order.

Each step contains a list of required and optional parameters, along with a short description and its default value.

Each cluster script understands a set of common parameters. These parameters can be passed to any script:

Table 8.1: Common Parameters #

Parameter Argument Description
action INDEX If set, only execute a single action (index, as returned by verify)
dry_run BOOL If set, simulate execution only (default: no)
nodes LIST List of nodes to execute the script for
port NUMBER Port to connect to
statefile FILE When single-stepping, the state is saved in the given file
sudo BOOL If set, crm will prompt for a sudo password and use sudo where appropriate (default: no)
timeout NUMBER Execution timeout in seconds (default: 600)
user USER Run script as the given user
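A sketch of how such parameters combine with a script call (the health script is described later in this chapter; the parameter values are only examples):

    root # crm script run health dry_run=yes timeout=300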

Before running a cluster script, review the actions that it will perform and verify its parameters to avoid problems. A cluster script can potentially perform a series of actions and may fail for various reasons.

For example, the mailto resource agent requires a unique identifier and an e-mail address. To verify these parameters, run:

root # crm script verify mailto id=sysadmin email=tux@example.org
1. Ensure mail package is installed

        mailx

2. Configure cluster resources

        primitive sysadmin MailTo
                email="tux@example.org"
                op start timeout="10"
                op stop timeout="10"
                op monitor interval="10" timeout="10"

        clone c-sysadmin sysadmin

The verify command prints the steps and replaces any placeholders with your given parameters. If verify finds any problems, it reports them. If everything is OK, replace the verify command with run:

root # crm script run mailto id=sysadmin email=tux@example.org
INFO: MailTo
INFO: Nodes: alice, bob
OK: Ensure mail package is installed
OK: Configure cluster resources

Check whether your resource is integrated into your cluster with crm status:

root # crm status
[...]
 Clone Set: c-sysadmin [sysadmin]
     Started: [ alice bob ]

Configuration templates are ready-made cluster configurations for crmsh. Do not confuse them with the resource templates (as described in Section 8.4.3, "Creating Resource Templates"). Those are templates for the cluster and not for the crm shell.

Configuration templates require minimum effort to be tailored to the particular user's needs. Whenever a template creates a configuration, warning messages give hints which can be edited later for further customization.

The following procedure shows how to create a simple yet functional Apache configuration:

  1. Log in as root and start the crm interactive shell:

  2. Create a new configuration from a configuration template:

    1. Switch to the template subcommand:

    crm(live)configure# template
    2. List the available configuration templates:

    crm(live)configure template# list templates
    gfs2-base   filesystem  virtual-ip  apache   clvm     ocfs2    gfs2
    3. Decide which configuration template you need. As we need an Apache configuration, we select the apache template and name it g-intranet:

    crm(live)configure template# new g-intranet apache
    INFO: pulling in template apache
    INFO: pulling in template virtual-ip
  3. Define your parameters:

    1. List the configuration you have created:

    crm(live)configure template# list g-intranet
    2. Display the minimum required changes that need to be filled out by you:

    crm(live)configure template# show
    ERROR: 23: required parameter ip not set
    ERROR: 61: required parameter id not set
    ERROR: 65: required parameter configfile not set
    3. Invoke your preferred text editor and fill out all lines that have been displayed as errors in Step 3.b:

    crm(live)configure template# edit
  4. Show the configuration and check whether it is valid (bold text depends on the configuration you have entered in Step 3.c):

    crm(live)configure template# show
    primitive virtual-ip ocf:heartbeat:IPaddr \
        params ip="192.168.1.101"
    primitive apache apache \
        params configfile="/etc/apache2/httpd.conf"
    monitor apache 120s:60s
    group g-intranet \
        apache virtual-ip
  5. Apply the configuration:

    crm(live)configure template# apply
    crm(live)configure# cd ..
    crm(live)configure# show
  6. Submit your changes to the CIB:

    crm(live)configure# commit

It is possible to simplify the commands even more, if you know the details. The above procedure can be summarized with the following command on the shell:

root # crm configure template \
   new g-intranet apache params \
   configfile="/etc/apache2/httpd.conf" ip="192.168.1.101"

If you are inside your internal crm shell, use the following command:

crm(live)configure template# new intranet apache params \
   configfile="/etc/apache2/httpd.conf" ip="192.168.1.101"

However, the previous command only creates its configuration from the configuration template. It does not apply nor commit it to the CIB.

A shadow configuration is used to test different configuration scenarios. If you have created several shadow configurations, you can test them one by one to see the effects of your changes.

The usual process looks like this:

  1. Log in as root and start the crm interactive shell:

  2. Create a new shadow configuration:

    crm(live)configure# cib new myNewConfig
    INFO: myNewConfig shadow CIB created

    If you omit the name of the shadow CIB, a temporary name @tmp@ is created.

  3. If you want to copy the current live configuration into your shadow configuration, use the following command, otherwise skip this step:

    crm(myNewConfig)# cib reset myNewConfig

    The previous command makes it easier to modify any existing resources later.

  4. Make your changes as usual. After you have created the shadow configuration, all changes go there. To save all your changes, use the following command:

  5. If you need the live cluster configuration again, switch back with the following command:

    crm(myNewConfig)configure# cib use live
    crm(live)#

Before loading your configuration changes back into the cluster, it is recommended to review your changes with ptest. The ptest command can show a diagram of actions that will be induced by committing the changes. You need the graphviz package to display the diagrams. The following example is a transcript, adding a monitor operation:

root # crm configure
crm(live)configure# show fence-bob
primitive fence-bob stonith:apcsmart \
        params hostlist="bob"
crm(live)configure# monitor fence-bob 120m:60s
crm(live)configure# show changed
primitive fence-bob stonith:apcsmart \
        params hostlist="bob" \
        op monitor interval="120m" timeout="60s"
crm(live)configure# ptest
crm(live)configure# commit

To output a cluster diagram, use the command crm configure graph. It displays the current configuration in its current window, therefore requiring X11.

If you prefer Scalable Vector Graphics (SVG), use the following command:

root # crm configure graph dot config.svg svg

Corosync is the underlying messaging layer for most HA clusters. The corosync subcommand provides commands for editing and managing the Corosync configuration.

For example, to list the status of the cluster, use status:

root # crm corosync status
Printing ring status.
Local node ID 175704363
RING ID 0
        id      = 10.121.9.43
        status  = ring 0 active with no faults
Quorum information
------------------
Date:             Thu May  8 16:41:56 2014
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          175704363
Ring ID:          4032
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
 175704363          1 alice.example.com (local)
 175704619          1 bob.example.com

The diff command is very helpful: It compares the Corosync configuration on all nodes (if not stated otherwise) and prints the difference between them:

root # crm corosync diff
--- bob
+++ alice
@@ -46,2 +46,2 @@
-       expected_votes: 2
-       two_node: 1
+       expected_votes: 1
+       two_node: 0

For more details, see http://crmsh.nongnu.org/crm.8.html#cmdhelp_corosync.

Global cluster options control how the cluster behaves when confronted with certain situations. The predefined values can usually be kept. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup:

Procedure 8.1: Modifying Global Cluster Options With crm #

  1. Log in as root and start the crm tool:

  2. Use the following commands to set the options for two-node clusters only:

    crm(live)configure# property no-quorum-policy=stop
    crm(live)configure# property stonith-enabled=true

    Important: No Support Without STONITH

    A cluster without STONITH is not supported.

  3. Show your changes:

    crm(live)configure# show
    property $id="cib-bootstrap-options" \
       dc-version="1.1.1-530add2a3721a0ecccb24660a97dbfdaa3e68f51" \
       cluster-infrastructure="corosync" \
       expected-quorum-votes="2" \
       no-quorum-policy="stop" \
       stonith-enabled="true"
  4. Commit your changes and exit:

    crm(live)configure# commit
    crm(live)configure# exit

As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, e-mail servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times.

For an overview of resource types you can create, refer to Section 6.3.3, "Types of Resources".

Parts or all of the configuration can be loaded from a local file or a network URL. Three different methods can be defined:

replace

This option replaces the current configuration with the new source configuration.

update

This option tries to import the source configuration. It adds new items or updates existing items to the current configuration.

push

This option imports the content from the source into the current configuration (same as update). However, it removes objects that are not available in the new configuration.

To load the new configuration from the file mycluster-config.txt use the following syntax:

root # crm configure load push mycluster-config.txt

There are three types of RAs (Resource Agents) available with the cluster (for background information, see Section 6.3.2, "Supported Resource Agent Classes"). To add a new resource to the cluster, proceed as follows:

  1. Log in as root and start the crm tool:

  2. Configure a primitive IP address:

    crm(live)configure# primitive myIP IPaddr \
         params ip=127.0.0.99 op monitor interval=60s

    The previous command configures a "primitive" with the name myIP. You need to choose a class (here ocf), provider (heartbeat), and type (IPaddr). Furthermore, this primitive expects other parameters like the IP address. Change the address to your setup.

  3. Display and review the changes you have made:

  4. Commit your changes to take effect:

    crm(live)configure# commit

If you want to create several resources with similar configurations, a resource template simplifies the task. See also Section 6.5.3, "Resource Templates and Constraints" for some basic background information. Do not confuse them with the "normal" templates from Section 8.1.6, "Using Configuration Templates". Use the rsc_template command to get familiar with the syntax:

root # crm configure rsc_template
usage: rsc_template <name> [<class>:[<provider>:]]<type>
        [params <param>=<value> [<param>=<value>...]]
        [meta <attribute>=<value> [<attribute>=<value>...]]
        [utilization <attribute>=<value> [<attribute>=<value>...]]
        [operations id_spec
            [op op_type [<attribute>=<value>...] ...]]

For example, the following command creates a new resource template with the name BigVM derived from the ocf:heartbeat:Xen resource and some default values and operations:

crm(live)configure# rsc_template BigVM ocf:heartbeat:Xen \
   params allow_mem_management="true" \
   op monitor timeout=60s interval=15s \
   op stop timeout=10m \
   op start timeout=10m

Once you have defined the new resource template, you can use it in primitives or reference it in order, colocation, or rsc_ticket constraints. To reference the resource template, use the @ sign:

crm(live)configure# primitive MyVM1 @BigVM \
   params xmfile="/etc/xen/shared-vm/MyVM1" name="MyVM1"

The new primitive MyVM1 is going to inherit everything from the BigVM resource template. For example, the equivalent of the above two would be:

crm(live)configure# primitive MyVM1 Xen \
   params xmfile="/etc/xen/shared-vm/MyVM1" name="MyVM1" \
   params allow_mem_management="true" \
   op monitor timeout=60s interval=15s \
   op stop timeout=10m \
   op start timeout=10m

If you want to overwrite some options or operations, add them to your (primitive) definition. For example, the following new primitive MyVM2 doubles the timeout for monitor operations but leaves others untouched:

crm(live)configure# primitive MyVM2 @BigVM \
   params xmfile="/etc/xen/shared-vm/MyVM2" name="MyVM2" \
   op monitor timeout=120s interval=30s

A resource template may be referenced in constraints to stand for all primitives which are derived from that template. This helps to produce a more concise and clear cluster configuration. Resource template references are allowed in all constraints except location constraints. Colocation constraints may not contain more than one template reference.

From the crm perspective, a STONITH device is just another resource. To create a STONITH resource, proceed as follows:

  1. Log in as root and start the crm interactive shell:

  2. Get a list of all STONITH types with the following command:

    crm(live)# ra list stonith
    apcmaster                  apcmastersnmp              apcsmart
    baytech                    bladehpi                   cyclades
    drac3                      external/drac5             external/dracmc-telnet
    external/hetzner           external/hmchttp           external/ibmrsa
    external/ibmrsa-telnet     external/ipmi              external/ippower9258
    external/kdumpcheck        external/libvirt           external/nut
    external/rackpdu           external/riloe             external/sbd
    external/vcenter           external/vmware            external/xen0
    external/xen0-ha           fence_legacy               ibmhmc
    ipmilan                    meatware                   nw_rpc100s
    rcd_serial                 rps10                      suicide
    wti_mpc                    wti_nps
  3. Choose a STONITH type from the above list and view the list of possible options. Use the following command:

    crm(live)# ra info stonith:external/ipmi
    IPMI STONITH external device (stonith:external/ipmi)

    ipmitool based power management. Apparently, the power off
    method of ipmitool is intercepted by ACPI which then makes
    a regular shutdown. In case of a split brain on a two-node
    cluster it may happen that no node survives. For two-node clusters
    use only the reset method.

    Parameters (* denotes required, [] the default):

    hostname (string): Hostname
        The name of the host to be managed by this STONITH device.
    ...
  4. Create the STONITH resource with the stonith class, the type you have chosen in Step 3, and the respective parameters if needed, for example:

    crm(live)# configure
    crm(live)configure# primitive my-stonith stonith:external/ipmi \
        params hostname="alice" \
        ipaddr="192.168.1.221" \
        userid="admin" passwd="secret" \
        op monitor interval=60m timeout=120s

Having all the resources configured is only one part of the task. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. For example, try not to mount the file system on the slave node of DRBD (in fact, this would fail with DRBD). Define constraints to make this kind of information available to the cluster.

For more information about constraints, see Section 6.5, "Resource Constraints".

The location command defines on which nodes a resource may be run, may not be run, or is preferred to be run.

This type of constraint may be added multiple times for each resource. All location constraints are evaluated for a given resource. A simple example that expresses a preference to run the resource fs1 on the node with the name alice with a score of 100 would be the following:

crm(live)configure# location loc-fs1 fs1 100: alice

Another example is a location with ping:

crm(live)configure# primitive ping ping \
    params name=ping dampen=5s multiplier=100 host_list="r1 r2"
crm(live)configure# clone cl-ping ping meta interleave=true
crm(live)configure# location loc-node_pref internal_www \
    rule 50: #uname eq alice \
    rule ping: defined ping

The parameter host_list is a space-separated list of hosts to ping and count. Another use case for location constraints is grouping primitives as a resource set. This can be useful if several resources depend on, for example, a ping attribute for network connectivity. In former times, the -inf/ping rules needed to be duplicated several times in the configuration, making it unnecessarily complex.

The following example creates a resource set loc-alice, referencing the virtual IP addresses vip1 and vip2:

crm(live)configure# primitive vip1 IPaddr2 params ip=192.168.1.5
crm(live)configure# primitive vip2 IPaddr2 params ip=192.168.1.6
crm(live)configure# location loc-alice { vip1 vip2 } inf: alice

In some cases it is much more efficient and convenient to use resource patterns for your location command. A resource pattern is a regular expression between two slashes. For example, the above virtual IP addresses can all be matched with the following:

crm(live)configure# location loc-alice /vip.*/ inf: alice

The colocation command is used to define what resources should run on the same or on different hosts.

It is possible to set a score of either +inf or -inf, defining resources that must always or must never run on the same node. It is also possible to use non-infinite scores. In that case the colocation is called advisory and the cluster may decide not to follow it in favor of not stopping other resources if there is a conflict.

For example, to run the resources with the IDs filesystem_resource and nfs_group always on the same host, use the following constraint:

crm(live)configure# colocation nfs_on_filesystem inf: nfs_group filesystem_resource

For a master slave configuration, it is necessary to know if the current node is a master in addition to running the resource locally.
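A sketch of such a constraint, reusing the resource names from the example above and the Master role of a DRBD resource (the exact names depend on your configuration):

    crm(live)configure# colocation nfs_on_master inf: \
        nfs_group drbd_resource:Master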

8.4.5.3 Collocating Sets for Resources Without Dependency #

Sometimes it is useful to be able to place a group of resources on the same node (defining a colocation constraint), but without having hard dependencies between the resources.

Use the command weak-bond if you want to place resources on the same node, but without any action if one of them fails.

root # crm configure assist weak-bond RES1 RES2

The implementation of weak-bond creates a dummy resource and a colocation constraint with the given resources automatically.

The order command defines a sequence of actions.

Sometimes it is necessary to provide an order of resource actions or operations. For example, you cannot mount a file system before the device is available to a system. Ordering constraints can be used to start or stop a service right before or after a different resource meets a special condition, such as being started, stopped, or promoted to master.

Use the following command in the crm shell to configure an ordering constraint:

crm(live)configure# order nfs_after_filesystem mandatory: filesystem_resource nfs_group

8.4.5.5 Constraints for the Example Configuration #

The example used for this section would not work without additional constraints. It is essential that all resources run on the same machine as the master of the DRBD resource. The DRBD resource must be master before any other resource starts. Trying to mount the DRBD device when it is not the master simply fails. The following constraints must be fulfilled:

  • The file system must always be on the same node as the master of the DRBD resource.

    crm(live)configure# colocation filesystem_on_master inf: \
        filesystem_resource drbd_resource:Master
  • The NFS server and the IP address must be on the same node as the file system.

    crm(live)configure# colocation nfs_with_fs inf: \
       nfs_group filesystem_resource
  • The NFS server and the IP address start after the file system is mounted:

    crm(live)configure# order nfs_second mandatory: \
       filesystem_resource:start nfs_group
  • The file system must be mounted on a node after the DRBD resource is promoted to master on this node.

    crm(live)configure# order drbd_first inf: \
        drbd_resource:promote filesystem_resource:start

To determine a resource failover, use the meta attribute migration-threshold. In case the failcount exceeds migration-threshold on all nodes, the resource will remain stopped. For example:

crm(live)configure# location rsc1-alice rsc1 100: alice

Normally, rsc1 prefers to run on alice. If it fails there, migration-threshold is checked and compared to the failcount. If failcount >= migration-threshold, then the resource is migrated to the node with the next best preference.
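A sketch of how migration-threshold could be set as a meta attribute when defining the resource (rsc1 and its parameters are only placeholders):

    crm(live)configure# primitive rsc1 ocf:heartbeat:IPaddr2 \
        params ip=192.168.1.20 \
        meta migration-threshold=3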

Start failures set the failcount to inf, depending on the start-failure-is-fatal option. Stop failures cause fencing. If there is no STONITH defined, the resource will not migrate.

For an overview, refer to Section 6.5.4, "Failover Nodes".

8.4.7 Specifying Resource Failback Nodes (Resource Stickiness) #

A resource might fail back to its original node when that node is back online and in the cluster. To prevent a resource from failing back to the node that it was running on, or to specify a different node for the resource to fail back to, change its resource stickiness value. You can either specify resource stickiness when you are creating a resource or afterward.
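A sketch of setting a cluster-wide default stickiness with rsc_defaults (the value 100 is only an example; rsc_defaults also appears in the crm configure show output later in this chapter):

    crm(live)configure# rsc_defaults resource-stickiness=100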

For an overview, refer to Section 6.5.5, "Failback Nodes".

8.4.8 Configuring Placement of Resources Based on Load Impact #

Some resources may have specific capacity requirements such as a minimum amount of memory. Otherwise, they may fail to start completely or run with degraded performance.

To take this into account, the High Availability Extension allows you to specify the following parameters:

  1. The capacity a certain node provides.

  2. The capacity a certain resource requires.

  3. An overall strategy for placement of resources.

For detailed background information about the parameters and a configuration example, refer to Section 6.5.6, "Placing Resources Based on Their Load Impact".

To configure the resource's requirements and the capacity a node provides, use utilization attributes. You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs. In certain cases, some agents update the utilization themselves, for example the VirtualDomain.

In the following example, we assume that you already have a basic configuration of cluster nodes and resources. You now additionally want to configure the capacities a certain node provides and the capacity a certain resource requires.

Procedure 8.2: Adding or Modifying Utilization Attributes With crm #

  1. Log in as root and start the crm interactive shell:

  2. To specify the capacity a node provides, use the following command and replace the placeholder NODE_1 with the name of your node:

    crm(live)configure# node NODE_1 utilization memory=16384 cpu=8

    With these values, NODE_1 would be assumed to provide 16GB of memory and 8 CPU cores to resources.

  3. To specify the capacity a resource requires, use:

    crm(live)configure# primitive xen1 Xen ... \
         utilization memory=4096 cpu=4

    This would make the resource consume 4096 of those memory units from NODE_1, and 4 of the CPU units.

  4. Configure the placement strategy with the property command:

    crm(live)configure# property ...

    The following values are available:

    default (default value)

    Utilization values are not considered. Resources are allocated according to location scoring. If scores are equal, resources are evenly distributed across nodes.

    utilization

    Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. However, load-balancing is still done based on the number of resources allocated to a node.

    minimal

    Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. An attempt is made to concentrate the resources on as few nodes as possible (to achieve power savings on the remaining nodes).

    balanced

    Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. An attempt is made to distribute the resources evenly, thus optimizing resource performance.

    Note: Configuring Resource Priorities

    The available placement strategies are best-effort; they do not yet use complex heuristic solvers to always reach optimum allocation results. Ensure that resource priorities are properly set so that your most important resources are scheduled first.

  5. Commit your changes before leaving crmsh:

    crm(live)configure# commit

The following example demonstrates a three-node cluster of equal nodes, with four virtual machines:

crm(live)configure# node alice utilization memory="4000"
crm(live)configure# node bob utilization memory="4000"
crm(live)configure# node charlie utilization memory="4000"
crm(live)configure# primitive xenA Xen \
    utilization hv_memory="3500" meta priority="10" \
    params xmfile="/etc/xen/shared-vm/vm1"
crm(live)configure# primitive xenB Xen \
    utilization hv_memory="2000" meta priority="1" \
    params xmfile="/etc/xen/shared-vm/vm2"
crm(live)configure# primitive xenC Xen \
    utilization hv_memory="2000" meta priority="1" \
    params xmfile="/etc/xen/shared-vm/vm3"
crm(live)configure# primitive xenD Xen \
    utilization hv_memory="1000" meta priority="5" \
    params xmfile="/etc/xen/shared-vm/vm4"
crm(live)configure# property placement-strategy="minimal"

With all three nodes up, xenA will be placed onto a node first, followed by xenD. xenB and xenC would either be allocated together or one of them with xenD.

If one node failed, too little total memory would be available to host them all. xenA would be ensured to be allocated, as would xenD. However, only one of xenB or xenC could still be placed, and since their priority is equal, the result is not defined yet. To resolve this ambiguity, you would need to set a higher priority for either one.
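One way to do that after the fact is sketched below, assuming the meta subcommand of crm resource behaves as in current crmsh versions:

    root # crm resource meta xenB set priority 2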

To monitor a resource, there are two possibilities: either define a monitor operation with the op keyword or use the monitor command. The following example configures an Apache resource and monitors it every 60 seconds with the op keyword:

crm(live)configure# primitive apache apache \
  params ... \
  op monitor interval=60s timeout=30s

The same can be done with:

crm(live)configure# primitive apache apache \
   params ...
crm(live)configure# monitor apache 60s:30s

For an overview, refer to Section 6.4, "Resource Monitoring".

One of the most common elements of a cluster is a set of resources that needs to be located together, start sequentially, and stop in the reverse order. To simplify this configuration we support the concept of groups. The following example creates two primitives (an IP address and an e-mail resource):

  1. Run the crm command as system administrator. The prompt changes to crm(live).

  2. Configure the primitives:

    crm(live)# configure
    crm(live)configure# primitive Public-IP ocf:heartbeat:IPaddr \
       params ip=1.2.3.4 id=Public-IP
    crm(live)configure# primitive Email systemd:postfix \
       params id=Email
  3. Group the primitives with their relevant identifiers in the correct order:

    crm(live)configure# group g-mailsvc Public-IP Email

To change the order of a group member, use the modgroup command from the configure subcommand. Use the following commands to move the primitive Email before Public-IP (this is just to demonstrate the feature):

crm(live)configure# modgroup g-mailsvc add Email before Public-IP

To remove a resource from a group (for example, Email), use this command:

crm(live)configure# modgroup g-mailsvc remove Email

For an overview, refer to Section 6.3.5.1, "Groups".

Clones were initially conceived as a convenient way to start N instances of an IP resource and have them distributed throughout the cluster for load balancing. They have turned out to be useful for several other purposes, including integrating with DLM, the fencing subsystem, and OCFS2. You can clone any resource, provided the resource agent supports it.

Learn more about cloned resources in Section 6.3.5.2, "Clones".

To create an anonymous clone resource, first create a primitive resource and then refer to it with the clone command. Do the following:

  1. Log in as root and start the crm interactive shell:

  2. Configure the primitive, for example:

    crm(live)configure# primitive Apache apache
  3. Clone the primitive:

    crm(live)configure# clone cl-apache Apache

8.4.11.2 Creating Stateful/Multi-State Clone Resources #

Multi-state resources are a specialization of clones. This type allows the instances to be in one of two operating modes, be it active/passive, primary/secondary, or master/slave.

To create a stateful clone resource, first create a primitive resource and then the multi-state resource. The multi-state resource must support at least promote and demote operations.

  1. Log in as root and start the crm interactive shell:

  2. Configure the primitive. Change the intervals if needed:

    crm(live)configure# primitive my-rsc ocf:myCorp:myAppl \
        op monitor interval=60 \
        op monitor interval=61 role=Master
  3. Create the multi-state resource:

    crm(live)configure# ms ms-rsc my-rsc

Apart from the possibility to configure your cluster resources, the crm tool also allows you to manage existing resources. The following subsections give you an overview.

When administering a cluster the command crm configure show lists the current CIB objects like cluster configuration, global options, primitives, and others:

root # crm configure show
node 178326192: alice
node 178326448: bob
primitive admin_addr IPaddr2 \
        params ip=192.168.2.1 \
        op monitor interval=10 timeout=20
primitive stonith-sbd stonith:external/sbd \
        params pcmk_delay_max=30
property cib-bootstrap-options: \
        have-watchdog=true \
        dc-version=1.1.15-17.1-e174ec8 \
        cluster-infrastructure=corosync \
        cluster-name=hacluster \
        stonith-enabled=true \
        placement-strategy=balanced \
        standby-mode=true
rsc_defaults rsc-options: \
        resource-stickiness=1 \
        migration-threshold=3
op_defaults op-options: \
        timeout=600 \
        record-pending=true

In case you have lots of resources, the output of show is too verbose. To restrict the output, use the name of the resource. For example, to list the properties of the primitive admin_addr only, append the resource name to show:

root # crm configure show admin_addr
primitive admin_addr IPaddr2 \
        params ip=192.168.2.1 \
        op monitor interval=10 timeout=20

However, in some cases, you want to limit the output to specific resources even more. This can be achieved with filters. Filters limit the output to specific components. For example, to list the nodes only, use type:node:

root # crm configure show type:node
node 178326192: alice
node 178326448: bob

In case you are also interested in primitives, use the or operator:

root # crm configure show type:node or type:primitive
node 178326192: alice
node 178326448: bob
primitive admin_addr IPaddr2 \
        params ip=192.168.2.1 \
        op monitor interval=10 timeout=20
primitive stonith-sbd stonith:external/sbd \
        params pcmk_delay_max=30

Furthermore, to search for an object that starts with a certain string, use this notation:

root # crm configure show type:primitive and 'admin*'
primitive admin_addr IPaddr2 \
        params ip=192.168.2.1 \
        op monitor interval=10 timeout=20

To list all available types, enter crm configure show type: and press the →| key. The Bash completion will give you a list of all types.

To start a new cluster resource you need the respective identifier. Proceed as follows:

  1. Log in as root and start the crm interactive shell:

  2. Switch to the resource level:

  3. Start the resource with start and press the →| key to show all known resources:

    crm(live)resource# start ID

A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource when the number of failures has reached the migration threshold.
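A way to inspect the current failcount of a resource on a node is sketched below (dlm and alice are example names; the failcount subcommand is assumed to take this form):

    root # crm resource failcount dlm show alice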

  1. Open a shell and log in as user root.

  2. Get a list of all your resources:

    root # crm resource list
    ...
    Resource Group: dlm-clvm:1
             dlm:1  (ocf:pacemaker:controld) Started
             clvm:1 (ocf:heartbeat:clvm) Started
  3. To clean up the resource dlm, for example:

    root # crm resource cleanup dlm

Proceed as follows to remove a cluster resource:

  1. Log in as root and start the crm interactive shell:

  2. Run the following command to get a list of your resources:

    crm(live)# resource status

    For example, the output can look like this (where myIP is the relevant identifier of your resource):

    myIP    (ocf:IPaddr:heartbeat) ...
  3. Delete the resource with the relevant identifier (which implies a commit as well):

    crm(live)# configure delete YOUR_ID
  4. Commit the changes:

    crm(live)# configure commit

Although resources are configured to automatically fail over (or migrate) to other nodes of the cluster if a hardware or software failure occurs, you can also manually move a resource to another node using either Hawk2 or the command line.

Use the migrate command for this task. For example, to migrate the resource ipaddress1 to a cluster node named bob, use these commands:

root # crm resource
crm(live)resource# migrate ipaddress1 bob

Tags are a way to refer to multiple resources at once, without creating any colocation or ordering relationship between them. This can be useful for grouping conceptually related resources. For example, if you have several resources related to a database, create a tag called databases and add all resources related to the database to this tag:

root # crm configure tag databases: db1 db2 db3

This allows you to start them all with a single command:

root # crm resource start databases

Similarly, you can stop them all too:

root # crm resource stop databases

The "health" status of a cluster or node tin can be displayed with and then called scripts . A script can perform different tasks—they are not targeted to health. However, for this subsection, we focus on how to get the health status.

To get all the details about the health command, use describe:

root # crm script describe health

It shows a description and a list of all parameters and their default values. To execute a script, use run:

root # crm script run health

If you prefer to run only one step from the suite, the describe command lists all available steps in the Steps category.

For example, the following command executes the first step of the health command. The output is stored in the health.json file for further investigation:

root # crm script run health statefile='health.json'

It is also possible to run the above commands with crm cluster health.

For additional information regarding scripts, see http://crmsh.github.io/scripts/.

In case your cluster configuration contains sensitive information, such as passwords, it should be stored in local files. That way, these parameters will never be logged or leaked in support reports.

Before using secret, better run the show command first to get an overview of all your resources:

root # crm configure show
primitive mydb mysql \
   params replication_user=admin ...

If you want to set a password for the above mydb resource, use the following commands:

root # crm resource secret mydb set passwd linux
INFO: syncing /var/lib/heartbeat/lrm/secrets/mydb/passwd to [your node list]

You can get the saved password back with:

root # crm resource secret mydb show passwd
linux

Note that the parameters need to be synchronized between nodes; the crm resource secret command will take care of that. We highly recommend to only use this command to manage secret parameters.

Investigating the cluster history is a complex task. To simplify it, crmsh contains the history command with its subcommands. It is assumed SSH is configured correctly.

Each cluster moves states, migrates resources, or starts important processes. All these actions can be retrieved by subcommands of history.
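For example, the events related to a single resource or node can likely be shown like this (rsc1 and alice are placeholders; the resource and node subcommands of history are assumed here):

    crm(live)history# resource rsc1
    crm(live)history# node alice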

By default, all history commands look at the events of the last hour. To change this time frame, use the limit subcommand. The syntax is:

root # crm history
crm(live)history# limit FROM_TIME [TO_TIME]

Some valid examples include:

limit 4:00pm, limit 16:00

Both commands mean the same, today at 4pm.

limit 2012/01/12 6pm

January 12th 2012 at 6pm

limit "Sun 5 20:46"

In the current year and the current month, on Sunday the 5th at 8:46pm

Find more examples and how to create time frames at http://labix.org/python-dateutil.

The info subcommand shows all the parameters which are covered by the crm report:

crm(live)history# info
Source: live
Period: 2012-01-12 14:10:56 - end
Nodes: alice
Groups:
Resources:

To limit crm report to certain parameters, view the available options with the subcommand help.

To narrow down the level of detail, use the subcommand detail with a level:

crm(live)history# detail 1

The higher the number, the more detailed your report will be. Default is 0 (zero).

After you have set the above parameters, use log to show the log messages.

To display the last transition, use the following command:

crm(live)history# transition -1
INFO: fetching new logs, please wait ...

This command fetches the logs and runs dotty (from the graphviz package) to show the transition graph. The shell opens the log file, which you can browse with the ↑ and ↓ cursor keys.

If you do not want to open the transition graph, use the nograph option:

crm(live)history# transition -1 nograph
  • The crm man page.

  • Visit the upstream project documentation at http://crmsh.github.io/documentation.

  • See Highly Available NFS Storage with DRBD and Pacemaker for an exhaustive example.

Source: https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-manual-config.html
