Chapter 7. Configuring and Managing Cluster Resources (Command Line)
Contents

7.1. crm Shell—Overview
7.2. Configuring Global Cluster Options
7.3. Configuring Cluster Resources
7.4. Managing Cluster Resources
7.5. Setting Passwords Independent of cib.xml
7.6. Retrieving History Information
7.7. For More Information
Abstract
To configure and manage cluster resources, either use the graphical user interface (the Pacemaker GUI) or the crm command line utility. For the GUI approach, refer to Chapter 5, Configuring and Managing Cluster Resources (GUI).
This chapter introduces crm, the command line tool. It gives an overview of the tool, explains how to use templates, and covers configuring and managing cluster resources: creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover and failback nodes, configuring resource monitoring, starting, cleaning up, or removing resources, and migrating resources manually.
User Privileges

Sufficient privileges are necessary to manage a cluster. The crm command and its subcommands have to be run either as the root user or as the CRM owner user (typically the user hacluster). However, the user option allows you to run crm and its subcommands as a regular (unprivileged) user and to change its ID using sudo whenever necessary. For example, with the following command crm will use hacluster as the privileged user ID:

```
crm options user hacluster
```

Note that you need to set up /etc/sudoers so that sudo does not ask for a password.
7.1. crm Shell—Overview
After installation you usually need the crm command only. This command has several subcommands which manage resources, CIBs, nodes, resource agents, and others. Run crm help to get an overview of all available commands. crm offers a thorough help system with embedded examples.
The crm command can be used in the following ways:
Directly. Concatenate all subcommands to crm, press Enter, and you see the output immediately. For example, enter crm help ra to get information about the ra subcommand (resource agents).
As crm Shell Script. Use crm and its commands in a script. This can be done in two ways:
```
crm -f script.cli
crm < script.cli
```
The script can contain any command from crm. For example:
```
# A small example
status
node list
```
Any line starting with the hash symbol (#) is a comment and is ignored. If a line is too long, insert a backslash (\) at the end and continue in the next line.
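As a sketch, a script combining comments and line continuation could look like this (the resource name and IP address are arbitrary examples, not part of the original procedure):

```
# configure a virtual IP address, then show the result
configure primitive myIP ocf:heartbeat:IPaddr \
    params ip=192.168.1.100
configure show
```

Saved as script.cli, it can be executed with crm -f script.cli as shown above.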
Interactive as Internal Shell. Type crm to enter the internal shell. The prompt changes to crm(live)#. With help you can get an overview of the available subcommands. As the internal shell has different levels of subcommands, you can “enter” one by typing this subcommand and pressing Enter.
For example, if you type resource you enter the resource management level. Your prompt changes to crm(live)resource#. If you want to leave the internal shell, use the commands quit, bye, or exit. If you need to go one level back, use up, end, or cd.
You can enter the level directly by typing crm and the respective subcommand(s) without any options and hit Enter.
The internal shell also supports tab completion for subcommands and resources. Type the beginning of a command, press the Tab key (→|), and crm completes the respective object.
In addition to the previously explained methods, the crm shell also supports synchronous command execution. Use the -w option to activate it. If you have started crm without -w, you can enable it later with the user preference wait set to yes (options wait yes). If this option is enabled, crm waits until the transition is finished. Whenever a transition is started, dots are printed to indicate progress. Synchronous command execution is only applicable to commands like resource start.
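For example, assuming a resource named myIP already exists (the name is an assumption), the following call starts it and waits until the transition has finished:

```
crm -w resource start myIP
```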
Differentiate Between Management and Configuration Subcommands

The crm tool has management capability (the subcommands resource and node) and can be used for configuration (cib, configure).
7.1.1. Displaying Information about OCF Resource Agents
As you have to deal with resource agents in your cluster configuration all the time, the crm tool contains the ra command to get information about resource agents and to manage them (for additional information, see also Section 4.2.2, “Supported Resource Agent Classes”):
```
# crm ra
crm(live)ra#
```
The classes command gives you a list of all classes and providers:
```
crm(live)ra# classes
heartbeat
lsb
ocf / heartbeat linbit lvm2 ocfs2 pacemaker
stonith
```
To get an overview about all available resource agents for a class (and provider) use the list command:
```
crm(live)ra# list ocf
AoEtarget       AudibleAlarm    CTDB            ClusterMon
Delay           Dummy           EvmsSCC         Evmsd
Filesystem      HealthCPU       HealthSMART     ICP
IPaddr          IPaddr2         IPsrcaddr       IPv6addr
LVM             LinuxSCSI       MailTo          ManageRAID
ManageVE        Pure-FTPd       Raid1           Route
SAPDatabase     SAPInstance     SendArp         ServeRAID
...
```
An overview of a resource agent can be viewed with info:
```
crm(live)ra# info ocf:linbit:drbd
This resource agent manages a DRBD* resource
as a master/slave resource. DRBD is a shared-nothing replicated storage
device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name
    The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf
    Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

    start            timeout=240
    promote          timeout=90
    demote           timeout=90
    notify           timeout=90
    stop             timeout=100
    monitor_Slave_0  interval=20 timeout=20 start-delay=1m
    monitor_Master_0 interval=10 timeout=20 start-delay=1m
```
Leave the viewer by pressing Q. Find a configuration example at Appendix A, Example
of Setting Up a Simple Testing Resource.
Use crm Directly

In the former example we used the internal shell of the crm command. However, you do not necessarily have to use it. You get the same results if you add the respective subcommands to crm. For example, you can list all the OCF resource agents by entering crm ra list ocf in your shell.
7.1.2. Using Configuration Templates
Configuration templates are ready-made cluster configurations for the crm shell. Do not confuse them with the resource templates (as described in Section 7.3.2, “Creating Resource Templates”). Resource templates are templates for the cluster configuration, not for the crm shell.
Configuration templates require minimum effort to be tailored to the particular user's needs. Whenever a template creates a configuration, warning messages give hints which can be edited later for further customization.
The following procedure shows how to create a simple yet functional Apache configuration:
Log in as root and start the crm tool:

```
# crm configure
```
Create a new configuration from a configuration template:
Switch to the template subcommand:
```
crm(live)configure# template
```
List the available configuration templates:
```
crm(live)configure template# list templates
gfs2-base   filesystem  virtual-ip  apache   clvm     ocfs2    gfs2
```
Decide which configuration template you need. As we need an Apache configuration, we choose the apache template:
```
crm(live)configure template# new intranet apache
INFO: pulling in template apache
INFO: pulling in template virtual-ip
```
Define your parameters:
List the just created configuration:
```
crm(live)configure template# list
intranet
```
Display the minimum of required changes which have to be filled out by you:
```
crm(live)configure template# show
ERROR: 23: required parameter ip not set
ERROR: 61: required parameter id not set
ERROR: 65: required parameter configfile not set
```
Invoke your preferred text editor and fill out all lines that have been displayed as errors in Step 3.b:
```
crm(live)configure template# edit
```
Show the configuration and check whether it is valid (bold text depends on the configuration you have entered in Step 3.c):
```
crm(live)configure template# show
primitive virtual-ip ocf:heartbeat:IPaddr \
    params ip="192.168.1.101"
primitive apache ocf:heartbeat:apache \
    params configfile="/etc/apache2/httpd.conf"
monitor apache 120s:60s
group intranet \
    apache virtual-ip
```
Apply the configuration:
```
crm(live)configure template# apply
crm(live)configure# cd ..
crm(live)configure# show
```
Submit your changes to the CIB:
```
crm(live)configure# commit
```
It is possible to simplify the commands even more if you know the details. The above procedure can be summarized with the following command on the shell:
```
crm configure template \
    new intranet apache params \
    configfile="/etc/apache2/httpd.conf" ip="192.168.1.101"
```
If you are inside your internal crm shell, use the following command:
```
crm(live)configure template# new intranet apache params \
    configfile="/etc/apache2/httpd.conf" ip="192.168.1.101"
```
However, the previous command only creates its configuration from the configuration template. It does not apply or commit it to the CIB.
7.1.3. Testing with Shadow Configuration
A shadow configuration is used to test different configuration scenarios. If you have created several shadow configurations, you can test them one by one to see the effects of your changes. The usual process looks like this:
Log in as root and start the crm tool:

```
# crm configure
```
Create a new shadow configuration:
```
crm(live)configure# cib new myNewConfig
INFO: myNewConfig shadow CIB created
```
If you want to copy the current live configuration into your shadow configuration, use the following command, otherwise skip this step:
```
crm(myNewConfig)# cib reset myNewConfig
```
The previous command makes it easier to modify any existing resources later.
Make your changes as usual. After you have created the shadow configuration, all changes go there. To save all your changes, use the following command:
```
crm(myNewConfig)# commit
```
If you need the live cluster configuration again, switch back with the following command:
```
crm(myNewConfig)configure# cib use live
crm(live)#
```
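Before switching back, you can review how the shadow configuration differs from the live CIB. Assuming your version of the crm shell provides the cib diff subcommand (check help cib if unsure), this could look like:

```
crm(myNewConfig)configure# cib diff
```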
7.1.4. Debugging Your Configuration Changes
Before loading your configuration changes back into the cluster, it is recommended to review your changes with ptest. The ptest command can show a diagram of actions that will be induced by committing the changes. You need the graphviz package to display the diagrams. The following example is a transcript, adding a monitor operation:
```
# crm configure
crm(live)configure# show fence-node2
primitive fence-node2 stonith:apcsmart \
    params hostlist="node2"
crm(live)configure# monitor fence-node2 120m:60s
crm(live)configure# show changed
primitive fence-node2 stonith:apcsmart \
    params hostlist="node2" \
    op monitor interval="120m" timeout="60s"
crm(live)configure# ptest
crm(live)configure# commit
```
7.2. Configuring Global Cluster Options
Global cluster options control how the cluster behaves when confronted with certain situations. The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup:
Procedure 7.1. Modifying Global Cluster Options With crm
Log in as root and start the crm tool:

```
# crm configure
```
Use the following commands to set the options for two-node clusters only:
```
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property stonith-enabled=false
```
Show your changes:
```
crm(live)configure# show
property $id="cib-bootstrap-options" \
    dc-version="1.1.1-530add2a3721a0ecccb24660a97dbfdaa3e68f51" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    no-quorum-policy="ignore" \
    stonith-enabled="false"
```
Commit your changes and exit:
```
crm(live)configure# commit
crm(live)configure# exit
```
7.3. Configuring Cluster Resources
As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, e-mail servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times.
For an overview of resource types you can create, refer to Section 4.2.3,
“Types of Resources”.
7.3.1. Creating Cluster Resources
There are three types of RAs (Resource Agents) available with the cluster (for background information, see Section 4.2.2, “Supported Resource Agent Classes”). To create a cluster resource, use the crm tool. To add a new resource to the cluster, proceed as follows:
Log in as root and start the crm tool:

```
# crm configure
```
Configure a primitive IP address:
```
crm(live)configure# primitive myIP ocf:heartbeat:IPaddr \
    params ip=127.0.0.99 op monitor interval=60s
```
The previous command configures a “primitive” with the name myIP. You need to choose a class (here ocf), a provider (heartbeat), and a type (IPaddr). Furthermore, this primitive expects other parameters, like the IP address. Change the address to match your setup.
Display and review the changes you have made:
```
crm(live)configure# show
```
Commit your changes to take effect:
```
crm(live)configure# commit
```
7.3.2. Creating Resource Templates
If you want to create several resources with similar configurations, a resource template simplifies the task. See also Section 4.4.3, “Resource Templates and Constraints” for some basic background information. Do not confuse them with the “normal” templates from Section 7.1.2, “Using Configuration Templates”. Use the rsc_template command to get familiar with the syntax:
```
# crm configure rsc_template
usage: rsc_template <name> [<class>:[<provider>:]]<type>
        [params <param>=<value> [<param>=<value>...]]
        [meta <attribute>=<value> [<attribute>=<value>...]]
        [utilization <attribute>=<value> [<attribute>=<value>...]]
        [operations id_spec
            [op op_type [<attribute>=<value>...] ...]]
```
For example, the following command creates a new resource template with the name BigVM, derived from the ocf:heartbeat:Xen resource, with some default values and operations:
```
crm(live)configure# rsc_template BigVM ocf:heartbeat:Xen \
    params allow_mem_management="true" \
    op monitor timeout=60s interval=15s \
    op stop timeout=10m \
    op start timeout=10m
```
Once you have defined the new resource template, you can use it in primitives or reference it in order, colocation, or rsc_ticket constraints. To reference the resource template, use the @ sign:
```
crm(live)configure# primitive MyVM1 @BigVM \
    params xmfile="/etc/xen/shared-vm/MyVM1" name="MyVM1"
```
The new primitive MyVM1 is going to inherit everything from the BigVM resource template. For example, the equivalent of the above two definitions would be:
```
crm(live)configure# primitive MyVM1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/shared-vm/MyVM1" name="MyVM1" \
    params allow_mem_management="true" \
    op monitor timeout=60s interval=15s \
    op stop timeout=10m \
    op start timeout=10m
```
If you want to overwrite some options or operations, add them to your (primitive) definition. For instance, the following new primitive MyVM2 doubles the timeout for monitor operations but leaves others untouched:
```
crm(live)configure# primitive MyVM2 @BigVM \
    params xmfile="/etc/xen/shared-vm/MyVM2" name="MyVM2" \
    op monitor timeout=120s interval=30s
```
A resource template may be referenced in constraints to stand for all primitives which are derived from that template. This helps to produce a more concise and clear cluster configuration. Resource template references are allowed in all constraints except
location constraints. Colocation constraints may not contain more than one template reference.
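As a sketch, an ordering constraint can reference the template so that it applies to every primitive derived from it; here the constraint name and the resource storage-group are assumptions, not part of the original example:

```
crm(live)configure# order storage-before-vms mandatory: storage-group BigVM
```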
7.3.3. Creating a STONITH Resource

From the crm perspective, a STONITH device is just another resource. To create a STONITH resource, proceed as follows:

Log in as root and start the crm tool:

```
# crm
```
Get a list of all STONITH types with the following command:
```
crm(live)# ra list stonith
apcmaster                apcmastersnmp            apcsmart
baytech                  bladehpi                 cyclades
drac3                    external/drac5           external/dracmc-telnet
external/hetzner         external/hmchttp         external/ibmrsa
external/ibmrsa-telnet   external/ipmi            external/ippower9258
external/kdumpcheck      external/libvirt         external/nut
external/rackpdu         external/riloe           external/sbd
external/vcenter         external/vmware          external/xen0
external/xen0-ha         fence_legacy             ibmhmc
ipmilan                  meatware                 nw_rpc100s
rcd_serial               rps10                    suicide
wti_mpc                  wti_nps
```
Choose a STONITH type from the above list and view the list of possible options. Use the following command:
```
crm(live)# ra info stonith:external/ipmi
IPMI STONITH external device (stonith:external/ipmi)

ipmitool based power management. Apparently, the power off
method of ipmitool is intercepted by ACPI which then makes
a regular shutdown. In case of a split brain on a two-node
cluster it may happen that no node survives. For two-node
clusters use only the reset method.

Parameters (* denotes required, [] the default):

hostname (string): Hostname
    The name of the host to be managed by this STONITH device.
...
```
Create the STONITH resource with the stonith class, the type you have chosen in Step 3, and the respective parameters if needed, for example:
```
crm(live)# configure
crm(live)configure# primitive my-stonith stonith:external/ipmi \
    params hostname="node1" ipaddr="192.168.1.221" \
    userid="admin" passwd="secret" \
    op monitor interval=60m timeout=120s
```
7.3.4. Configuring Resource Constraints
Having all the resources configured is only one part of the job. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. For example, try not to mount the file system on the slave node of DRBD (in fact, this would fail with DRBD). Define constraints to make this kind of information available to the cluster.
For more information about constraints, see Section 4.4,
“Resource Constraints”.
7.3.4.1. Locational Constraints
This type of constraint may be added multiple times for each resource. All location constraints are evaluated for a given resource. A simple example that expresses a preference of 100 to run the resource fs1 on the node with the name earth would be the following:
```
crm(live)configure# location fs1-loc fs1 100: earth
```
Another example is a location with pingd:
```
crm(live)configure# primitive pingd pingd \
    params name=pingd dampen=5s multiplier=100 host_list="r1 r2"
crm(live)configure# location node_pref internal_www \
    rule 50: #uname eq node1 \
    rule pingd: defined pingd
```
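Scores can also be negative. As a sketch (the node name mars and the constraint name are assumptions), the following constraint would prevent fs1 from ever running on that node:

```
crm(live)configure# location fs1-never-mars fs1 -inf: mars
```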
7.3.4.2. Colocational Constraints
The colocation command is used to define which resources should run on the same or on different hosts. A score of either +inf or -inf defines resources that must always or must never run on the same node. It is also possible to use non-infinite scores. In that case the colocation is called advisory and the cluster may decide not to follow it in favor of not stopping other resources if there is a conflict.
For example, to always run the resources with the IDs filesystem_resource and nfs_group on the same host, use the following constraint:
```
crm(live)configure# colocation nfs_on_filesystem inf: nfs_group filesystem_resource
```
For a master/slave configuration, it is necessary to know whether the current node is a master, in addition to running the resource locally.
7.3.4.3. Ordering Constraints
Sometimes it is necessary to provide an order of resource actions or operations. For example, you cannot mount a file system before the device is available to a system. Ordering constraints can be used to start or stop a service right before or after a different resource meets a special condition, such as being started, stopped, or promoted to master. Use the following commands in the crm shell to configure an ordering constraint:
```
crm(live)configure# order nfs_after_filesystem mandatory: filesystem_resource nfs_group
```
7.3.4.4. Constraints for the Example Configuration
The example used for this chapter would not work without additional constraints. It is essential that all resources run on the same machine as the master of the DRBD resource. The DRBD resource must be master before any other resource starts. Trying to mount the DRBD device when it is not the master simply fails. The following constraints must be fulfilled:
The file system must always be on the same node as the master of the DRBD resource.
```
crm(live)configure# colocation filesystem_on_master inf: \
    filesystem_resource drbd_resource:Master
```
The NFS server as well as the IP address must be on the same node as the file system.
```
crm(live)configure# colocation nfs_with_fs inf: \
    nfs_group filesystem_resource
```
The NFS server as well as the IP address start after the file system is mounted:
```
crm(live)configure# order nfs_second mandatory: \
    filesystem_resource:start nfs_group
```
The file system must be mounted on a node after the DRBD resource is promoted to master on this node.
```
crm(live)configure# order drbd_first inf: \
    drbd_resource:promote filesystem_resource:start
```
7.3.5. Specifying Resource Failover Nodes
To determine a resource failover, use the meta attribute migration-threshold. If the failcount exceeds migration-threshold on all nodes, the resource will remain stopped. For example:

```
crm(live)configure# location r1-node1 r1 100: node1
```
Normally, r1 prefers to run on node1. If it fails there, migration-threshold is checked and compared to the failcount. If failcount >= migration-threshold then it is migrated to the node with the next best preference.
Start failures set the failcount to inf, depending on the start-failure-is-fatal option. Stop failures cause fencing. If there is no STONITH defined, the resource will not migrate at all.
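As a sketch (the resource and the threshold values are arbitrary assumptions), migration-threshold is set as a meta attribute, optionally combined with failure-timeout so that the failcount expires again after a while:

```
crm(live)configure# primitive r1 ocf:heartbeat:IPaddr \
    params ip=127.0.0.99 \
    meta migration-threshold=3 failure-timeout=120s
```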
For an overview, refer to Section 4.4.4,
“Failover Nodes”.
7.3.6. Specifying Resource Failback Nodes (Resource Stickiness)
A resource might fail back to its original node when that node is back online and in the cluster. If you want to prevent a resource from failing back to the node it was running on prior to failover, or if you want to specify a different node for the resource to fail back to, you must change its resource stickiness value. You can either specify resource stickiness when you are creating a resource, or afterwards.
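As a sketch (the resource name and the value 100 are assumptions), stickiness can be set per resource as a meta attribute, or cluster-wide as a default with rsc_defaults:

```
crm(live)configure# primitive myIP ocf:heartbeat:IPaddr \
    params ip=127.0.0.99 \
    meta resource-stickiness=100
crm(live)configure# rsc_defaults resource-stickiness=100
```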
For an overview, refer to Section 4.4.5,
“Failback Nodes”.
7.3.7. Configuring Placement of Resources Based on Load Impact
Some resources may have specific capacity requirements, such as a minimum amount of memory. Otherwise, they may fail to start completely or run with degraded performance. To take this into account, the High Availability Extension allows you to specify the following parameters:
The capacity a certain node provides.
The capacity a certain resource requires.
An overall strategy for placement of resources.
For detailed background information about the parameters and a configuration example, refer to Section 4.4.6,
“Placing Resources Based on Their Load Impact”.
To configure the resource's requirements and the capacity a node provides, use utilization attributes as described in Procedure 5.10,
“Adding Or Modifying Utilization Attributes”. You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs.
In the following example, we assume that you already have a basic configuration of cluster nodes and resources and now additionally want to configure the capacities a certain node provides and the capacity a certain resource requires.
Procedure 7.2. Adding Or Modifying Utilization Attributes With crm
Log in as root and start the crm tool:

```
# crm configure
```
To specify the capacity a node provides, use the following command and replace the placeholder NODE_1 with the name of your node:

```
crm(live)configure# node NODE_1 utilization memory=16384 cpu=8
```
With these values, NODE_1 would be assumed to provide 16GB of memory and 8 CPU cores to resources.
To specify the capacity a resource requires, use:
```
crm(live)configure# primitive xen1 ocf:heartbeat:Xen ... \
    utilization memory=4096 cpu=4
```
This would make the resource consume 4096 of the memory units and 4 of the CPU units of the node it runs on.
Configure the placement strategy with the property command:
```
crm(live)configure# property ...
```
Four values are available for the placement strategy:

property placement-strategy=default
    Utilization values are not taken into account at all, per default. Resources are allocated according to location scoring. If scores are equal, resources are evenly distributed across nodes.

property placement-strategy=utilization
    Utilization values are taken into account when deciding whether a node is considered eligible, that is, whether it has sufficient free capacity to satisfy the resource's requirements. However, load-balancing is still done based on the number of resources allocated to a node.

property placement-strategy=minimal
    Utilization values are taken into account when deciding whether a node is eligible to serve a resource; an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes.

property placement-strategy=balanced
    Utilization values are taken into account when deciding whether a node is eligible to serve a resource; an attempt is made to spread the resources evenly, optimizing resource performance.
The placement strategies are best-effort and do not yet use complex heuristic solvers to always reach an optimum allocation result. Ensure that resource priorities are properly set so that your most important resources are scheduled first.
Commit your changes before leaving the crm shell:
```
crm(live)configure# commit
```
The following example demonstrates a three-node cluster of equal nodes, with four virtual machines:
```
crm(live)configure# node node1 utilization memory="4000"
crm(live)configure# node node2 utilization memory="4000"
crm(live)configure# node node3 utilization memory="4000"
crm(live)configure# primitive xenA ocf:heartbeat:Xen \
    utilization memory="3500" meta priority="10"
crm(live)configure# primitive xenB ocf:heartbeat:Xen \
    utilization memory="2000" meta priority="1"
crm(live)configure# primitive xenC ocf:heartbeat:Xen \
    utilization memory="2000" meta priority="1"
crm(live)configure# primitive xenD ocf:heartbeat:Xen \
    utilization memory="1000" meta priority="5"
crm(live)configure# property placement-strategy="minimal"
```
With all three nodes up, xenA will be placed onto a node first, followed by xenD. xenB and xenC would either be allocated together or one of them with xenD.
If one node failed, too little total memory would be available to host them all. xenA would be ensured to be allocated, as would xenD; however, only one of xenB or xenC could still be placed, and since their priority is equal, the result is not defined yet.
To resolve this ambiguity as well, you would need to set a higher priority for either one.
7.3.8. Configuring Resource Monitoring
To monitor a resource, there are two possibilities: either define a monitor operation with the op keyword or use the monitor command. The following example configures an Apache resource and monitors it every 60 seconds with the op keyword:
```
crm(live)configure# primitive apache apache \
    params ... \
    op monitor interval=60s timeout=30s
```
The same can be done with:
```
crm(live)configure# primitive apache apache \
    params ...
crm(live)configure# monitor apache 60s:30s
```
For an overview, refer to Section 4.3,
“Resource Monitoring”.
7.3.9. Configuring a Cluster Resource Group
One of the most common elements of a cluster is a set of resources that needs to be located together, started sequentially, and stopped in the reverse order. To simplify this configuration, we support the concept of groups. The following example creates two primitives (an IP address and an e-mail resource):
Run the crm command as system administrator. The prompt changes to crm(live).
Configure the primitives:
```
crm(live)# configure
crm(live)configure# primitive Public-IP ocf:heartbeat:IPaddr \
    params ip=1.2.3.4
crm(live)configure# primitive Email lsb:exim
```
Group the primitives with their relevant identifiers in the correct order:
```
crm(live)configure# group shortcut Public-IP Email
```
For an overview, refer to Section 4.2.5.1,
“Groups”.
7.3.10. Configuring a Clone Resource
Clones were initially conceived as a convenient way to start N instances of an IP resource and have them distributed throughout the cluster for load balancing. They have turned out to be quite useful for a number of other purposes, including integrating with DLM, the fencing subsystem, and OCFS2. You can clone any resource, provided the resource agent supports it.
Learn more about cloned resources in Section 4.2.5.2,
“Clones”.
7.3.10.1. Creating Anonymous Clone Resources
To create an anonymous clone resource, first create a primitive resource and then refer to it with the clone command. Do the following:

Log in as root and start the crm tool:

```
# crm configure
```
Configure the primitive, for example:
```
crm(live)configure# primitive Apache lsb:apache
```
Clone the primitive:
crm(live)configure# clone apache-clone Apache
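Clone behavior can be tuned with meta attributes. A hedged sketch, reusing the Apache primitive from above; clone-max and clone-node-max are standard Pacemaker clone meta attributes:

```
crm(live)configure# clone apache-clone Apache \
  meta clone-max=2 clone-node-max=1
```

This limits the clone to two instances cluster-wide, with at most one instance per node.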
7.3.10.2. Creating Stateful/Multi-State Clone Resources¶
To create a stateful clone resource, first create a primitive resource and then the master/slave resource. The master/slave resource must support at least promote and demote operations.
Log in as root and start the crm tool:
# crm configure
Configure the primitive. Change the intervals if needed:
crm(live)configure# <span class="command"><strong>primitive</strong></span> myRSC ocf:myCorp:myAppl \
  op monitor interval=60 \
  op monitor interval=61 role=Master
Create the master/slave resource:
crm(live)configure# <span class="command"><strong>ms</strong></span> myRSC-clone myRSC
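The master/slave resource can likewise be tuned with meta attributes. A sketch under the assumption of a single-master setup (master-max, master-node-max, and notify are standard Pacemaker multi-state meta attributes):

```
crm(live)configure# ms myRSC-clone myRSC \
  meta master-max=1 master-node-max=1 clone-max=2 notify=true
```

Here exactly one of the two instances is promoted to master, and the instances are notified before and after state changes.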
7.4. Managing Cluster Resources¶
Apart from the possibility to configure your cluster resources, the crm tool also allows you to manage existing resources. The following subsections give you an overview.
7.4.1. Starting a New Cluster Resource¶
To start a new cluster resource you need the respective identifier. Proceed as follows:
Log in as root and start the crm tool:
# crm
Switch to the resource level:
crm(live)# <span class="command"><strong>resource</strong></span>
Start the resource with the start command. Press the →| key to show all known resources:
crm(live)resource# <span class="command"><strong>start</strong></span> <span class="replaceable" style="font-style: italic;">ID</span>
7.4.2. Cleaning Up Resources¶
A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource as soon
as the number of failures has reached the migration threshold.
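The threshold itself is set as a resource meta attribute. A minimal sketch in crm configure syntax (the resource name and values are placeholders; migration-threshold and failure-timeout are standard Pacemaker meta attributes):

```
crm(live)configure# primitive myRSC ocf:myCorp:myAppl \
  meta migration-threshold=3 failure-timeout=120s
```

With these settings the node is banned from running the resource after three failures, and the failcount expires automatically after 120 seconds.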
Open a shell and log in as user
root.
Get a list of all your resources:
# crm resource list
...
 Resource Group: dlm-clvm:1
     dlm:1	(ocf::pacemaker:controld) Started
     clvm:1	(ocf::lvm2:clvmd) Started
     cmirrord:1	(ocf::lvm2:cmirrord) Started
Clean up the resource (this resets its failcount):
crm resource cleanup dlm-clvm
For example, to clean up only the DLM resource from the dlm-clvm resource group, use dlm as the identifier instead of dlm-clvm.
7.4.3. Removing a Cluster Resource¶
Proceed as follows to remove a cluster resource:
Log in as root and start the crm tool:
# crm
Run the following command to get a list of your resources:
crm(live)# <span class="command"><strong>resource</strong></span> status
For example, the output can look like this (where myIP is the relevant identifier of your resource):
myIP (ocf::IPaddr:heartbeat) ...
Delete the resource with the relevant identifier:
crm(live)# <span class="command"><strong>configure</strong></span> delete <span class="replaceable" style="font-style: italic;">YOUR_ID</span>
Commit the changes:
crm(live)# <span class="command"><strong>configure</strong></span> commit
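Before deleting a resource it is usually safest to stop it first, so that the cluster does not try to act on a vanishing definition. A sketch reusing the myIP example above:

```
crm(live)# resource stop myIP
crm(live)# configure delete myIP
crm(live)# configure commit
```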
7.4.4. Migrating a Cluster Resource¶
Although resources are configured to automatically fail over (or migrate) to other nodes of the cluster in the event of a hardware or software failure, you can also manually move a resource to another node in the cluster using either the Pacemaker GUI or the command line.
Use the migrate command for this task. For example, to migrate the resource
ipaddress1to a cluster node named
node2, use these commands:
# <span class="command"><strong>crm</strong></span> resource
crm(live)resource# <span class="command"><strong>migrate</strong></span> ipaddress1 node2
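Note that migrate works by inserting a location constraint that pins the resource to the target node. To remove that constraint again and let the cluster place the resource freely, use unmigrate:

```
crm(live)resource# unmigrate ipaddress1
```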
7.5. Setting Passwords Independent of cib.xml
¶
If your cluster configuration contains sensitive information, such as passwords, it should be stored in local files. That way, these parameters will never be logged or leaked in support reports.
Before using secret, run the show command first to get an overview of all your resources:
# crm configure show
primitive mydb ocf:heartbeat:mysql \
   params replication_user=admin ...
If you want to set a password for the above mydb resource, use the following command:
# crm resource secret mydb set passwd linux
INFO: syncing /var/lib/heartbeat/lrm/secrets/mydb/passwd to [your node list]
You can get the saved password back with:
# crm resource secret mydb show passwd
Note that the parameters need to be synchronized between nodes; the crm resource secret command will take care of that. We highly recommend using only this command to manage secret parameters.
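The secret subcommand can also verify that the locally stored value is present and consistent. A hedged sketch, assuming the mydb example above and a crmsh version that provides the check subcommand:

```
# crm resource secret mydb check passwd
```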
7.6. Retrieving History Information¶
Investigating the cluster history is a complex task. To simplify this task, the crm shell contains the history command with its subcommands. It is assumed SSH is configured correctly.
Each time the cluster changes state, migrates resources, or starts important processes, a record is kept. All these actions can be retrieved by subcommands of history. Alternatively, use Hawk as explained in Procedure 6.23,
“Viewing Transitions with the History Explorer”.
By default, all history commands look at the events of the last hour. To change this time frame, use the limit subcommand. The syntax is:
# crm history
crm(live)history# <span class="command"><strong>limit</strong></span> <span class="replaceable" style="font-style: italic;">FROM_TIME</span> [<span class="replaceable" style="font-style: italic;">TO_TIME</span>]
Some valid examples include:
limit 4:00pm, limit 16:00
    Both commands mean the same: today at 4pm.
limit 2012/01/12 6pm
    January 12th, 2012 at 6pm.
limit "Sun 5 20:46"
    Sunday the 5th of the current month, at 8:46pm.
Find more examples and how to create time frames at http://labix.org/python-dateutil.
The info subcommand shows all the parameters which are covered by hb_report:
crm(live)history# <span class="command"><strong>info</strong></span>
Source: live
Period: 2012-01-12 14:10:56 - end
Nodes: earth
Groups:
Resources:
To limit hb_report to certain parameters, view the available options with the subcommand help.
To narrow down the level of detail, use the subcommand detail with a level:
crm(live)history# <span class="command"><strong>detail</strong></span> 2
The higher the number, the more detailed your report will be. The default is 0 (zero).
After you have set above parameters, use log to show the log messages.
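Putting these pieces together, a typical history session first narrows the time frame, then sets a detail level, then prints the log. A sketch with example timestamps:

```
# crm history
crm(live)history# limit "2012/01/12 2:00pm" "2012/01/12 3:00pm"
crm(live)history# detail 1
crm(live)history# log
```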
To display the last transition, use the following command:
crm(live)history# <span class="command"><strong>transition</strong></span> -1
INFO: fetching new logs, please wait ...
This command fetches the logs and runs dotty (from the graphviz package) to show the transition graph. The shell opens the log file, which you can browse with the ↓ and ↑ cursor keys.
If you do not want to open the transition graph, use the nograph option:
crm(live)history# <span class="command"><strong>transition</strong></span> -1 nograph
7.7. For More Information¶
The crm man page.
See Highly Available NFS Storage with DRBD and Pacemaker (↑Highly Available NFS Storage with DRBD and Pacemaker) for an exhaustive example.
Source: http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/book.sleha.html