
Virtualization Support for RHEL High Availability and Resilient Storage Clusters




Updated December 1, 2015


Overview

Red Hat Enterprise Linux (RHEL) High Availability and Resilient Storage clusters may be deployed within virtual machines running on various virtualization platforms, and may also manage and maintain high availability
of certain types of virtual machines running as resources. This document describes the virtualization platforms that are supported by Red Hat for use in these contexts.


Table of Contents

Environment
High Availability and Virtualization Use Cases
Virtualization Support
1. Support Matrix for VMs as Highly Available Resources
2. Support Matrix for Guest Clusters
    Xen
    KVM with libvirt
    Red Hat Enterprise Virtualization (RHEV)
    VMWare vSphere / ESXi
    IBM z Systems
    Hyper-V
    Amazon EC2 or Other Hosted Cloud Providers
3. Considerations
    Physical Host Mixing
    Shared VM Cluster Storage
    fence_scsi and iSCSI
    vSphere 5.0 API Issues
    fence_vmware_soap vs fence_vmware
    Mixing Virtualization Technologies
    fence_xvm with Multiple Virtualization Hosts
    Redundancy in Virtualization Hosts
    Cluster Node Live Migration


Environment

Red Hat Enterprise Linux (RHEL) 5, 6, or 7 with the High Availability or Resilient Storage Add-On
Red Hat Cluster Suite (RHCS) 4
A cluster deployment in which the targeted use case is to either:
    Manage one or more virtual machines as highly available resources, or
    Operate one or more virtual machines as nodes in the cluster.


High Availability and Virtualization Use Cases

There are two use cases for deployment of virtualization in conjunction with the RHEL High Availability or Resilient Storage Add-On: VMs as highly available resources and guest
clusters.
For detailed information, including general recommendations and best practices, see the official product documentation regarding Virtualization
and High Availability.


Virtualization Support

NOTE: In the sections below, any product, version, or configuration that is not listed should be considered unsupported by Red Hat. Please contact Red Hat Global Support Services
with any questions or for assistance in evaluating the support status of a specific configuration.


1. Support Matrix for VMs as Highly Available Resources

The following VM configurations are supported as Highly Available resources managed by a RHEL High Availability cluster. Other virtual machine types or configurations may function, but Red Hat cannot guarantee they
will be free from issues and may be unable to assist should issues arise.
Hypervisor Type                                               | Guest OS Type
--------------------------------------------------------------|--------------------------------------------
RHEL 5 Update 3 or later Xen Hosts                            | Any guest OS fully supported by Xen
RHEL 5 Update 5 or later, RHEL 6, or RHEL 7 libvirt/KVM Hosts | Any guest OS fully supported by libvirt/KVM
NOTES:
In these configurations, the VM resources would be stored on shared block devices such as clustered LVM volumes, or on disk images stored on shared file systems such as NFS, GFS, or GFS2.
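As an illustration, on a RHEL 7 cluster using pacemaker, a VM whose disk image lives on such shared storage could be defined as a highly available resource roughly as follows; this is a minimal sketch, and the resource name, domain name, and config path are hypothetical:

    # Assumes a libvirt/KVM guest defined in /etc/libvirt/qemu/myguest.xml,
    # with that definition and the disk image visible to every cluster node.
    pcs resource create myguest VirtualDomain \
        hypervisor="qemu:///system" \
        config="/etc/libvirt/qemu/myguest.xml" \
        meta allow-migrate=true

On RHEL 5 or 6 with rgmanager, the equivalent would be a <vm> resource entry in cluster.conf.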


2. Support Matrix for Guest Clusters


Xen

The following Xen virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
Hypervisor OS            | VM OS                    | Supported Fencing Mechanisms
-------------------------|--------------------------|-----------------------------
RHEL 5 Update 3 or later | RHEL 5 Update 3 or later | fence_xvm via multicast
NOTES:
Physical Host Mixing is not supported in clusters with member nodes that
run on Xen virtualization platforms.
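Since fence_xvm via multicast is the supported fencing mechanism here, it is worth exercising it manually from a guest node before relying on it; in this sketch "guest1" is a hypothetical Xen domain name of a peer cluster node, and the shared key in /etc/cluster/fence_xvm.key must already be distributed to the host and guests:

    # Query the peer VM's power state over multicast, then fence it:
    fence_xvm -o status -H guest1
    fence_xvm -o off -H guest1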


KVM with libvirt

The following KVM/libvirt virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
Hypervisor OS   | VM OS           | Supported Fencing Mechanisms
----------------|-----------------|-------------------------------------------
RHEL 5, 6, or 7 | RHEL 5, 6, or 7 | fence_xvm via multicast
                |                 | fence_virt via serial
                |                 | fence_scsi with a compatible iSCSI target
NOTES:
Physical Host Mixing is supported in clusters with member nodes that run
on the supported KVM/libvirt virtualization platforms listed above.
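For reference, the guest-side fence_xvm and fence_virt agents rely on a fence_virtd daemon running on the hypervisor. A minimal sketch of a host-side /etc/fence_virt.conf, assuming the libvirt backend and a multicast listener (the bridge name br0 is hypothetical):

    fence_virtd {
        listener = "multicast";     # matches fence_xvm's multicast transport
        backend = "libvirt";        # resolves domain names via libvirt
    }
    listeners {
        multicast {
            interface = "br0";
            key_file = "/etc/cluster/fence_xvm.key";
        }
    }
    backends {
        libvirt {
            uri = "qemu:///system";
        }
    }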


Red Hat Enterprise Virtualization (RHEV)

The following RHEV virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
RHEV-M Version | VM OS                    | Supported Fencing Mechanisms
---------------|--------------------------|-------------------------------------------
3.1 - 3.5      | RHEL 5 Update 9 or later | fence_rhevm
               | RHEL 6 Update 4 or later | fence_scsi with a compatible iSCSI target
               | RHEL 7                   |
3.0            | RHEL 5 Update 7 or later | fence_rhevm
               | RHEL 6 Update 2 or later | fence_scsi with a compatible iSCSI target
2.2            | RHEL 5 Update 6 or later | fence_scsi with a compatible iSCSI target
               | RHEL 6 Update 1 or later |
NOTES:
Physical Host Mixing is not supported in clusters with member nodes that
run on RHEV virtualization platforms.
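As an illustration of the fence_rhevm agent, a manual status query against RHEV-M might look roughly like the following; the hostname, credentials, and VM name are hypothetical, and --ssl-insecure merely skips certificate validation for the test:

    fence_rhevm -a rhevm.example.com -l admin@internal -p secret \
        -z --ssl-insecure -n guest1 -o status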


VMWare vSphere / ESXi

The following VMWare virtual machine configurations are supported by Red Hat for use as High Availability cluster nodes:
VMWare Product: vSphere 6.0; vSphere Hypervisor (ESXi) 6.0; vCenter Server Appliance 6.0
VM OS: RHEL 6 Update 7 or later; RHEL 7 Update 2 or later
Shared VM Cluster Storage: Raw Device Mapping (Virtual Compatibility Mode); Raw Device Mapping (Physical Compatibility Mode); iSCSI LUN(s); Virtual Disk File (VMDK) with the Multi-Writer option enabled
Supported Fencing Mechanisms: fence_vmware_soap; fence_scsi with a compatible iSCSI target

VMWare Product: vSphere 5.5; vSphere Hypervisor (ESXi) 5.5; vCenter Server Appliance 5.5
VM OS: RHEL 5 Update 11 or later; RHEL 6 Update 6 or later; RHEL 7
Shared VM Cluster Storage: Raw Device Mapping (Virtual Compatibility Mode); Raw Device Mapping (Physical Compatibility Mode); iSCSI LUN(s); Virtual Disk File (VMDK) with the Multi-Writer option enabled (with RHEL 5 Update 9 or later, RHEL 6 Update 4 or later, or RHEL 7)
Supported Fencing Mechanisms: fence_vmware_soap; fence_scsi with a compatible iSCSI target

VMWare Product: vSphere 4.1; vSphere 5.0; vSphere Hypervisor (ESXi) 5.0; vSphere 5.1; vSphere Hypervisor (ESXi) 5.1
VM OS: RHEL 5 Update 7 or later; RHEL 6 Update 2 or later
Shared VM Cluster Storage: Raw Device Mapping (Virtual Compatibility Mode); Raw Device Mapping (Physical Compatibility Mode); iSCSI LUN(s); Virtual Disk File (VMDK) with the Multi-Writer option enabled (with RHEL 5 Update 9 or later or RHEL 6 Update 4 or later)
Supported Fencing Mechanisms: fence_vmware_soap; fence_scsi with a compatible iSCSI target
NOTES:
Physical Host Mixing is supported in clusters with member nodes that run
on the supported VMWare platforms listed above.
Performing an upgrade of the VMware software (vSphere/vCenter/ESXi) while a managed cluster is running within guests is not supported by Red Hat. Problems arising in such scenarios must be reproduced outside of the upgrade context in order to receive
support from Red Hat.
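To make the fencing entries above concrete, a pacemaker stonith device on RHEL 7 using fence_vmware_soap might be configured roughly as follows; this is a sketch, and the hostname, credentials, and node-to-VM mapping are hypothetical:

    pcs stonith create vmfence fence_vmware_soap \
        ipaddr=vcenter.example.com login=clusteruser passwd=secret \
        ssl=1 ssl_insecure=1 \
        pcmk_host_map="node1:node1-vm;node2:node2-vm"

The pcmk_host_map attribute tells pacemaker which vCenter VM name backs each cluster node, so a fence request against a node is translated into a power operation on the right VM.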


IBM z Systems

Red Hat supports the RHEL High Availability and Resilient Storage Add-Ons in IBM z Systems virtual machines running RHEL 7 Update 2 or later, subject to certain conditions.

Hyper-V

Red Hat does not support the RHEL High Availability or Resilient Storage Add-Ons on the Hyper-V platform.


Amazon EC2 or Other Hosted Cloud Providers

Red Hat does not support the RHEL High Availability or Resilient Storage Add-Ons on these hosted cloud platforms.


3. Considerations


Physical Host Mixing

Physical Host Mixing is a High Availability cluster configuration in which some of the nodes in the cluster membership run in virtual machines while others run on bare metal hardware. In general,
it is advisable to keep the cluster configuration as homogeneous as possible to prevent issues when migrating services or dealing with failure conditions. However, there are limited cases in which physical host mixing is acceptable. The following is
an example of such a use case:
A bare metal cluster with an even number of nodes (2, 4, 6) plus a single cluster node running as a virtual machine. This virtual node is part of the membership and provides a tie-breaker function: if half of the bare metal nodes fail, the extra node
ensures the surviving half retains quorum. This configuration generally implies that the tie-breaker node runs no services, since its function is limited to quorum support. The benefit of running the tie-breaker node in a virtual machine is that
several such nodes can coexist on the same physical host. For example, four clusters, each with four physical nodes (16 physical hosts total), could each run a tie-breaker virtual machine on a 17th physical host. This approach can also
replace the need for a quorum disk in 2- or 4-node clusters.
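To make the quorum arithmetic concrete: a four-node bare metal cluster plus one tie-breaker VM carries five votes in total, so quorum requires floor(5/2) + 1 = 3 votes, and the cluster remains quorate even if two of the four physical nodes fail, since the two survivors plus the tie-breaker still hold three votes.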
Note: Physical host mixing within a single virtualization environment is allowed for certain hypervisor technologies, as noted in the sections above. However, mixing different hypervisor types
is not allowed (see Mixing Virtualization Technologies below).

Shared VM Cluster Storage

This category of storage, referenced in the sections above, refers to the types of block devices that can be exposed to the members of the VM cluster and used for shared highly available applications. These shared
block devices can be used for clustered LVM with clvmd, HA-LVM, and/or GFS/GFS2, where supported by the guest operating system.
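A minimal sketch of preparing such a shared block device for GFS2 on a clustered volume group, assuming clvmd is running on all nodes; the device path, volume names, and the cluster name "mycluster" are hypothetical:

    pvcreate /dev/sdb
    vgcreate -cy clustervg /dev/sdb           # -cy marks the volume group as clustered
    lvcreate -L 50G -n sharedlv clustervg
    # -t must be <cluster name>:<fs name>; -j sets one journal per node (2 here)
    mkfs.gfs2 -p lock_dlm -t mycluster:sharedfs -j 2 /dev/clustervg/sharedlv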

fence_scsi and iSCSI

Use of fence_scsi with iSCSI storage is limited to iSCSI servers that support SCSI-3 Persistent Reservations with the preempt-and-abort command.
Not all iSCSI servers support this functionality. Check with the storage vendor to ensure that your server is compliant with SCSI-3 Persistent Reservation support. Note that the scsi-target-utils (tgtd)
iSCSI server shipped with RHEL 6 does not support SCSI-3 Persistent Reservations, so it is not suitable for use with fence_scsi.
The LIO/targetd iSCSI server shipped in RHEL 7 does support SCSI-3 Persistent Reservations and can be used with fence_scsi.
fence_scsi is not compatible with virtual SCSI devices presented to virtual machines on any of the platforms listed above. The Xen and KVM virtualization
platforms emulate hardware SCSI controllers in a way that does not support SCSI SPC-3 Persistent Reservations.
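One way to inspect a target's persistent reservation support from a cluster node is sg_persist from the sg3_utils package; /dev/sdb below is a hypothetical device path, and the storage vendor should still be consulted to confirm preempt-and-abort behavior:

    # Report the device's SCSI-3 persistent reservation capabilities:
    sg_persist --in --report-capabilities --device=/dev/sdb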

vSphere 5.0 API Issues

Due to an incomplete WSDL schema provided in the initial release of vSphere 5.0, the fence_vmware_soap utility does not work on a default installation. To use SOAP fencing, apply
the VMware workaround described here. VMware intends to correct this in an update to 5.0; when that update is available in a service pack, this knowledge base article will be updated to reflect it.

fence_vmware_soap vs fence_vmware

In RHEL 5 Update 7 and later, RHEL 6 Update 2 and later, and RHEL 7, the proper fence agent for managing VM power states via the VMWare vCenter/ESXi API is fence_vmware_soap.
Prior to these releases, an older agent called fence_vmware was available, but upon the release of fence_vmware_soap, support for fence_vmware was
discontinued. If one of these older RHEL releases must be used where fence_vmware_soap is
not available, Red Hat will provide limited support for configuring and managing fence_vmware.
However, because support has transitioned to fence_vmware_soap in later releases,
no bug fixes or enhancements will be made for fence_vmware. It is strongly recommended
that a release with fence_vmware_soap be used whenever possible.
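Before configuring fence_vmware_soap in the cluster, it can be verified manually against vCenter/ESXi; the hostname and credentials below are hypothetical:

    # List the VMs (plugs) visible to the fencing account:
    fence_vmware_soap -a vcenter.example.com -l clusteruser -p secret \
        -z --ssl-insecure -o list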

Mixing Virtualization Technologies

Mixing different hypervisor technologies among cluster nodes is not supported. All nodes in a given cluster must reside on the same type of hypervisor. For example, a single cluster with nodes on both VMware vSphere
and RHEL KVM with libvirt would not be supported.

fence_xvm with Multiple Virtualization Hosts

Tracking of virtual machines (using the libvirt-qmf plugin or the checkpoint plugin) is currently not tested or supported; hence, virtual machines
acting as cluster members must be statically assigned to given hosts in order to be supported. These plugins may work, but Red Hat cannot guarantee they will operate without issues, and Red Hat may require that any problems that arise while they are in use be reproduced
without the use of these components before providing support.
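Static assignment means each cluster node's fence device names its own fixed domain. A minimal cluster.conf sketch of such a mapping, with node and domain names hypothetical:

    <clusternode name="node1" nodeid="1">
        <fence>
            <method name="1">
                <!-- "domain" is pinned to the VM backing this node -->
                <device name="xvmfence" domain="node1-vm"/>
            </method>
        </fence>
    </clusternode>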

Redundancy in Virtualization Hosts

Placing all virtual machines of a single cluster on a single host is useful for deploying proof-of-concept clusters, but it is not supported for production use, as a failure of that host would take down the entire cluster.

Cluster Node Live Migration

Live migration of VMs that are members of a RHEL cluster is not supported.