Configuring Software RAID 0 on Linux (Part A)
2011-11-28 18:53
First add a few virtual disks under VMware, then boot the system:

fdisk -l    # list all disks

Partition each newly added disk:

fdisk /dev/sdb
1. n  - create a new partition
2. p  - make it a primary partition
3. t  - change the partition's system type
4. fd - choose "Linux raid autodetect"
5. w  - write the table and exit

Repeat the same steps for /dev/sdc.

The RAID partitions are now ready; next, create the array:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

Create mdadm's config file from the shipped example, then record the devices and the array in it:

cp /usr/share/doc/mdadm-2.5.4/mdadm.conf-example /etc/mdadm.conf
echo DEVICE /dev/sd[bc]1 >> /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf

With that done, manage the array with LVM. Create a PV:

pvcreate /dev/md0
pvdisplay    # verify

Create a VG (here /dev/md0 is the RAID 0 array built from the two disks; if there were further md block devices, they could be listed after it, e.g. vgcreate VgOnRaid /dev/md0 /dev/md1):

vgcreate VgOnRaid /dev/md0
vgdisplay    # verify

Create an LV:

lvcreate -l xxxx VgOnRaid -n LvOnRaid
lvdisplay    # verify

Finally make a filesystem on the LV, mount it (mount -t ext3 /dev/VgOnRaid/LvOnRaid at a mount point), and write the entry into /etc/fstab.
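The fstab line for the logical volume could look like this (the /mnt/raid0 mount point is an assumption; the article does not name one):

```
/dev/VgOnRaid/LvOnRaid /mnt/raid0 ext3 defaults 0 0
```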
-----------------------------------------------------------------
[root@TSM54-Test ~]# fdisk -l

Disk /dev/sda: 15.0 GB, 15032385536 bytes
255 heads, 63 sectors/track, 1827 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2              14     1827    14570955   8e  Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System

Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System

Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System

Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System

I. Creating the soft RAID
1. Create the RAID partitions
[root@TSM54-Test ~]# fdisk /dev/sdb
...
Hex code (type L to list codes): fd

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sdb1               1     1044    8385898+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@TSM54-Test ~]# fdisk -l

Disk /dev/sda: 15.0 GB, 15032385536 bytes
255 heads, 63 sectors/track, 1827 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sda1   *           1       13      104391   83  Linux
/dev/sda2              14     1827    14570955   8e  Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sdb1               1     1044    8385898+  fd  Linux raid autodetect

Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sdc1               1     1044    8385898+  fd  Linux raid autodetect

Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
/dev/sdd1               1     1044    8385898+  fd  Linux raid autodetect

Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System
2. Create the RAID device

mdadm supports the LINEAR, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6 and MULTIPATH array modes.
Command format:

mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
[root@TSM54-Test ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm: array /dev/md0 started.
3. Configuration file
[root@TSM54-Test ~]# cp /usr/share/doc/mdadm-2.5.4/mdadm.conf-example /etc/mdadm.conf
[root@TSM54-Test ~]# echo DEVICE /dev/sd[bcd]1 >> /etc/mdadm.conf
[root@TSM54-Test ~]# mdadm -Ds >> /etc/mdadm.conf
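After these three commands, /etc/mdadm.conf contains a DEVICE line plus whatever mdadm -Ds printed, roughly like this (the ARRAY line below is taken from the mdadm -Ds output shown later in this article):

```
DEVICE /dev/sd[bcd]1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=28a22990:eac5c231:3fe907f1:1145e264
```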
From here on, /dev/md0 can be used like any single disk device:
[root@TSM54-Test ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2097152 inodes, 4194272 blocks
209713 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
128 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Edit /etc/fstab and add a line:

/dev/md0 /mnt/software ext3 defaults 0 0
ASSEMBLE MODE: mdadm --assemble md-device options-and-component-devices
               mdadm --assemble --scan md-devices-and-options
               mdadm --assemble --scan options
BUILD MODE: mdadm --build device --chunk=X --level=Y --raid-devices=Z devices
CREATE MODE: mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
MANAGE MODE: mdadm device options devices
MISC MODE: mdadm options ... devices ...
MONITOR MODE: mdadm --monitor options... devices...
GROW MODE: mdadm --grow device options
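Grow mode gets no example above; a hypothetical reshape of the article's three-disk RAID5 onto a fourth device might look like this (the device name /dev/sde1 is an assumption, and reshaping RAID5 needs a newer mdadm and kernel than the mdadm-2.5.4 used elsewhere in this article):

```shell
mdadm /dev/md0 --add /dev/sde1          # add the new member as a spare first
mdadm --grow /dev/md0 --raid-devices=4  # then reshape from 3 to 4 raid-devices
cat /proc/mdstat                        # reshape progress appears here
```

These commands need root and real member devices, so they are shown as an untested sketch.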
1. View the array

MISC mode:
#mdadm --detail /dev/md0
#mdadm -D /dev/md0
2. Stop the array

MISC mode:
#mdadm -S /dev/md0
3. Start the array

ASSEMBLE mode:
#mdadm -A /dev/md0 /dev/sd[bcd]1

This starts the specified RAID, i.e. re-assembles the array into the system. If /etc/mdadm.conf was set up earlier, you can simply use:

#mdadm -As /dev/md0
4. Add and remove disks

In Manage mode, mdadm can add disks to and remove disks from a running array. This is commonly used to mark a failed disk, add a spare (standby) disk, or replace a disk.
[root@TSM54-Test ~]# mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
mdadm: hot removed /dev/sdd
[root@TSM54-Test ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Aug  1 21:35:31 2008
     Raid Level : raid5
     Array Size : 16777088 (16.00 GiB 17.18 GB)
    Device Size : 8388544 (8.00 GiB 8.59 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Aug  1 23:34:12 2008
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 28a22990:eac5c231:3fe907f1:1145e264
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       0        0        2      removed
[root@TSM54-Test ~]# mdadm /dev/md0 --add /dev/sdd
mdadm: re-added /dev/sdd
[root@TSM54-Test ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Aug  1 21:35:31 2008
     Raid Level : raid5
     Array Size : 16777088 (16.00 GiB 17.18 GB)
    Device Size : 8388544 (8.00 GiB 8.59 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Aug  1 23:34:12 2008
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 28a22990:eac5c231:3fe907f1:1145e264
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      spare rebuilding   /dev/sdd
Note that for some RAID levels, such as RAID0, --fail, --remove and --add cannot be used.
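The rebuild shown above can also be watched in /proc/mdstat, where the [UU_]-style field marks which members are up. The snippet below extracts that field from a sample of mdstat output (the sample text is illustrative, not from a live system; on a real machine simply run cat /proc/mdstat):

```shell
# Parse the member-status field (e.g. [UU_]) out of mdstat-style text.
# A degraded 3-disk array shows one missing member as an underscore.
sample='md0 : active raid5 sdd[2] sdc[1] sdb[0]
      16777088 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]'
status=$(printf '%s\n' "$sample" | grep -o '\[U*_*\]' | tail -n 1)
echo "$status"
```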
5. Monitoring

MONITOR mode:

# nohup mdadm --monitor --mail root --delay 200 /dev/md0 &
6. Adding spare disks

A spare (standby) disk can be specified when the array is created; here -x1 marks one of the four listed devices as a spare:

#mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 -x1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
7. Deleting the RAID

#mdadm -S /dev/md0

(or #rm /dev/md0). Then delete the /etc/mdadm.conf file, remove the relevant lines from /etc/fstab and, finally, repartition the disks with fdisk.
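When an array is retired for good, it can also help to wipe the md superblocks from the member partitions so the kernel does not auto-assemble them again on the next boot (the --zero-superblock step is an addition, not in the original article):

```shell
mdadm -S /dev/md0                      # stop the array first
mdadm --zero-superblock /dev/sd[bcd]1  # erase the RAID metadata from each member
```

These commands need root and the real member devices, so they are shown as an untested sketch.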
To prepare for the LVM example below, /dev/sde is split into two partitions, which will form a second array, /dev/md1:

[root@TSM54-Test ~]# fdisk /dev/sde

The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start      End      Blocks   Id  System

...

Command (m for help): p

   Device Boot      Start      End      Blocks   Id  System
/dev/sde1               1      609     4891761   fd  Linux raid autodetect
/dev/sde2             610     1044     3494137+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@TSM54-Test dev]# cd /dev/
[root@TSM54-Test dev]# ls -l md0
brw-r----- 1 root disk 9, 0 Aug 1 21:58 md0
[root@TSM54-Test dev]# mknod md1 b 9 1
[root@TSM54-Test dev]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sde2
mdadm: largest drive (/dev/sde1) exceed size (3494016K) by more than 1%
Continue creating array?
[root@TSM54-Test dev]# mdadm -Ds
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=28a22990:eac5c231:3fe907f1:1145e264
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=57f24dd1:aed3606c:d467132e:a6b3a010
Now put LVM on top of the arrays. First make sure /dev/md0 is unmounted:

#umount /mnt/software
(1) Create the PVs
[root@TSM54-Test ~]# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
[root@TSM54-Test ~]# pvcreate /dev/md1
Physical volume "/dev/md1" successfully created
[root@TSM54-Test ~]# pvdisplay
--- Physical volume ---
PV Name               /dev/sda2
VG Name               VolGroup00
PV Size               13.90 GB / not usable 21.45 MB
Allocatable           yes (but full)
PE Size (KByte)       32768
Total PE              444
Free PE               0
Allocated PE          444
PV UUID               BntsgG-UJLv-agT2-lZ7C-dXY2-51FB-Jxd5tA

--- NEW Physical volume ---
PV Name               /dev/md0
VG Name
PV Size               16.00 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               GriJk2-wyfl-o0CI-NY7t-g75X-zIx3-FJHf1u

--- NEW Physical volume ---
PV Name               /dev/md1
VG Name
PV Size               3.33 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               SImCO1-RmvK-OgfZ-dCFZ-LJNC-8wun-Bd9qzS
(2) Create the VG

[root@TSM54-Test ~]# vgcreate LVMonRaid /dev/md0 /dev/md1
Volume group "LVMonRaid" successfully created
[root@TSM54-Test ~]# vgscan
Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
Found volume group "LVMonRaid" using metadata type lvm2
(3) Create the LVs
[root@TSM54-Test ~]# lvcreate --size 5000M --name LogicLV1 LVMonRaid
Logical volume "LogicLV1" created
[root@TSM54-Test ~]# lvcreate --size 5000M --name LogicLV2 LVMonRaid
Logical volume "LogicLV2" created
[root@TSM54-Test ~]# lvscan
ACTIVE   '/dev/VolGroup00/LogVol00' [12.88 GB] inherit
ACTIVE   '/dev/VolGroup00/LogVol01' [1.00 GB] inherit
ACTIVE   '/dev/LVMonRaid/LogicLV1' [4.88 GB] inherit
ACTIVE   '/dev/LVMonRaid/LogicLV2' [4.88 GB] inherit

Note: the two VolGroup00 entries above were created by default when the system was installed.
(4) Create filesystems and mount them

[root@TSM54-Test ~]# mkfs.ext3 /dev/LVMonRaid/LogicLV1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
640000 inodes, 1280000 blocks
64000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1312817152
40 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@TSM54-Test ~]# mkfs.ext3 /dev/LVMonRaid/LogicLV2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
640000 inodes, 1280000 blocks
64000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1312817152
40 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@TSM54-Test ~]# mkdir /mnt/doc
[root@TSM54-Test ~]# mkdir /mnt/music
[root@TSM54-Test ~]# mount -t ext3 /dev/LVMonRaid/LogicLV1 /mnt/doc
[root@TSM54-Test ~]# mount -t ext3 /dev/LVMonRaid/LogicLV2 /mnt/music
To mount them automatically at boot, edit /etc/fstab and add the following two lines:
/dev/LVMonRaid/LogicLV1 /mnt/doc ext3 defaults 0 0
/dev/LVMonRaid/LogicLV2 /mnt/music ext3 defaults 0 0
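A quick sanity check of the two lines above: every fstab entry should have six whitespace-separated fields (device, mount point, fstype, options, dump, pass). The check itself is an addition, not from the article:

```shell
# Verify that each fstab line has exactly six fields; prints "ok" if so.
lines='/dev/LVMonRaid/LogicLV1 /mnt/doc ext3 defaults 0 0
/dev/LVMonRaid/LogicLV2 /mnt/music ext3 defaults 0 0'
bad=$(printf '%s\n' "$lines" | awk 'NF != 6 { print NR }')
echo "${bad:-ok}"
```

After adding the lines, mount -a (as root) mounts anything in /etc/fstab that is not yet mounted, so the entries can be tested without a reboot.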