
16. Filesystems: Implementing Software RAID (Part 3) (RAID5, Assembling Arrays, JBOD)

2014-08-14 23:24
I. Implementing RAID5
The previous articles covered creating software RAID0 and RAID1; this one walks through building a 2G software RAID5. RAID5 needs at least three disks, and one disk's worth of capacity is consumed by parity, so only the equivalent of two disks actually stores data; with three 1G members, the usable space therefore comes out to 2G. The RAID1-with-hot-spare example already used three such partitions, so we can stop that array and reuse them for RAID5:

[root@localhost ~]# umount /mnt
# Unmount the filesystem on /dev/md1

[root@localhost ~]# mdadm -S /dev/md1
# Stop the array /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# mdadm -C /dev/md1 -a yes -l 5 -n 3 -c 256 /dev/sdb7 /dev/sd{c,d}2
# Create a RAID5 array with a chunk size of 256K
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Aug 12 21:01:59 2014
mdadm: /dev/sdc2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Aug 12 21:01:59 2014
mdadm: /dev/sdd2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Aug 12 21:01:59 2014
Continue creating array? Y
# The partitions still carry old RAID metadata, so mdadm asks before overwriting it
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdd2[3] sdc2[1] sdb7[0]
2118144 blocks super 1.2 level 5, 256k chunk, algorithm 2 [3/2] [UU_]
[===>.................]  recovery = 15.4% (164224/1059072) finish=1.5min speed=9660K/sec
# The RAID5 array /dev/md1 was created successfully and is rebuilding
md0 : active raid0 sdd1[1] sdc1[0]
10506240 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Aug 12 22:39:03 2014
Raid Level : raid5
Array Size : 2118144 (2.02 GiB 2.17 GB)
Used Dev Size : 1059072 (1034.42 MiB 1084.49 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Tue Aug 12 22:40:37 2014
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
# left-symmetric parity layout
Chunk Size : 256K

Rebuild Status : 93% complete

Name : localhost.localdomain:1  (local to host localhost.localdomain)
UUID : 832c784c:06027e93:3d9f1f3f:553714a2
Events : 15

Number  Major   Minor   RaidDevice State
0      8       23        0     active sync   /dev/sdb7
1      8       34        1     active sync   /dev/sdc2
3      8       50        2     spare rebuilding   /dev/sdd2
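The Array Size above follows RAID5's capacity rule: one member's worth of space holds parity, so the usable size is (n - 1) times the per-member size. A quick sanity check in shell, using the 1K-block sizes reported by mdadm -D:

```shell
n=3                 # number of member disks in the array
dev_size=1059072    # "Used Dev Size" of each member, in 1K blocks
# RAID5 spends one member's worth of capacity on parity:
echo $(( (n - 1) * dev_size ))    # prints 2118144, the "Array Size" reported above
```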
[root@localhost ~]# mke2fs -t ext4 /dev/md1
# Create an ext4 filesystem on /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=64 blocks, Stripe width=128 blocks
132464 inodes, 529536 blocks
26476 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
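The Stride and Stripe width that mke2fs reports come straight from the array geometry: stride is the RAID chunk size expressed in filesystem blocks, and stripe width is stride multiplied by the number of data-bearing members (n - 1 for RAID5). Checking against the values used above:

```shell
chunk_kb=256    # chunk size chosen with -c 256 when creating the array
block_kb=4      # ext4 block size (4096 bytes, from the mke2fs output)
data_disks=2    # a 3-member RAID5 stripes data over n - 1 = 2 members
stride=$(( chunk_kb / block_kb ))
echo "Stride=$stride blocks, Stripe width=$(( stride * data_disks )) blocks"
```

This prints the same "Stride=64 blocks, Stripe width=128 blocks" that mke2fs derived on its own.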
[root@localhost ~]# mount /dev/md1 /mnt
# Mount the new RAID5 array on /mnt
[root@localhost ~]# cp /etc/inittab /mnt

[root@localhost ~]# cat /mnt/inittab
# The file reads back correctly, so the array is working

# inittab is only used by upstart for the default runlevel.
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
# System initialization is started by /etc/init/rcS.conf
# Individual runlevels are started by /etc/init/rc.conf
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
# with configuration in /etc/sysconfig/init.
RAID5 can also keep working after one of its disks fails:
[root@localhost ~]# mdadm /dev/md1 -f /dev/sdc2
mdadm: set /dev/sdc2 faulty in /dev/md1
# Simulate a failure of /dev/sdc2

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Aug 12 22:39:03 2014
Raid Level : raid5
Array Size : 2118144 (2.02 GiB 2.17 GB)
Used Dev Size : 1059072 (1034.42 MiB 1084.49 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Tue Aug 12 22:50:48 2014
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

Name : localhost.localdomain:1  (local to host localhost.localdomain)
UUID : 832c784c:06027e93:3d9f1f3f:553714a2
Events : 21

Number  Major   Minor   RaidDevice State
0      8       23        0     active sync   /dev/sdb7
1      0        0        1     removed
3      8       50        2     active sync   /dev/sdd2

1      8       34        -     faulty   /dev/sdc2
# /dev/sdc2 is now marked as failed

[root@localhost ~]# cat /mnt/inittab
# The array still works with one failed disk
# inittab is only used by upstart for the default runlevel.
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
# System initialization is started by /etc/init/rcS.conf
# Individual runlevels are started by /etc/init/rc.conf
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
# with configuration in /etc/sysconfig/init.
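In a real recovery, the next step would be to remove the failed member and add a replacement, which starts a rebuild onto the new disk. A sketch of those commands; the stub function only prints them, since running them for real requires root and the actual devices:

```shell
# Illustration stub: print the mdadm commands instead of executing them.
# Delete this function to run the commands for real (as root).
mdadm() { echo "mdadm $*"; }

# Remove the failed member from the array:
mdadm /dev/md1 -r /dev/sdc2
# Add a replacement (here the same partition, assumed repaired); a rebuild begins:
mdadm /dev/md1 -a /dev/sdc2
```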
What happens if a second disk fails?
[root@localhost ~]# mdadm /dev/md1 -f /dev/sdd2
mdadm: set /dev/sdd2 faulty in /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Aug 12 22:39:03 2014
Raid Level : raid5
Array Size : 2118144 (2.02 GiB 2.17 GB)
Used Dev Size : 1059072 (1034.42 MiB 1084.49 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Tue Aug 12 22:55:34 2014
State : clean, FAILED
Active Devices : 1
Working Devices : 1
Failed Devices : 2
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

Name : localhost.localdomain:1  (local to host localhost.localdomain)
UUID : 832c784c:06027e93:3d9f1f3f:553714a2
Events : 25

Number   Major  Minor   RaidDevice State
0       8      23        0      active sync   /dev/sdb7
1       0       0        1      removed
2       0       0        2      removed

1       8      34        -      faulty  /dev/sdc2
3      8       50        -     faulty   /dev/sdd2
# Two of the disks have now failed


[root@localhost ~]# cat /mnt/inittab
# inittab is only used by upstart for the default runlevel.
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
# System initialization is started by /etc/init/rcS.conf
# Individual runlevels are started by /etc/init/rc.conf
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
# with configuration in /etc/sysconfig/init.

A RAID5 that has lost two disks can no longer reconstruct data, so still being able to read the file is abnormal, and the cause is no longer the array itself. Unmount /dev/md1 first:
[root@localhost ~]# umount /mnt
# Unmount /dev/md1
[root@localhost ~]# mount /dev/md1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/md1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so
# /dev/md1 can no longer be mounted, which shows that the data read earlier did not come from the device; it was most likely served from the kernel's page cache


RAID5 also supports hot spare disks.
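For example, a spare can be declared at creation time with -x, or attached to a running array later. A sketch (/dev/sde2 is a hypothetical partition; the stub only prints the commands, which otherwise need root):

```shell
# Illustration stub: print the mdadm commands instead of executing them.
# Delete this function to run the commands for real (as root).
mdadm() { echo "mdadm $*"; }

# Create a RAID5 with 3 active members plus 1 hot spare (-x 1):
mdadm -C /dev/md1 -a yes -l 5 -n 3 -x 1 /dev/sd{b,c,d}2 /dev/sde2
# Or add a spare to an existing array; it takes over automatically when a member fails:
mdadm /dev/md1 -a /dev/sde2
```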

II. Assembling a RAID Array

So far we have only created RAID arrays, but how do you assemble an existing one? Suppose host 1 holds a RAID5 array and then dies, and the array's disks have to be attached to host 2. Re-creating the array there would destroy the existing data, so assemble mode is needed instead. On host 1 the array was named /dev/md2 and was built from the partitions /dev/sd{b,c,d}2. After the disks are moved to host 2, they may well be detected as /dev/sd{e,f,g}, and if the name /dev/md2 is already taken there, the array has to come up under a new name such as /dev/md3. The -A option assembles it:

mdadm -A /dev/md3 -a yes /dev/sd{e,f,g}2
# The members' superblocks already record the RAID level and member count, which naturally must match the original array
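To have an assembled array come back automatically after a reboot, its definition can be recorded in /etc/mdadm.conf; `mdadm --detail --scan` prints suitable ARRAY lines. A sketch (the stub only prints the commands, which otherwise need root and the real disks):

```shell
# Illustration stub: print the mdadm commands instead of executing them.
# Delete this function to run the commands for real (as root).
mdadm() { echo "mdadm $*"; }

# Assemble the moved members under the new array name:
mdadm -A /dev/md3 /dev/sd{e,f,g}2
# Print ARRAY lines (device, metadata version, UUID) to append to /etc/mdadm.conf:
mdadm --detail --scan
```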


III. JBOD

JBOD also combines several disks into one usable device, but unlike the RAID levels it does not distribute data across the members simultaneously; the devices are simply concatenated, and the second disk is only written once the first is full. For example, if a database file needs 500G but each available disk is only 300G, two disks can be joined into a single 600G device, and after the first disk's 300G is used up, writing continues on the second. JBOD is useful for big-data processing such as Hadoop: Hadoop already provides its own data redundancy, so a RAID array is unnecessary, but it needs very large amounts of storage, which is where JBOD comes in. The mdadm command supports JBOD as well.
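mdadm exposes JBOD as the linear level, and capacity is simply the sum of the members. A sketch with hypothetical device names (the stub only prints the mdadm command):

```shell
# Illustration stub: print the mdadm command instead of executing it.
# Delete this function to run the command for real (as root).
mdadm() { echo "mdadm $*"; }

# Concatenate two partitions into one linear (JBOD) device:
mdadm -C /dev/md4 -a yes -l linear -n 2 /dev/sd{e,f}1
# JBOD capacity is additive, e.g. two 300G disks:
echo "$(( 300 + 300 ))G"    # prints 600G
```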

This article comes from the blog “重剑无锋 大巧不工”; please keep this attribution: http://wuyelan.blog.51cto.com/6118147/1540237