
Creating and Managing Software RAID Arrays with mdadm on Linux

I. Introduction to RAID

RAID (Redundant Array of Independent Disks) is a redundant array built from multiple inexpensive disks. It exploits the combined strengths of several drives: higher throughput, larger capacity, fault tolerance to keep data safe, and easier management. Depending on the level, the array can keep working when one of its disks fails, unaffected by the damaged disk. RAID operation is divided mainly into levels 0 through 6; the levels commonly used today are 0, 1, 4, 5, 6 and 10 (a combination of RAID 1 and RAID 0). The sections below briefly walk through creating several of these levels.

II. Creating RAID 0

RAID 0 joins several identical disks together, in hardware or in software, and writes data across the member disks in turn (striping). Its characteristics: high read/write performance, but no fault tolerance. The following shows how to build /dev/sda5 and /dev/sda6 into a software RAID 0 array:

1. Change the partition type of /dev/sda5 and /dev/sda6 to fd (Linux raid autodetect):

[root@localhost ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 15665.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-8): 5
Hex code (type L to list codes): fd

Command (m for help): t
Partition number (1-8): 6
Hex code (type L to list codes): fd

Command (m for help): p

Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5235    41945715   8e  Linux LVM
/dev/sda3            5236        5366     1052257+  82  Linux swap / Solaris
/dev/sda4            5367       15665    82726717+   5  Extended
/dev/sda5            5367        5732     2939863+  fd  Linux raid autodetect
/dev/sda6            5733        6098     2939863+  fd  Linux raid autodetect
/dev/sda7            6099        6464     2939863+  fd  Linux raid autodetect
/dev/sda8            6465        6830     2939863+  fd  Linux raid autodetect

Command (m for help): w

2. Build /dev/sda5 and /dev/sda6 into a RAID 0 array:

[root@localhost ~]# mdadm -C /dev/md0 -l 0 -n 2 /dev/sda{5,6}
mdadm: /dev/sda5 appears to contain an ext2fs file system
size=8819328K  mtime=Sat Jun 23 10:56:53 2012
mdadm: /dev/sda5 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Jun 23 14:11:40 2012
mdadm: /dev/sda6 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Jun 23 14:11:40 2012
Continue creating array? y
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] [raid0]
md0 : active raid0 sda6[1] sda5[0]
5879552 blocks 64k chunks

unused devices: <none>
[root@localhost ~]#

3. Format /dev/md0 and mount it; the RAID 0 array is now ready for use.

[root@localhost ~]# mke2fs -j /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
735840 inodes, 1469888 blocks
73494 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1505755136
45 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md0 /data
[root@localhost ~]# ls /data
lost+found
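
If the array should survive a reboot, you would normally also record it in the mdadm configuration file and add the mount to /etc/fstab. A minimal sketch, assuming the RHEL-style config path /etc/mdadm.conf and an ext3 mount on /data as above:

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
[root@localhost ~]# echo "/dev/md0  /data  ext3  defaults  0 0" >> /etc/fstab

The first command appends an ARRAY line describing /dev/md0 so it is assembled at boot; the second makes the mount permanent.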

III. RAID 1: Characteristics and Creation

RAID 1, also called disk mirroring, copies the data on one disk onto a second disk. This gives RAID 1 strong fault tolerance, but write performance is lower and disk utilization is poor (only half the raw capacity is usable); at least two disks are required. The creation procedure is essentially the same as for RAID 0, only the parameters of the mdadm command change to:

mdadm -C /dev/md1 -l 1 -n 2 /dev/sda{5,6}; the remaining steps are the same as for RAID 0 and are not repeated here.
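
For reference, a condensed sketch of the whole RAID 1 sequence; the mount point /data is carried over from the RAID 0 example and is an assumption:

[root@localhost ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sda{5,6}
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mke2fs -j /dev/md1
[root@localhost ~]# mount /dev/md1 /data

As with RAID 0, /proc/mdstat should show the new array (here as raid1) before it is formatted and mounted.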

IV. Characteristics of RAID 4

RAID 4 adds parity: the parity block (the XOR of the corresponding blocks on the data disks) is stored on a dedicated parity disk, so the data can still be reconstructed when one disk fails. At least three disks are required. Every data update involves four I/O requests (read old data, read old parity, write new data, write new parity), so write performance is not high.
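
Although this article does not build a RAID 4 array, mdadm supports level 4 as well (the raid4 personality appears in the /proc/mdstat output above). A minimal sketch, with the array name /dev/md2 and the member partitions chosen only for illustration:

[root@localhost ~]# mdadm -C /dev/md2 -l 4 -n 3 /dev/sda{5,6,7}
[root@localhost ~]# cat /proc/mdstat

The formatting and mounting steps are otherwise the same as for the other levels.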

V. RAID 5: Characteristics, Creation, and Management

RAID 5 is essentially the same as RAID 4, except that the parity blocks are distributed across all member disks, which evens out the load and gives good read/write performance. The following shows how to create a RAID 5 array that also has a hot-spare disk.

1. Create a RAID 5 array with a hot spare from /dev/sda{5,6,7,8}, as shown below:

[root@localhost ~]# mdadm -C /dev/md1 -l 5 -n 3 -x 1 /dev/sda{5,6,7,8}
mdadm: /dev/sda5 appears to contain an ext2fs file system
size=5879552K  mtime=Sat Jun 23 14:15:52 2012
mdadm: /dev/sda5 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Jun 23 14:12:50 2012
mdadm: /dev/sda6 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Jun 23 14:12:50 2012
mdadm: /dev/sda7 appears to be part of a raid array:
level=raid5 devices=4 ctime=Sat Jun 23 10:38:54 2012
mdadm: /dev/sda8 appears to contain an ext2fs file system
size=8819328K  mtime=Sat Jun 23 10:56:53 2012
mdadm: /dev/sda8 appears to be part of a raid array:
level=raid5 devices=4 ctime=Sat Jun 23 10:38:54 2012
Continue creating array? y
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] [raid0]
md1 : active raid5 sda7[4] sda8[3](S) sda6[1] sda5[0]
5879552 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[==>..................]  recovery = 14.7% (434172/2939776) finish=2.7min speed=14971K/sec

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat Jun 23 14:39:31 2012
Raid Level : raid5
Array Size : 5879552 (5.61 GiB 6.02 GB)
Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Sat Jun 23 14:39:31 2012
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 4
Failed Devices : 0
Spare Devices : 2

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 30% complete

UUID : 2f398377:8996228f:5da4c195:b9923acd
Events : 0.1

Number   Major   Minor   RaidDevice State
0       8        5        0      active sync   /dev/sda5
1       8        6        1      active sync   /dev/sda6
4       8        7        2      spare rebuilding   /dev/sda7

3       8        8        -      spare   /dev/sda8
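
While /dev/sda7 is still rebuilding ("spare rebuilding" above), you can follow the progress or simply block until the resync finishes. A small sketch; both commands are optional:

[root@localhost ~]# watch -n 1 cat /proc/mdstat
[root@localhost ~]# mdadm --wait /dev/md1

The first refreshes the /proc/mdstat view every second; the second returns only once the recovery has completed.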

2. Simulate a failure of /dev/sda5 and let the hot spare /dev/sda8 take its place:

[root@localhost ~]# mdadm -f /dev/md1 /dev/sda5
mdadm: set /dev/sda5 faulty in /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat Jun 23 14:39:31 2012
Raid Level : raid5
Array Size : 5879552 (5.61 GiB 6.02 GB)
Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Sat Jun 23 14:45:38 2012
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 10% complete

UUID : 2f398377:8996228f:5da4c195:b9923acd
Events : 0.6

Number   Major   Minor   RaidDevice State
3       8        8        0      spare rebuilding   /dev/sda8
1       8        6        1      active sync   /dev/sda6
2       8        7        2      active sync   /dev/sda7

4       8        5        -      faulty spare   /dev/sda5
[root@localhost ~]#

The failed sda5 can now be removed from the array and replaced with a good disk; a sketch of this follows below. Note: if you are only running RAID tests, be sure to stop the array when you are done, because repartitioning while the array is still active can produce some puzzling errors. The steps to stop the array come after the sketch:
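
A minimal sketch of replacing the failed member; the replacement partition /dev/sdb5 is hypothetical and assumes a new disk has already been partitioned with type fd:

[root@localhost ~]# mdadm -r /dev/md1 /dev/sda5
[root@localhost ~]# mdadm -a /dev/md1 /dev/sdb5

The -r option removes the faulty member from /dev/md1 and -a adds the new partition, which then rebuilds just as the hot spare did.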

[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] [raid0]
unused devices: <none>
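
If these partitions will later be repartitioned or reused, it is also worth wiping the old RAID superblocks so that mdadm does not keep reporting them as part of a stale array (the "appears to be part of a raid array" warnings seen earlier). A minimal sketch:

[root@localhost ~]# mdadm --zero-superblock /dev/sda{5,6,7,8}

This clears the md metadata from each partition; any data on them should be considered gone at this point.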


This article originally appeared on the "是木成林" blog; please retain the source when reposting: http://qingmu.blog.51cto.com/4571483/906776