Linux Summary Notes 3: Disk Management (RAID, LVM)
2017-05-28 16:42
Lab 1 (Build a RAID10 array from four disks)
Note: this lab is done in a virtual machine. Shut the VM down first, add four 20 GB SCSI disks, then start it again.
Use fdisk -l to check whether the four new disks have been detected:
[root@linux1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098b09

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    41943039    20458496   8e  Linux LVM

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel-root: 18.8 GB, 18798870528 bytes, 36716544 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
As the output shows, the system has detected all four disks. If they do not show up, run partprobe a few times or reboot.
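If the disks still fail to appear, a manual rescan sometimes helps; a minimal sketch (the SCSI host number host0 is an assumption, check /sys/class/scsi_host/ for the right one):

partprobe                                        # ask the kernel to re-read partition tables
echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan one SCSI host for new disks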
Software RAID is configured and managed with the mdadm command. The general format is: "mdadm [mode] <RAID device> [options] [member devices]".
-C creates an array, -v prints progress, -a yes creates the array device file automatically, -n sets the number of member disks, and -l sets the RAID level.
[root@linux1 ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954624K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Once the array is created, format it:
[root@linux1 ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
After formatting, mount and use the new volume:
[root@linux1 ~]# mkdir /RAID
[root@linux1 ~]# mount /dev/md0 /RAID/
[root@linux1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root     18G  834M   17G   5% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    914M     0  914M   0% /dev/shm
tmpfs                    914M  8.5M  906M   1% /run
tmpfs                    914M     0  914M   0% /sys/fs/cgroup
/dev/sda1                497M   96M  401M  20% /boot
/dev/sr0                 3.5G  3.5G     0 100% /media/cdrom
/dev/md0                  40G   49M   38G   1% /RAID
Use mdadm -D to view the array's details. Note that four 20 GB disks yield roughly 40 GB of usable space: RAID10 keeps two copies of every block, so half the raw capacity goes to mirroring.
[root@linux1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  9 00:39:13 2017
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Apr  9 00:40:18 2017
          State : active, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
  Resync Status : 35% complete
           Name : linux1:0  (local to host linux1)
           UUID : ae531b60:e8eaa90c:e65297d6:bdba4bf5
         Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
Add the mount to /etc/fstab so it is mounted automatically at boot:
[root@linux1 ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@linux1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Apr  7 20:04:23 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /             xfs      defaults   1 1
UUID=bc96d34f-6f7c-41a1-bc85-1e18f5c6f6a3 /boot xfs defaults 1 2
/dev/mapper/rhel-swap   swap          swap     defaults   0 0
/dev/cdrom              /media/cdrom  iso9660  defaults   0 0
/dev/md0                /RAID         ext4     defaults   0 0
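Before relying on the new entry at boot, it can be validated in place, and it is common to also record the array in /etc/mdadm.conf so it reliably assembles under the same name. A minimal check, assuming the setup above:

umount /RAID
mount -a                        # mounts everything listed in fstab; an error here means a bad entry
df -h /RAID                     # confirm /dev/md0 came back
mdadm -D -s >> /etc/mdadm.conf  # persist the array definition for assembly at boot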
Lab 2 (Simulate the failure of one disk, building on Lab 1)
Use mdadm -f to mark a disk as faulty:

[root@linux1 ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
Check the array details again; /dev/sdb now shows as faulty:
[root@linux1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  9 00:39:13 2017
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Apr  9 01:12:31 2017
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : linux1:0  (local to host linux1)
           UUID : ae531b60:e8eaa90c:e65297d6:bdba4bf5
         Events : 37

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb
Because RAID10 tolerates the failure of one disk in the array, the filesystem is unaffected and keeps working normally:
[root@linux1 ~]# mkdir /RAID/test2
[root@linux1 ~]# ll /RAID/
total 24
drwx------. 2 root root 16384 Apr  9 00:39 lost+found
drwxr-xr-x. 2 root root  4096 Apr  9 00:40 test
drwxr-xr-x. 2 root root  4096 Apr  9 01:14 test2
In a real environment you would simply pull the failed disk, insert a new one, and add it back with mdadm; in a VM, a reboot is enough to simulate the swap (see the sketch after the commands below). Then unmount and re-add the disk:
[root@linux1 ~]# umount /RAID/
[root@linux1 ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
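On physical hardware the faulty member also has to be removed from the array before the disk is pulled; a typical full sequence, reusing the device names above, would be:

mdadm /dev/md0 -f /dev/sdb   # mark the member faulty (done earlier)
mdadm /dev/md0 -r /dev/sdb   # remove it from the array so the disk can be pulled
# ...physically swap in the new disk...
mdadm /dev/md0 -a /dev/sdb   # add the replacement; the resync starts automatically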
Check the array details once more; all four disks are active and working again:
[root@linux1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  9 00:39:13 2017
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Apr  9 01:21:54 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : linux1:0  (local to host linux1)
           UUID : ae531b60:e8eaa90c:e65297d6:bdba4bf5
         Events : 63

    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
Lab 3 (RAID5 with a hot spare: three disks in RAID5 plus one spare)
Note: this lab is done in a virtual machine. Shut the VM down first, add four 20 GB SCSI disks, then start it again.
Use fdisk -l to check whether the four new disks have been detected.
Create the RAID array with mdadm.
-C creates, -v prints progress, -n sets the number of active disks, -l sets the RAID level, and -x sets the number of hot spares.
[root@linux1 ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954624K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Check the array details; /dev/sde shows up as the hot spare:
[root@linux1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  9 00:40:09 2017
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Apr  9 00:42:07 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : linux1:0  (local to host linux1)
           UUID : f4e45153:711dc888:2cf0eead:d14eb816
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
Format it, then mount and use it:
[root@linux1 ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@linux1 ~]# mkdir /RAID
[root@linux1 ~]# mount /dev/md0 /RAID
[root@linux1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 18G 834M 17G 5% /
devtmpfs 908M 0 908M 0% /dev
tmpfs 914M 0 914M 0% /dev/shm
tmpfs 914M 8.5M 905M 1% /run
tmpfs 914M 0 914M 0% /sys/fs/cgroup
/dev/sda1 497M 96M 401M 20% /boot
/dev/sr0 3.5G 3.5G 0 100% /media/cdrom
/dev/md0 40G 49M 38G 1% /RAID
[root@linux1 ~]# mkdir /RAID/test3
[root@linux1 ~]# ll /RAID/
total 20
drwx------. 2 root root 16384 Apr 9 00:46 lost+found
drwxr-xr-x. 2 root root 4096 Apr 9 00:47 test3
Use mdadm -f to mark /dev/sdb as failed, then check the array with mdadm -D: the hot spare /dev/sde steps in automatically and the data rebuild starts, with no interruption to use:
[root@linux1 ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@linux1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Apr  9 00:40:09 2017
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Apr  9 00:49:20 2017
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
 Rebuild Status : 5% complete
           Name : linux1:0  (local to host linux1)
           UUID : f4e45153:711dc888:2cf0eead:d14eb816
         Events : 20

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
[root@linux1 ~]# mkdir /RAID/test4
[root@linux1 ~]# ll /RAID/
total 24
drwx------. 2 root root 16384 Apr  9 00:46 lost+found
drwxr-xr-x. 2 root root  4096 Apr  9 00:47 test3
drwxr-xr-x. 2 root root  4096 Apr  9 00:51 test4
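Rebuild progress can also be watched without rerunning mdadm -D, since the kernel exposes it in /proc/mdstat:

cat /proc/mdstat             # one-shot view of all md arrays and any rebuild progress
watch -n 1 cat /proc/mdstat  # refresh every second until the rebuild finishes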
Lab 4 (Deploy LVM logical volumes)
Note: this lab is done in a virtual machine. Shut the VM down first, add two 20 GB SCSI disks, then start it again.
Use fdisk -l to check whether the two new disks have been detected; if they don't show up, try partprobe a few times or reboot.
Turn the two new disks into physical volumes (PVs) and view the PV details:
[root@linux1 ~]# pvcreate /dev/sdb /dev/sdc
  Physical volume "/dev/sdb" successfully created
  Physical volume "/dev/sdc" successfully created
[root@linux1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel
  PV Size               19.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4994
  Free PE               0
  Allocated PE          4994
  PV UUID               pH6uh1-U3az-LA01-LL5q-4Vgb-3Xr1-o1RVYy

  "/dev/sdc" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               QE87g4-c0TJ-6z4V-jm0A-6h9X-hKir-fsk9RR

  "/dev/sdb" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               VNGFVB-fy9p-zU8y-XJun-E788-UvTd-X2rY8z
Create a volume group (VG) named storage containing the two physical volumes created in the previous step, then view the VG details:
[root@linux1 ~]# vgcreate storage /dev/sdb /dev/sdc
  Volume group "storage" successfully created
[root@linux1 ~]# vgdisplay
  --- Volume group ---
  VG Name               rhel
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.51 GiB
  PE Size               4.00 MiB
  Total PE              4994
  Alloc PE / Size       4994 / 19.51 GiB
  Free  PE / Size       0 / 0
  VG UUID               IeLe37-2NyG-gmmI-mnje-M3mt-g6JT-O1BEVf

  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               S2vI6p-2YL1-500p-UDBp-BBhx-pIBt-p5YKlr
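If the volume group ever fills up, it can be grown later without touching existing volumes; a hedged sketch, assuming a hypothetical extra disk /dev/sdd:

pvcreate /dev/sdd           # initialize the new disk as a physical volume
vgextend storage /dev/sdd   # add it to the storage volume group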
Carve a logical volume (LV) out of the volume group and view the LV details:
[root@linux1 ~]# lvcreate -n lv1 -L 200M storage
  Logical volume "lv1" created
[root@linux1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                FG5iv8-tAYY-UgNP-bqFX-SVUi-WUiq-11489M
  LV Write Access        read/write
  LV Creation host, time localhost, 2017-04-08 04:04:22 +0800
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                X4zVcc-sz2R-JqZt-Nqf2-9NF8-Qfar-oHtnaG
  LV Write Access        read/write
  LV Creation host, time localhost, 2017-04-08 04:04:23 +0800
  LV Status              available
  # open                 1
  LV Size                17.51 GiB
  Current LE             4482
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/storage/lv1
  LV Name                lv1
  VG Name                storage
  LV UUID                CRPQcp-YeDi-psDP-SAxH-LYBl-mPh8-Z8K12D
  LV Write Access        read/write
  LV Creation host, time linux1, 2017-04-09 05:41:32 +0800
  LV Status              available
  # open                 0
  LV Size                200.00 MiB
  Current LE             50
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
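Besides -L with an absolute size, lvcreate can allocate by physical extents with -l, which also accepts percentages; two hedged examples (lv2 is a hypothetical name, and the 4 MiB PE size comes from the vgdisplay output above):

lvcreate -n lv2 -l 50 storage        # 50 extents x 4 MiB = 200 MiB
lvcreate -n lv2 -l 100%FREE storage  # take all remaining space in the VG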
Format the logical volume and mount it:
[root@linux1 ~]# mkfs.ext4 /dev/storage/lv1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@linux1 ~]# mkdir /lvm
[root@linux1 ~]# mount /dev/storage/lv1 /lvm
[root@linux1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root     18G  833M   17G   5% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    914M     0  914M   0% /dev/shm
tmpfs                    914M  8.4M  906M   1% /run
tmpfs                    914M     0  914M   0% /sys/fs/cgroup
/dev/sr0                 3.5G  3.5G     0 100% /media/cdrom
/dev/sda1                497M   96M  401M  20% /boot
/dev/mapper/storage-lv1  190M  1.6M  175M   1% /lvm
Lab 5 (Extend a logical volume, building on Lab 4)
Unmount the logical volume before extending it:

[root@linux1 ~]# umount /lvm
Extend the logical volume lv1 created in Lab 4 to 500 MB:
[root@linux1 ~]# lvextend -L 500M /dev/storage/lv1
  Extending logical volume lv1 to 500.00 MiB
  Logical volume lv1 successfully resized
Check the filesystem's integrity, then grow the filesystem to fill the resized volume:
[root@linux1 ~]# e2fsck -f /dev/storage/lv1
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/lv1: 11/51200 files (0.0% non-contiguous), 12115/204800 blocks
[root@linux1 ~]# resize2fs /dev/storage/lv1
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/lv1 to 512000 (1k) blocks.
The filesystem on /dev/storage/lv1 is now 512000 blocks long.
Remount it and check the new size:
[root@linux1 ~]# mount /dev/storage/lv1 /lvm/
[root@linux1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root     18G  833M   17G   5% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    914M     0  914M   0% /dev/shm
tmpfs                    914M  8.4M  906M   1% /run
tmpfs                    914M     0  914M   0% /sys/fs/cgroup
/dev/sr0                 3.5G  3.5G     0 100% /media/cdrom
/dev/sda1                497M   96M  401M  20% /boot
/dev/mapper/storage-lv1  481M  2.3M  449M   1% /lvm
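As an aside, recent LVM versions can grow the filesystem in the same step: the -r (--resizefs) flag has lvextend call fsadm/resize2fs after extending the volume, so the one-liner below should be equivalent to the steps above:

lvextend -r -L 500M /dev/storage/lv1  # extend the LV and grow the filesystem in one step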
Lab 6 (Shrink a logical volume, building on Lab 5)
Shrinking a logical volume can destroy data, so back everything up before starting. Unmount the logical volume before shrinking it:
[root@linux1 ~]# umount /lvm
Check the filesystem's integrity:
[root@linux1 ~]# e2fsck -f /dev/storage/lv1
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/lv1: 11/129024 files (0.0% non-contiguous), 22696/512000 blocks
Shrink lv1 from Lab 5 down to 100 MB. The order matters here: shrink the filesystem with resize2fs first, then reduce the LV, otherwise the filesystem would end up larger than the volume beneath it:
[root@linux1 ~]# resize2fs /dev/storage/lv1 100M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/lv1 to 102400 (1k) blocks.
The filesystem on /dev/storage/lv1 is now 102400 blocks long.
[root@linux1 ~]# lvreduce -L 100M /dev/storage/lv1
  WARNING: Reducing active logical volume to 100.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv1? [y/n]: y
  Reducing logical volume lv1 to 100.00 MiB
  Logical volume lv1 successfully resized
Remount it and check the mount:
[root@linux1 ~]# mount /dev/storage/lv1 /lvm/
[root@linux1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root     18G  833M   17G   5% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    914M     0  914M   0% /dev/shm
tmpfs                    914M  8.5M  906M   1% /run
tmpfs                    914M     0  914M   0% /sys/fs/cgroup
/dev/sr0                 3.5G  3.5G     0 100% /media/cdrom
/dev/sda1                497M   96M  401M  20% /boot
/dev/mapper/storage-lv1   93M  1.6M   85M   2% /lvm
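The same -r flag should work for shrinking and gets the ordering right automatically, shrinking the filesystem before reducing the volume; a hedged one-step equivalent of the above:

lvreduce -r -L 100M /dev/storage/lv1  # shrink the filesystem first, then the LV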
Lab 7 (Logical volume snapshots)
Before creating a snapshot, decide which logical volume to snapshot and make sure its volume group has enough free space for a snapshot volume of the same size. Check the LV size with lvdisplay and the VG's free space with vgdisplay. (Sizing the snapshot the same as the origin LV is generally enough.)
[root@linux1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/lv1
  LV Name                lv1
  VG Name                storage
  LV UUID                CRPQcp-YeDi-psDP-SAxH-LYBl-mPh8-Z8K12D
  LV Write Access        read/write
  LV Creation host, time linux1, 2017-04-09 05:41:32 +0800
  LV Status              available
  # open                 1
  LV Size                100.00 MiB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
[root@linux1 ~]# vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       25 / 100.00 MiB
  Free  PE / Size       10213 / 39.89 GiB
  VG UUID               S2vI6p-2YL1-500p-UDBp-BBhx-pIBt-p5YKlr
Use the -s option to create a snapshot, -L to set its size, and -n to name it:
[root@linux1 ~]# lvcreate -L 100M -s -n SNAPSHOT /dev/storage/lv1
  Logical volume "SNAPSHOT" created
[root@linux1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/SNAPSHOT
  LV Name                SNAPSHOT
  VG Name                storage
  LV UUID                wAXd5q-Ga9l-dxG2-XsJ3-ZPBO-CaCC-cZgXhz
  LV Write Access        read/write
  LV Creation host, time linux1, 2017-04-09 06:18:54 +0800
  LV snapshot status     active destination for lv1
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Current LE             25
  COW-table size         100.00 MiB
  COW-table LE           25
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
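To see the snapshot earn its keep, change the origin volume now that the snapshot exists; the merge in the next step rolls the change back (the file name is purely illustrative):

echo "written after the snapshot" > /lvm/after.txt  # this file should vanish once the snapshot is merged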
Merge the snapshot back to restore the logical volume. Unmount lv1 before the merge:
[root@linux1 ~]# umount /lvm
[root@linux1 ~]# lvconvert --merge /dev/storage/SNAPSHOT
  Merging of volume SNAPSHOT started.
  lv1: Merged: 100.0%
  Merge of snapshot into logical volume lv1 has finished.
  Logical volume "SNAPSHOT" successfully removed
Remount it:
[root@linux1 ~]# mount /dev/storage/lv1 /lvm
Lab 8 (Remove logical volumes)
Back up before deleting anything, then remove things in order: LV -> VG -> PV. Unmount the logical volume first:
[root@linux1 ~]# umount /lvm/
Remove the LV, then the VG, then the PVs:
[root@linux1 ~]# lvremove /dev/storage/lv1
Do you really want to remove active logical volume lv1? [y/n]: y
  Logical volume "lv1" successfully removed
[root@linux1 ~]# vgremove storage
  Volume group "storage" successfully removed
[root@linux1 ~]# pvremove /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sdb" successfully wiped
  Labels on physical volume "/dev/sdc" successfully wiped
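A quick way to confirm everything is gone is LVM's summary commands, which after the removals above should list only the system's original rhel volumes:

lvs   # remaining logical volumes
vgs   # remaining volume groups
pvs   # remaining physical volumes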