CentOS RAID1 Recovery Notes

An ordinary PC running CentOS 5 with a LAMP stack. Because the data is important, two identical hard disks were set up as a software RAID1 mirror. Not long ago one disk stopped working and could no longer be detected in the BIOS, so the bad disk was removed and the machine ran on the single remaining disk. Now an identical replacement disk has been added to restore the software RAID1 setup. (/dev/hda has been fine all along; /dev/hdb failed and has been replaced with the new disk.)

The steps that got everything working again:

1. Open the case and install the new disk (on the same interface the failed disk used).

2. Boot into the system.

3. Dump the partition table of /dev/hda:

sfdisk -d /dev/hda > partition.hda
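The dump written by sfdisk -d is plain text, and it is worth a quick look before replaying it onto the new disk. A minimal sketch of what to check, using a hand-written sample file rather than a real dump (the start/size values below are illustrative only, loosely modeled on the partition sizes shown later in this post):

```shell
# Write a sample of the old sfdisk dump format (util-linux era,
# as on CentOS 5). Values here are made up for illustration.
cat > /tmp/partition.sample <<'EOF'
# partition table of /dev/hda
unit: sectors

/dev/hda1 : start=       63, size=   208782, Id=fd, bootable
/dev/hda2 : start=   208845, size= 14474565, Id=fd
EOF

# Count how many partitions carry the "Linux raid autodetect"
# type (0xfd) -- every RAID member partition should have it.
grep -c 'Id=fd' /tmp/partition.sample
```

If the count does not match the number of RAID member partitions you expect, stop and inspect the dump before writing it to the new disk.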

4. Create the same partitions on the new disk /dev/hdb as on /dev/hda:

sfdisk /dev/hdb < partition.hda

Checking that no-one is using this disk right now ...
OK

Disk /dev/hdb: 17753 cylinders, 15 heads, 63 sectors/track
Old situation:
Units = cylinders of 483840 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hdb1          0       -       0          0    0  Empty
/dev/hdb2          0       -       0          0    0  Empty
/dev/hdb3          0       -       0          0    0  Empty
/dev/hdb4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/hdb1   *        63    208844     208782  fd  Linux raid autodetect
/dev/hdb2        208845  14683409   14474565  fd  Linux raid autodetect
/dev/hdb3      14683410  16771859    2088450  fd  Linux raid autodetect
/dev/hdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

5. To be safe, list the partitions and check that the two disks match:

[root@localhost ~]# fdisk -l

Disk /dev/hda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   fd  Linux raid autodetect
/dev/hda2              14         914     7237282+  fd  Linux raid autodetect
/dev/hda3             915        1044     1044225   fd  Linux raid autodetect

Disk /dev/hdb: 8589 MB, 8589934592 bytes
15 heads, 63 sectors/track, 17753 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1   *           1         221      104391   fd  Linux raid autodetect
/dev/hdb2             222       15538     7237282+  fd  Linux raid autodetect
/dev/hdb3           15539       17748     1044225   fd  Linux raid autodetect

6. Check the current RAID1 status (note that only one disk is active in each array):

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hda2[0]
      7237184 blocks [2/1]
md1 : active raid1 hda3[0]
      1044160 blocks [2/1]
md0 : active raid1 hda1[0]
      104320 blocks [2/1]

7. Add the /dev/hdb partitions to the RAID1 arrays and put them to work (careful: do not pair a partition with the wrong array -- check against the status output above):

[root@localhost ~]# mdadm /dev/md0 -a /dev/hdb1
mdadm: hot added /dev/hdb1
[root@localhost ~]# mdadm /dev/md1 -a /dev/hdb3
mdadm: hot added /dev/hdb3
[root@localhost ~]# mdadm /dev/md2 -a /dev/hdb2
mdadm: hot added /dev/hdb2

8. Check the RAID1 status again (md2 is now rebuilding; how long that takes depends on the amount of data):

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdb2[2] hda2[0]
      7237184 blocks [2/1]
      [>....................]  recovery =  0.0% (1024/7237184) finish=109.6min speed=1024K/sec
md1 : active raid1 hdb3[1] hda3[0]
      1044160 blocks [2/2] [UU]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
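While the rebuild runs, progress can be checked by grepping /proc/mdstat for a resync/recovery line. A small sketch of the check, run here against a saved sample of the output above rather than the live file (on the machine itself you would read /proc/mdstat directly):

```shell
# Save a sample of mdstat output mid-rebuild, mirroring step 8.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md2 : active raid1 hdb2[2] hda2[0]
      7237184 blocks [2/1]
      [>....................]  recovery =  0.0% (1024/7237184) finish=109.6min speed=1024K/sec
md1 : active raid1 hdb3[1] hda3[0]
      1044160 blocks [2/2] [UU]
EOF

# A "recovery" (or "resync") line means at least one array is
# still rebuilding; its absence means all mirrors are in sync.
if grep -q 'recovery' /tmp/mdstat.sample; then
    echo "resync still running"
else
    echo "all arrays clean"
fi
```

Wrapping the check in a loop with a sleep gives a crude progress monitor; `watch cat /proc/mdstat` does the same thing interactively.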

9. Go out and chat with a girl for a few hours, then come back and check on the rebuild:

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdb2[1] hda2[0]
      7237184 blocks [2/2] [UU]
md1 : active raid1 hdb3[1] hda3[0]
      1044160 blocks [2/2] [UU]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
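Once every array shows [2/2] [UU], it is worth making sure /etc/mdadm.conf still describes both members so the arrays assemble cleanly at boot. A sketch of what such a file might look like for this layout is below; the device lists are taken from the arrays above, but on the machine itself `mdadm --detail --scan` would generate ARRAY lines with the arrays' real UUIDs, which is the safer way to produce them.

```
# /etc/mdadm.conf -- illustrative sketch, not copied from the
# original machine; prefer output of `mdadm --detail --scan`.
DEVICE /dev/hda* /dev/hdb*
ARRAY /dev/md0 level=raid1 num-devices=2 devices=/dev/hda1,/dev/hdb1
ARRAY /dev/md1 level=raid1 num-devices=2 devices=/dev/hda3,/dev/hdb3
ARRAY /dev/md2 level=raid1 num-devices=2 devices=/dev/hda2,/dev/hdb2
```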


