I have a 4 x 3 TB NAS set up as RAID5, which has been working great for almost a year.
After a recent abrupt shutdown (I had to hit the power button), the RAID will no longer mount on boot.
I've run:
mdadm --examine /dev/sd[bcdefghijklmn]1 >> raid.status
The output is below:
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : 2c1e0041:21d926d6:1c69aa87:f1340a12
Update Time : Sat Dec 27 20:54:55 2014
Checksum : d94ccaf5 - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : a0261c8f:8a2fbb93:4093753a:74e7c5f5
Update Time : Sat Dec 27 20:54:55 2014
Checksum : 7b84067b - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : 9dc56e9e:d6b00f7a:71da67c7:38b7436c
Update Time : Sat Dec 27 20:54:55 2014
Checksum : 749b3dba - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 81e5776f:2a466bee:399251a0:ab60e9a4
Update Time : Sun Nov 2 09:07:02 2014
Checksum : cb4aebaf - correct
Events : 159
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
When checking the disks in Ubuntu's Disk Manager, sda/b/c show as OK, and sdd shows as OK with 64 bad sectors.
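For reference, the SMART counters that would show whether those bad sectors on sdd are pending or already reallocated can be pulled with smartctl (a quick diagnostic sketch, assuming the smartmontools package is installed):
# Check reallocated/pending/uncorrectable sector counts on the suspect drive
sudo smartctl -a /dev/sdd | egrep -i 'reallocated|pending|uncorrect'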
If I run fsck /dev/md0
It reads:
fsck.ext2: Invalid argument while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
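As I understand it, fsck fails here simply because /dev/md0 is inactive, so there is no assembled device to read a superblock from; it doesn't necessarily mean the filesystem itself is gone. Once the array is assembled again, the backup superblock locations could be listed and checked read-only, roughly like this (a sketch only, assuming the filesystem was created with default ext4 options):
# Dry run: prints where the backup superblocks would be, writes nothing
sudo mke2fs -n /dev/md0
# Read-only check against one of the reported backup superblocks
sudo e2fsck -n -b 32768 /dev/md0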
Next, if I run
mdadm --examine /dev/sd[a-d] | egrep 'Event|/dev/sd'
I get:
/dev/sda:
Events : 17012
/dev/sdb:
Events : 17012
/dev/sdc:
Events : 17012
/dev/sdd:
Events : 159
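Putting those counts next to the Update Time fields above, sdd (events 159, last updated Sun Nov 2) is far behind the other three members (events 17012, Sat Dec 27), which suggests sdd dropped out of the array weeks before the power loss and the array has been running degraded since. A one-liner that shows both fields side by side:
mdadm --examine /dev/sd[a-d] | egrep 'Update Time|Events|/dev/sd'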
If I run cat /proc/mdstat
I get:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[1](S) sdc[2](S) sdd[3](S) sda[0](S)
11720542048 blocks super 1.2

unused devices: <none>
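As far as I can tell, the (S) after each member just means the devices are sitting in an inactive, unassembled array rather than having been demoted to spares; the array state could be confirmed with the following (output may vary by mdadm version):
sudo mdadm --detail /dev/md0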
Lastly, if I run file -s /dev/md0
I get:
/dev/md0: empty
Basically, I think I need to run --assemble on the RAID, but I'm afraid of losing my data, and that 4th drive also concerns me a little.
Could someone advise of the best next logical steps to get this up and running again?
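In case it helps frame an answer, the sequence I understand is normally used in this situation would look roughly like the following (a sketch only, not something I have run, and I'd want confirmation before touching the disks):
# Stop the inactive array, then force-assemble from the three members whose
# metadata agrees, leaving the stale sdd out for now
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc
# If it comes up degraded, mount read-only first to verify the data before going further
sudo mount -o ro /dev/md0 /mnt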