[mythtv-users] semiOT: minor emergency, raid array's contents deleted

John Drescher drescherjm at gmail.com
Mon Dec 7 17:20:36 UTC 2009


On Mon, Dec 7, 2009 at 12:16 PM, Steven Adeff <adeffs.mythtv at gmail.com> wrote:
> On Mon, Dec 7, 2009 at 12:03 PM, John Drescher <drescherjm at gmail.com> wrote:
>> On Mon, Dec 7, 2009 at 11:56 AM, Steven Adeff <adeffs.mythtv at gmail.com> wrote:
>>> so here's a weird one...
>>>
>>> I was watching some TV, then shut down my frontend, ssh'd into the backend,
>>> and noticed that both my raid arrays are now "empty". The filesystems,
>>> which are JFS, check out as fine, but both arrays, on separate computers,
>>> are now completely blank. I have absolutely no idea how this happened, but
>>> I'm hoping someone here has an idea as to what may have happened and
>>> whether there is a way to recover all my lost data.
>>>
>>
>> With Linux software raid, recovery is usually possible. I have not seen
>> a case where I could not recover, and I run dozens of arrays at work.
>>
>> Your description of the problem is very unclear.
>>
>> Can you post the output of
>>
>> cat /proc/mdstat
>>
>> It should look something like this:
>>  # cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
>> md0 : active raid5 sx8/1[0] sx8/4[4] sx8/3[3] sx8/5[2] sx8/2[1]
>>      976793600 blocks level 5, 256k chunk, algorithm 2 [5/5] [UUUUU]
>>
>> md1 : active raid6 sx8/6[0] sx8/15[9] sx8/14[8] sx8/12[6] sx8/11[5]
>> sx8/10[4] sx8/9[3] sx8/8[2] sx8/7[1]
>>      1953587200 blocks level 6, 512k chunk, algorithm 2 [10/9] [UUUUUUU_UU]
>>
>> unused devices: <none>
>>
>> Well, except for the bad drive; I am working on that at the moment. I have
>> to either wait till everyone goes home or reboot the server to remove
>> the 5 drives from md0 so that I can replace them with 6x1TB drives and
>> use one of the old 250GB SATA1 disks in md1.
>>
>> John
>
> The array looks fine:
> Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0]
> [raid1] [raid10]
> md0 : active raid5 sda[0] sde1[5] sdf1[4] sdd1[3] sdc1[2] sdb1[1]
>      1465248000 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>
> unused devices: <none>
>
> And the filesystems come back as "clean":
> # jfs_fsck /dev/md0
> jfs_fsck version 1.1.12, 24-Aug-2007
> processing started: 12/7/2009 12.6.12
> Using default parameter: -p
> The current device is:  /dev/md0
> Block size in bytes:  4096
> Filesystem size in blocks:  366312000
> **Phase 0 - Replay Journal Log
> Filesystem is clean.
>
> But each filesystem shows up as empty, as if someone did an "rm * -f", which
> I didn't do explicitly. It blanked both my arrays, which makes me think
> something like an "rm -rf *" was run in the directories where I mount them.
> I just don't know how, since I haven't done anything that I can think of
> that would cause it; I've just been playing around with getting a diskless
> frontend going for the last week.
>
>
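
Since both arrays went blank at about the same time, it might also be worth
grepping the shell history on the backend to see whether some stray command
or script removed everything under the mount points. The history paths below
are just the usual defaults; adjust for your users:

# look for any rm invocations in recent shell history
grep -n 'rm ' /root/.bash_history /home/*/.bash_history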

You may want to look into trying to undelete on JFS. Here is the
first thread I found:

http://www.mail-archive.com/jfs-discussion@www-124.ibm.com/msg01304.html

I have done undeletes on ext3 and reiserfs, but not JFS.
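
Whatever you try, stop writing to those filesystems first so the freed blocks
do not get reused. If you can, remount them read-only and experiment on a copy
rather than on the live array; something along these lines, where the mount
point and image path are just examples for your setup:

# keep the filesystem from reusing the freed blocks
mount -o remount,ro /mnt/store
# image the array to spare space elsewhere and attempt recovery on the copy
dd if=/dev/md0 of=/path/on/another/disk/md0.img bs=1M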

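As an aside on the drive swap I mentioned above: what I have in mind is the
usual mdadm fail/remove/add cycle, one member at a time, roughly like this
(device names are just placeholders for my setup):

# mark the old member failed and pull it from the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# add the replacement and let the array rebuild onto it
mdadm /dev/md0 --add /dev/sdg1
# watch the rebuild progress
cat /proc/mdstat
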
John

