After deleting some files on the shares, my MyBook World 1TB NAS failed, and I now receive the following message on the main page of the web interface: “Warning: is_dir(): Stat failed for /DataVolume/_torrent_ (errno=5 - Input/output error) in /proto/SxM_webui/ctcs/ctcsconfig.inc on line 24”. After some digging on the internet and connecting to the SSH interface, I realized the MBWE doesn’t mount the shares.
So I checked whether the partitions on the hard drive were still intact by running fdisk -l:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 5 248 1959930 fd Linux raid autodetect
/dev/sda2 249 280 257040 fd Linux raid autodetect
/dev/sda3 281 403 987997+ fd Linux raid autodetect
/dev/sda4 404 121601 973522935 fd Linux raid autodetect
When I tried to mount the /dev/sda4 partition manually, I received the following message:
/$ mount /dev/sda4 /shares
mount: Mounting /dev/sda4 on /shares failed: Device or resource busy
I don’t know what to do, and the data are vital, so a factory reset is out of the question!
enter666 wrote:
> So I checked whether the partitions on the hard drive were still intact by running fdisk -l:
> Device Boot Start End Blocks Id System
> /dev/sda1 5 248 1959930 fd Linux raid autodetect
> /dev/sda2 249 280 257040 fd Linux raid autodetect
> /dev/sda3 281 403 987997+ fd Linux raid autodetect
> /dev/sda4 404 121601 973522935 fd Linux raid autodetect
> When I tried to mount the /dev/sda4 partition manually, I received the following message:
> /$ mount /dev/sda4 /shares
> mount: Mounting /dev/sda4 on /shares failed: Device or resource busy
> I don’t know what to do, and the data are vital, so a factory reset is out of the question!
Hmmm, well you’re logged into the SSH interface. So the system should be up and running and those partitions should be mounted.
My MBWE failed last week, and I recovered the data off it over the last couple of days (most of the time spent copying files, some spent thinking). So I can’t check exactly how the commands would work SSHing into the MBWE.
When I mounted my drive in a USB enclosure on my Ubuntu laptop, I saw essentially the same partition table as you see - so your data is almost certainly OK.
I then had to install the mdadm package (multiple-device administration) on my laptop; I would expect it to be present on the MBWE already. Check with something like “mdadm --help” on your SSH login.
If you’ve got that, try a “cat /proc/mdstat”; you should be told which RAID devices are set up on the machine.
Now, you’re going to need to read and understand what comes back from the machine. If you’ve got an array, then the version of mdadm on the MBWE should have a tool for telling you its name and mount point. Those mount points should then be listed in /etc/fstab to make them accessible to users.
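For what it’s worth, these are roughly the commands I used on the Ubuntu side; I’m going from memory, and the MBWE’s mdadm may be an older or cut-down build, so treat them as a sketch rather than gospel:
/$ mdadm --detail /dev/md0    # prints the state, member partitions and UUID of one array
/$ cat /etc/fstab             # lists which devices should be mounted where at boot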
Oh, hang on, you said:
> When I tried to mount the /dev/sda4 partition manually, I received the following message:
> /$ mount /dev/sda4 /shares
> mount: Mounting /dev/sda4 on /shares failed: Device or resource busy
IT’S BUSY BECAUSE IT’S ALREADY IN USE BY MDADM (or its equivalent)!! So you need to use the tools of the mdadm package to find out what the mount points are. But by the time I got to that point, I was minutes away from breathing a huge sigh of relief and getting on with copying data onto another drive.
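To find out what is mounted where, you don’t strictly need mdadm at all; any of these should work in the MBWE’s shell (they’re standard Linux commands, though I can’t test on the MBWE itself):
/$ mount              # with no arguments, lists everything currently mounted
/$ cat /proc/mounts   # the same information, straight from the kernel
/$ df -h              # mounted filesystems plus their sizes and usage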
First of all, my HDD is still in the MBWE; it’s not attached to a PC. Also, my version of the MBWE has only one hard drive, so I don’t really know if it is using RAID to write data to the HDD at all.
The file systems are:
/proc$ df -h
Filesystem Size Used Available Use% Mounted on
/dev/md0 1.8G 121.7M 1.6G 7% /
/dev/md3 949.6M 120.5M 780.8M 13% /var
/dev/md2 928.3G 885.8G 42.5G 95% /DataVolume
/dev/ram0 61.9M 20.0k 61.9M 0% /mnt/ram
I tried:
root$ mdadm --assemble /dev/sda4
mdadm: device /dev/sda4 exists but is not an md array.
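(I later realized /dev/sda4 is only a member of an array, not an array itself; if I read the mdadm help correctly, the syntax would have been something like “mdadm --assemble /dev/md2 /dev/sda4”, but as the output below shows, md2 was already assembled anyway.)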
I even ran fstab:
/root$ /etc/fstab
-sh: /etc/fstab: not found
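(I just typed the path, so the shell tried to execute it; I suppose “cat /etc/fstab” would have been the proper command, but the “not found” suggests the file simply doesn’t exist on this firmware.)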
I ran the cat /proc/mdstat command and received the output below, so I could see the RAID arrays are OK (each one shows [2/1] [U_], i.e. only one of two mirror members present, which I understand is normal on a single-drive unit):
/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[0]
256960 blocks [2/1] [U_]
md3 : active raid1 sda3[0]
987904 blocks [2/1] [U_]
md2 : active raid1 sda4[0]
973522816 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
1959808 blocks [2/1] [U_]
I then mounted the md2 partition on /DataVolume, but when I try to list the contents of the DataVolume folder, I receive:
/DataVolume$ ls -a
ls: .: Input/output error
After some digging on the internet for a solution to my problem, I discovered something interesting:
/$ dmesg
…
Filesystem “md2”: Disabling barriers, not supported by the underlying device
XFS mounting filesystem md2
Starting XFS recovery on filesystem: md2 (logdev: internal)
Filesystem “md2”: corrupt dinode 2705308764, (btree extents). Unmount and run xfs_repair.
Filesystem “md2”: XFS internal error xfs_bmap_read_extents(1) at line 4551 of file fs/xfs/xfs_bmap.c. Caller 0xc013a3a8
[] (dump_stack+0x0/0x14) from [] (xfs_error_report+0x54/0x64)
[] (xfs_error_report+0x0/0x64) from [] (xfs_bmap_read_extents+0x5b8/0x61c)
r4:a13fbc5c
[] (xfs_bmap_read_extents+0x0/0x61c) from [] (xfs_iread_extents+0x88/0xf8)
[] (xfs_iread_extents+0x0/0xf8) from [] (xfs_bunmapi+0xadc/0xf90)
r7:00000002 r6:00000000 r5:c7844c98 r4:00000000
[] (xfs_bunmapi+0x0/0xf90) from [] (xfs_itruncate_finish+0x228/0x3d0)
[] (xfs_itruncate_finish+0x0/0x3d0) from [] (xfs_inactive+0x444/0x4a4)
[] (xfs_inactive+0x0/0x4a4) from [] (xfs_fs_clear_inode+0x58/0x90)
[] (xfs_fs_clear_inode+0x0/0x90) from [] (clear_inode+0x64/0x110)
r6:c7fff440 r5:c7843ce0 r4:c7843ce0
[] (clear_inode+0x0/0x110) from [] (generic_delete_inode+0xec/0x104)
r4:00000000
[] (generic_delete_inode+0x0/0x104) from [] (generic_drop_inode+0x15c/0x180)
r5:00000000 r4:c7843ce0
[] (generic_drop_inode+0x0/0x180) from [] (iput+0x84/0xa4)
r6:c7fff440 r5:00000000 r4:c7843ce0
[] (iput+0x0/0xa4) from [] (xlog_recover_process_iunlinks+0x3c4/0x3ec)
r4:00000000
[] (xlog_recover_process_iunlinks+0x0/0x3ec) from [] (xlog_recover_finish+0xa8/0xb8)
[] (xlog_recover_finish+0x0/0xb8) from [] (xfs_log_mount_finish+0x38/0x3c)
r5:00000000 r4:00000400
[] (xfs_log_mount_finish+0x0/0x3c) from [] (xfs_mountfs+0x9d4/0xbfc)
r7:c7843e20 r6:00000000 r5:00000000 r4:00000000
[] (xfs_mountfs+0x0/0xbfc) from [] (xfs_ioinit+0x14/0x18)
[] (xfs_ioinit+0x0/0x18) from [] (xfs_mount+0x348/0x390)
[] (xfs_mount+0x0/0x390) from [] (xfs_fs_fill_super+0xc4/0x230)
[] (xfs_fs_fill_super+0x0/0x230) from [] (get_sb_bdev+0x14c/0x17c)
[] (get_sb_bdev+0x0/0x17c) from [] (xfs_fs_get_sb+0x20/0x2c)
[] (xfs_fs_get_sb+0x0/0x2c) from [] (vfs_kern_mount+0xac/0x134)
r4:c7db9000
[] (vfs_kern_mount+0x0/0x134) from [] (do_kern_mount+0x40/0xe0)
[] (do_kern_mount+0x0/0xe0) from [] (do_mount+0x550/0x654)
r8:00000000 r7:c7da6000 r6:c7f9a000 r5:c7d96000 r4:00000000
[] (do_mount+0x0/0x654) from [] (sys_mount+0x8c/0xd4)
[] (sys_mount+0x0/0xd4) from [] (ret_fast_syscall+0x0/0x2c)
r7:00000015 r6:be9e6b34 r5:0001d148 r4:0001d198
Filesystem “md2”: XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c. Caller 0xc015c998
[] (dump_stack+0x0/0x14) from [] (xfs_error_report+0x54/0x64)
[] (xfs_error_report+0x0/0x64) from [] (xfs_trans_cancel+0x108/0x130)
r4:00448000
[] (xfs_trans_cancel+0x0/0x130) from [] (xfs_inactive+0x458/0x4a4)
r8:00000000 r7:00000000 r6:c7844c60 r5:00000000 r4:00000004
[] (xfs_inactive+0x0/0x4a4) from [] (xfs_fs_clear_inode+0x58/0x90)
[] (xfs_fs_clear_inode+0x0/0x90) from [] (clear_inode+0x64/0x110)
r6:c7fff440 r5:c7843ce0 r4:c7843ce0
[] (clear_inode+0x0/0x110) from [] (generic_delete_inode+0xec/0x104)
r4:00000000
[] (generic_delete_inode+0x0/0x104) from [] (generic_drop_inode+0x15c/0x180)
r5:00000000 r4:c7843ce0
[] (generic_drop_inode+0x0/0x180) from [] (iput+0x84/0xa4)
r6:c7fff440 r5:00000000 r4:c7843ce0
[] (iput+0x0/0xa4) from [] (xlog_recover_process_iunlinks+0x3c4/0x3ec)
r4:00000000
[] (xlog_recover_process_iunlinks+0x0/0x3ec) from [] (xlog_recover_finish+0xa8/0xb8)
[] (xlog_recover_finish+0x0/0xb8) from [] (xfs_log_mount_finish+0x38/0x3c)
r5:00000000 r4:00000400
[] (xfs_log_mount_finish+0x0/0x3c) from [] (xfs_mountfs+0x9d4/0xbfc)
r7:c7843e20 r6:00000000 r5:00000000 r4:00000000
[] (xfs_mountfs+0x0/0xbfc) from [] (xfs_ioinit+0x14/0x18)
[] (xfs_ioinit+0x0/0x18) from [] (xfs_mount+0x348/0x390)
[] (xfs_mount+0x0/0x390) from [] (xfs_fs_fill_super+0xc4/0x230)
[] (xfs_fs_fill_super+0x0/0x230) from [] (get_sb_bdev+0x14c/0x17c)
[] (get_sb_bdev+0x0/0x17c) from [] (xfs_fs_get_sb+0x20/0x2c)
[] (xfs_fs_get_sb+0x0/0x2c) from [] (vfs_kern_mount+0xac/0x134)
r4:c7db9000
[] (vfs_kern_mount+0x0/0x134) from [] (do_kern_mount+0x40/0xe0)
[] (do_kern_mount+0x0/0xe0) from [] (do_mount+0x550/0x654)
r8:00000000 r7:c7da6000 r6:c7f9a000 r5:c7d96000 r4:00000000
[] (do_mount+0x0/0x654) from [] (sys_mount+0x8c/0xd4)
[] (sys_mount+0x0/0xd4) from [] (ret_fast_syscall+0x0/0x2c)
r7:00000015 r6:be9e6b34 r5:0001d148 r4:0001d198
xfs_force_shutdown(md2,0x8) called from line 1164 of file fs/xfs/xfs_trans.c. Return address = 0xc0151038
Filesystem “md2”: Corruption of in-memory data detected. Shutting down filesystem: md2
Please umount the filesystem, and rectify the problem(s)
Ending XFS recovery on filesystem: md2 (logdev: internal)
Filesystem “md2”: Failed to initialize disk quotas.
oxnas_wd810_leds_state state=13
oxnas_wd810_leds_state state=1
oxnas_wd810_leds_state state=14
oxnas_wd810_leds_state state=1
oxnas_wd810_leds_state state=1
oxnas_wd810_leds_state state=1
xfs_force_shutdown(md2,0x1) called from line 420 of file fs/xfs/xfs_rw.c. Return address = 0xc015d150
oxnas_wd810_leds_state state=0
oxnas_wd810_leds_state state=1
oxnas_wd810_leds_state state=1
oxnas_wd810_leds_state state=1
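The log itself says to unmount and run xfs_repair. If I understand the man page correctly, the steps would be something like the lines below, but I haven’t dared to run them, and I don’t even know whether xfs_repair is included in the MBWE firmware:
/$ umount /DataVolume         # the filesystem must not be mounted during repair
/$ xfs_repair -n /dev/md2     # -n = check only, report problems without writing anything
/$ xfs_repair /dev/md2        # the actual repair
/$ mount /dev/md2 /DataVolume # mount it again and check the files
I also read that xfs_repair may refuse to run if the log is dirty, and that -L forces it by zeroing the log, but since that can destroy recent changes I’d rather ask before trying it. Is that right?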
I’m not a Linux expert, and I don’t really want to mess up the data, because I really don’t know what I’m doing here. So if you could be more precise about the command lines, step by step, I would be very grateful to you!