2 Volumes in a single EX4

Hi All,

I would like to clarify the RAID setup in My Cloud EX4.

All 4 slots are occupied with 2TB hard drives. From the RAID profile I can see the following:

Volume_1 | RAID 1
Volume_2 | RAID 1

  1. Does that mean Volume_1 = Slot 1 and Slot 2, and Volume_2 = Slot 3 and Slot 4?

  2. Is Volume_2 a mirror of Volume_1? That would mean I cannot take out the hard drives in Slot 3 and Slot 4?

Thanks,
CL

As far as which volumes are which drives, it depends on how you configured it. But I can tell you that volume 2 is NOT a mirror of 1.

Hi,

  1. The RAID was set up many years back, and I don’t have the info on which drive goes to which Volume. Is there any way to check this?

  2. Currently my Drive 2 is in a “Bad” state and my Volume_1 is in a “Degraded” state, so I assume Drive 1 + 2 = Volume_1 and Drive 3 + 4 = Volume_2? And from what I understand they are two totally separate volumes, not related?

  3. All my files are created under Volume_1; I don’t have any files in Volume_2. I plan to replace the faulty Drive 2 with one of the drives in Volume_2 (Drive 3/4), which is why I initially asked whether Volume_2 is actually a mirror of Volume_1. (A Western Digital engineer told me IT IS, and advised me NOT to format Volume_2, or else it would break the RAID 1 mirror.)

  4. I intend to format Volume_2 through Settings > Utilities and use one of the drives in Volume_2 to swap out the faulty Drive 2. Will there be any concern with this action?

Thanks in advance!

Log in via SSH and post the output of the commands

cat /proc/mdstat

And

mount

Hi,

Output:

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdc2[0] sdd2[1]
1949319032 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 131072KB chunk

md1 : active raid1 sdb2[1](F) sda2[0]
1949319032 blocks super 1.0 [2/1] [U_]
bitmap: 1/1 pages [32KB], 131072KB chunk

md0 : active raid1 sdd1[3] sdc1[2] sdb1[4](F) sda1[0]
2097088 blocks [4/3] [U_UU]
bitmap: 8/16 pages [256KB], 8KB chunk

unused devices: &lt;none&gt;

mount

%root% on / type unknown (rw)
proc on /proc type proc (rw)
/dev/ram0 on / type ext2 (rw)
sysfs on /sys type sysfs (defaults)
mdev on /dev type tmpfs (defaults)
proc on /proc type proc (0)
cgroup_root on /cgroup type tmpfs (rw,nosuid,nodev,noexec)
memory on /cgroup/memory type cgroup (memory,rw,nosuid,nodev,noexec)
ubi0:config on /usr/local/config type ubifs (0)
squash on /usr/local/tmp type ramfs (rw,size=105m)
/usr/local/tmp/image.cfs on /usr/local/modules type squashfs (rw,loop=/dev/loop0 )
tmpfs on /mnt type tmpfs (size=1m,nr_inodes=0)
tmpfs on /var/log type tmpfs (size=40m,nr_inodes=0)
tmpfs on /tmp type tmpfs (size=100m,nr_inodes=20000)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/HD_a4 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)
/dev/sdb4 on /mnt/HD_b4 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)
/dev/sdc4 on /mnt/HD_c4 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)
/dev/sdd4 on /mnt/HD_d4 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)
/dev/md1 on /mnt/HD/HD_a2 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)
/dev/md2 on /mnt/HD/HD_b2 type ext4 (rw,noatime,nodiratime,usrquota,grpquota)

Ok, that indicates that group “md1” is indeed made up of drives 1 & 2, and “md2” is made of drives 3 & 4.

Further, it shows that group “md1” is mounted as “/mnt/HD/HD_a2” and “md2” is mounted as …“b2.”
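
If you want to double-check that mapping, mdadm can list the members of each group directly (assuming the mdadm binary is present in the EX4 firmware, which it normally is on these units):

mdadm --detail /dev/md1

mdadm --detail /dev/md2

The device table at the bottom of each report lists the member partitions, e.g. /dev/sda2 and /dev/sdb2 for md1.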

So now, to confirm which one Volume 1 is, from the SSH login, do

cd /shares/Volume_1

If that changes the prompt to something like this:

root@WDMyCloudEX4 HD_a2#

… that confirms that a2 (and thus md1) is indeed Volume 1.
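
As a cross-check that doesn’t rely on the prompt, df on the share path should report the backing device (assuming the share really does live at /shares/Volume_1):

df /shares/Volume_1

The Filesystem column should show /dev/md1, matching the /mnt/HD/HD_a2 mount in your output.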

Hi,

Thanks for the info, appreciate it.

I swapped Drive 3 with Drive 2 and ran a manual rebuild, but I notice the RAID 1 of Volume_1 didn’t rebuild onto Drive 1 + 2. It seems Volume_1 followed the faulty drive (which is in Slot 3 now).

Volume_1 now consists of Drive 1 + 3. Am I reading this correctly? Is there any way I can rebuild Volume_1 with a Volume_2 drive?

cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb2[0] sdd2[1]
1949319032 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 131072KB chunk

md1 : active raid1 sda2[0] sdc2[1]
1949319032 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 131072KB chunk

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
2097088 blocks [4/4] [UUUU]
bitmap: 0/16 pages [0KB], 8KB chunk
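
The arrays follow the drives themselves, since the RAID metadata lives on the disks, which is why the rebuild latched onto the faulty drive now sitting in Slot 3 rather than the good drive you moved into Slot 2. Below is a minimal sketch of moving the good ex-Volume_2 drive into Volume_1 by hand with mdadm, assuming the device names from your latest mdstat (sdc2 = the faulty member now in md1, sdb2 = the good member of md2). Re-verify these against /proc/mdstat before running anything, since this wipes the drive being moved and degrades Volume_2:

mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2   # drop the faulty drive out of Volume_1
mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2   # free the good drive from Volume_2 (degrades it)
mdadm --zero-superblock /dev/sdb2                    # wipe its old RAID metadata so md1 accepts it as a fresh member
mdadm /dev/md1 --add /dev/sdb2                       # rebuild Volume_1 onto the good drive

cat /proc/mdstat should then show md1 resyncing onto sdb2. Note this only covers the big data partitions; the firmware also keeps the small md0 array and the per-drive sdX4 mounts on every disk, so letting the Dashboard do the rebuild after a format may still be the safer route.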