Restored a failed RAID 5 from a DL4100 with a bad motherboard to a new PR4100

I was able to partition the new drive with gdisk, which is the GPT equivalent of fdisk. I got the partition sizes from the other good drives (they are all the same WD drive). The fourth drive was the one that needed to be replaced in the RAID 5.
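
For reference, the layout of a good drive can be printed and copied from there; if sgdisk happens to be on the system, the whole table can even be cloned in one step (assuming the good drives are sda through sdc and the new drive is sdd):

    # Print the partition layout of a good drive to copy the sizes from
    gdisk -l /dev/sda

    # Optional shortcut if sgdisk is available: replicate sda's table onto
    # the new sdd, then give the copy fresh GUIDs
    sgdisk -R /dev/sdd /dev/sda
    sgdisk -G /dev/sdd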

The first partition is swap, and it is combined with the first partitions of the other drives as RAID 1 (/dev/md0). In gdisk, set the partition type to 8200 for this first partition; that lets the NAS automatically include it in the RAID 1 swap. The partition is 2 GB in size.
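
Roughly, the interactive gdisk steps for that first partition look like this (a sketch, with the new drive assumed to be /dev/sdd and the prompts abbreviated):

    gdisk /dev/sdd
    Command (? for help): n
    Partition number (1-128, default 1): 1
    First sector: <Enter to accept the default>
    Last sector: +2G
    Hex code or GUID (L to show codes, Enter = 8300): 8200
    Command (? for help): w

The 8200 hex code is what marks it as Linux swap; writing the table with w can wait until all four partitions have been created.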

The second partition is the RAID 5 member (/dev/md1). The partition type for this and the remaining partitions can be left at the gdisk default, and it takes up the rest of the drive space. There is no need to format this partition; the RAID rebuild should take care of that. By default there will be links from
/mnt/HD/HD_a2/Public → /shares/Public,
/mnt/HD/HD_a2/Smartware → /shares/SmartWare,
/mnt/HD/HD_a2/TimeMachine Backup → /shares/TimeMachine Backup,
and /mnt/HD/HD_a2/Volume_1 → /shares/Volume_1.
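
Before assembling anything, it's worth confirming that the second partitions really carry the RAID 5 metadata; a quick check on one of the good drives (device names as above) looks like:

    # Show the RAID superblock on one of the good drives' second partition
    mdadm --examine /dev/sda2

    # See which arrays the kernel has already picked up
    cat /proc/mdstat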

The third partition doesn't appear to be used. I think this is the source of the "missing space" that some posts mention. It is 1 GB in size.

The fourth partition is a non-RAID ext4 partition that holds a picture database on one of the disks. I have a four-disk RAID 5, and the four partitions are mounted at /mnt/HD_a4, /mnt/HD_b4, /mnt/HD_c4, and /mnt/HD_d4. The photo data is on /mnt/HD_a4. I was able to format the fourth partition on the new drive by running mkfs.ext4 /dev/sdd4. It is 1 GB in size. Generally only the first drive's partition 4 is used, and there will be a link from /mnt/HD_a4 → /shares/.wdphotos.
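
A minimal sketch of that step, assuming the new disk is the fourth one (sdd), so its fourth partition is /dev/sdd4:

    # Format the small non-RAID fourth partition on the new drive
    mkfs.ext4 /dev/sdd4

    # Only the first drive's copy is normally linked into /shares
    ls -ld /mnt/HD_a4 /shares/.wdphotos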

I was able to assemble the RAID by running mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2. To manually force a rebuild, I ran mdadm /dev/md1 --remove /dev/sdd2 (the fourth disk was the new one to be rebuilt) and then mdadm /dev/md1 --add /dev/sdd2 to add it back in as a spare. This kicked off the rebuild of the RAID 5. I could check the status of the rebuild by looking at /proc/mdstat.
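
Put together, the whole assemble/rebuild sequence looked roughly like this (device names as above; double-check them against your own layout before running anything):

    # Assemble the RAID 5 from the four second partitions
    mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    # Kick off a rebuild onto the new (fourth) disk: remove it, then re-add it as a spare
    mdadm /dev/md1 --remove /dev/sdd2
    mdadm /dev/md1 --add /dev/sdd2

    # Watch the rebuild progress
    cat /proc/mdstat

The remove/add pair is what makes md treat the new partition as a fresh spare and start resyncing onto it.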

However, I wasn't able to get the NAS to do RAID roaming after resetting the system firmware to default settings. I even did the 40-second reset with the button on the back, and still no RAID roaming. This may have been because I had put in four test disks earlier to set up a different, working RAID 5. That was my clue: I found a file named /usr/local/config/hd_confg.xml that held the working RAID 5 info and had not been updated for the old RAID 5. I updated the contents of the file with the old RAID info and rebooted. The NAS then recognized the old RAID 5 and briefly started to rebuild it. The message didn't stay up, since I had already done that step manually from the command line earlier. Anyway, it appears I have the NAS fixed, going from a bad RAID 5 and a dead DL4100 motherboard to a working PR4100.
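
As a final check after the reboot, mdadm itself can confirm that both arrays are clean with all four members active:

    # Data array (RAID 5)
    mdadm --detail /dev/md1

    # Swap array (RAID 1)
    mdadm --detail /dev/md0

    cat /proc/mdstat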

I had a DL4100 fail last week and ordered a PR4100 as a replacement. I believe the volume is RAID 4 or 5, connected to a server over one of the Ethernet ports as an iSCSI volume.

This is the second DL4100 I've had fail. The first one died just barely under warranty, and then its replacement failed last week. Have the PR4100s been out long enough to get an idea of the failure rate?

If it were up to me, I'd go a different route than Western Digital this time, but I feel like I'm forced into it if I have any hope of recovering my data.