RAID 1 is built, but drive is "faulty spare" even though SMART is OK

Hello everyone.

I am aware that this forum is more or less flooded with RAID issues, but my problem is different from the others. FYI, I am using two 4 TB Western Digital Blue drives (essentially the successor to the old Green series), and I am not a programmer, but I feel quite comfortable on the command line and proficient with Google. So before posting here I spent quite a few hours trying to get it to work by myself.

Here is what I did: I created a RAID 1 over sda1 and sdb1 by running

mdadm -Cv /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1

I left the build running and when I returned home md0 was nowhere to be found. So I restarted the Turris and used

mdadm --assemble --scan

to assemble the array. It creates md0 as expected and starts a resync, but after a while sdb1 shows up as “faulty spare” when I check the array via

mdadm --detail /dev/md0

This would all be fine if it were not a brand-new drive, and SMART tells me it is healthy when running:

smartctl -H /dev/sdb

Does anyone have an idea what I am doing wrong? I read somewhere that consumer-level drives sometimes keep retrying faulty sectors much longer than enterprise-level disks, and this can cause the RAID to drop the drive.
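If that is the cause, I guess it would be about the drive’s error-recovery timeout (TLER/ERC). Assuming the drive supports SCT ERC at all, checking and shortening the timeout should look roughly like this:

# Show the current SCT Error Recovery Control timeouts (if the drive supports them)
smartctl -l scterc /dev/sdb

# Limit read/write error recovery to 7 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sdb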

Another problem I have is that I cannot mount the RAID, as “mount” tells me it “can’t find /dev/md0 in /etc/fstab”.
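My guess is that mount only searches /etc/fstab when it is given just the device, so specifying the mount point explicitly (after creating a filesystem on the array, if there is none yet) should work. /mnt/nas is only a placeholder path here:

# Only needed once, if the array has no filesystem yet (ext4 just as an example)
mkfs.ext4 /dev/md0

# Mount with an explicit mount point instead of relying on /etc/fstab
mkdir -p /mnt/nas
mount /dev/md0 /mnt/nas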

Edit: OK, after removing the RAID and starting over from scratch, I now know that it starts syncing right after creation but aborts after a short period of time. During that time, the estimated finish time from

cat /proc/mdstat

is ever increasing.

So it seems like it is unable to sync the two drives, for whatever reason.
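Next time it aborts, I will check the kernel log for SATA link errors around that moment; something along these lines should show whether a drive is being reset (just my plan, not something I have run yet):

# Look for ATA/SATA errors and md events in the kernel log
dmesg | grep -iE 'ata[0-9]|sdb|md0|reset|error'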

This is exactly what I’ve encountered here too.
And others here


Hi,
I’m having the same problems. Try to make the two GPT partitions for the RAID with gdisk. Sadly, gdisk is not available on Turris or OpenWRT, so do it on a computer.
For boot, add to /etc/rc.local:
mdadm --assemble --scan
and, if using LVM:
vgscan
vgchange -ay
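Put together, my /etc/rc.local looks roughly like this (the LVM lines only if you actually use LVM; your setup may differ):

# Assemble any mdadm arrays found on the disks
mdadm --assemble --scan

# Only needed when the array holds LVM volumes
vgscan
vgchange -ay

exit 0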

Seems to work now. I’m testing


Seems I am not as proficient in Google as I thought. Thanks guys, will give it a try and keep an eye on the threads.

For the GPT partitions I had already used my computer and GParted Live, after figuring out why fdisk would only create partitions of 2 TB max (the MBR partition table limit).

I am reporting the same issue with mdadm and RAID 1. I am currently using BTRFS instead, uploaded approx. 600 GB of data from an external USB disk, did a btrfs scrub and a btrfs check, and it looks OK so far.
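For reference, this is roughly what I ran (the mount point and device are from my setup; btrfs check wants the filesystem unmounted):

# Verify checksums of all data on the mounted filesystem
btrfs scrub start -B /mnt/nas

# Offline consistency check; point it at one member device
# while the filesystem is NOT mounted
btrfs check /dev/sda1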


An mdadm array built on a PC assembles OK on the Omnia. My theory is that the SATA card doesn’t have enough bandwidth over miniPCIe, which causes mdadm create to fail.

You can create Get in frisk but you must change Tarlton table by g param and them sweater partitions.

Bear in mind that I’m no programmer, so the second paragraph is only gibberish to me :(
I am currently looking up guides on how to create BTRFS RAID 1.
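From what I have found so far, it should be a single command; I have not tried this yet, and sda1/sdb1 are simply the partitions from earlier in the thread:

# Create a BTRFS filesystem with both data and metadata mirrored (RAID 1)
mkfs.btrfs -f -d raid1 -m raid1 /dev/sda1 /dev/sdb1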

Seems like the creation is simple, but I read that you have to add the drives to fstab if they should be mounted during boot. Then, in the OpenWRT wiki, I read:

BTRFS, JFS, UBI, XFS and potentially other (F2FS…) are not supported in /etc/config/fstab. Use manual scripting.

Dunno what that means…

Sorry. The second paragraph is gibberish created by autocorrect and me not checking it before posting. Basically, it says you can create a GPT in fdisk with the g command.
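In other words, something like this in an interactive fdisk session (a util-linux fdisk on a PC; replace /dev/sdb with your disk):

fdisk /dev/sdb
  g    # create a new, empty GPT partition table
  n    # create a new partition (accept the defaults for the whole disk)
  w    # write the table to disk and exit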

BTRFS, JFS, UBI, XFS and NTFS can’t be mounted through the fstab LuCI config. I solved this problem by creating an rc script.

Those are executable files in /etc/init.d with a predefined structure. You can create, for example, /etc/init.d/btrfs-mount.

It can look something like this:

#!/bin/sh /etc/rc.common

# Position in the boot/shutdown order (rc.d symlink priorities)
START=45
STOP=50

start() {
	echo "Mounting fs"
	/bin/mount -t btrfs /dev/md0 /mnt/nas
}

stop() {
	echo "Unmounting fs"
	/bin/umount /mnt/nas
}

Then you make it executable:

chmod +x /etc/init.d/btrfs-mount

And then enable it:

/etc/init.d/btrfs-mount enable

You can also enable it in LuCI, in the System > Startup section (not sure exactly what it is called in the English version).

Now it will run every time you start the router and every time you shut down or reboot it.


Thank you very much. I will give it a try.

Edit: Interesting. The RAID is working, and mounting only one of the two partitions within the RAID apparently mounts both. At least the data is being written to both drives.

Furthermore, I do not seem to have an issue with mounting the FS at boot. I only added one of the two partitions in LuCI but the data appears to be written to both. When I use

btrfs filesystem show

both partitions have the same used capacity. Am I misinterpreting something?
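I guess I can also check whether data and metadata really use the RAID 1 profile with something like this (/mnt/nas being my mount point):

# Shows the allocation profile per block group type,
# e.g. "Data, RAID1" and "Metadata, RAID1"
btrfs filesystem df /mnt/nas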