SATA HDD issues

How did you manage to get the array to assemble automatically after a reboot of the Omnia?
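(For reference, the usual way on a generic Linux system is to record the array so it can be assembled at boot; a minimal sketch, assuming the config lives at /etc/mdadm.conf — the path may differ on Turris OS:)

mdadm --detail --scan >> /etc/mdadm.conf   # append the ARRAY definition of the existing array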

Thanks for the hint. I decided to use two 1 TB drives I have in my gaming PC for setup and testing, and when there isn't enough space any more, I'll buy a new one :slight_smile:

I recently bought a WD Red 2 TB for my own NAS (not the Omnia NAS, but my own build).

I have been using WD Green 2 TB drives for the last 5 years, almost 24/7. One of the five HDDs died (he will be missed). Then I learned about vibrations: especially in a NAS, where the drives sit on top of each other, the vibrations cause hard-disk failures. So in place of the fifth one I put the WD Red 2 TB, which is made specifically for NAS systems to resist vibrations. So far it works great.

I'm planning to buy HGST in the future, although their biggest NAS drive is sadly still 6 TB. My goal is 10 TB per disk :). 5 disks = 50 TB raw, which in RAID 6 = 30 TB usable. I guess that should be enough :).

What are you planning to put on those HDDs? Are you also using RAID, or just both as JBOD?

Is there an easy step-by-step guide on how to set up RAID 1 using mdadm? I've found this, which perhaps looks useful for the Omnia too: http://unix.stackexchange.com/questions/199101/debian-mounting-a-raid-array.
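(The short version is a minimal sketch like the one below, assuming /dev/sda1 and /dev/sdb1 are empty partitions; adjust device names to your setup:)

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # destroys data on both partitions
cat /proc/mdstat     # watch the initial resync
mkfs.ext4 /dev/md0   # filesystem on top of the mirror
mount /dev/md0 /mnt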

Any advice from those who already have RAID 1 up & running on their NAS boxes?

Thank you.

I have the same issue. I have two 3 TB WD Red disks, and after creating a RAID 1 array the resync speed slowly drops, with an I/O error after a while :confused: Please add an official RAID tutorial. Without RAID the NAS perk is useless…

After the mdadm problems I decided to use btrfs. It works…
mkfs.btrfs -m raid1 -d raid1 -f /dev/sda1 /dev/sdb1
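(The mirror can then be mounted via either device; a quick sanity check, assuming /mnt as the mount point:)

mount /dev/sda1 /mnt
btrfs filesystem show /mnt   # should list both devices
btrfs filesystem df /mnt     # Data and Metadata should report RAID1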

IMHO using btrfs just hides the problem, as it does not put such a high load on the disks during setup. The error appears as well if you transfer a lot of data to a btrfs RAID.


Has anyone tried a different SATA controller?
Or are we all using the supplied one?

I think it is this one, the ASM1061 (A3 revision): http://www.asmedia.com.tw/eng/e_show_products.php?item=118

Maybe upgrading the firmware in the ASM1061 could improve performance.
I've seen this approach used on a different board (the MQMaker WiTi).
Unfortunately this can only be done by desoldering the chip or flashing it in place with a Pomona clip.
http://files.homepagemodules.de/b602300/f13t239p31942n2_yfglaZst.zip
Warning! You do this at your own risk!!!

I remember something about soldering and working in Eagle/OrCAD from my wild youngster days; there might even be some parts and PCBs in my childhood treasure case ;-).
Nowadays that's too hardcore for me. I can live with two single drives prepared on the side. (…ehm, there is still LVM, which I haven't tried playing with; we'll see once I get the repaired drive back.)

Instead of soldering and such, a different controller might be available (not checked yet, …ltgfm)

I'm having the same problem as the original poster. I have created two filesystems, one on each disk, and I'm filling them as fast as I can. I have not been able to trigger the problem that way yet. It's sad that mdadm doesn't seem to work, or rather that it triggers some behaviour in the SATA card. I might try btrfs and see how that goes.
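(For anyone wanting to reproduce the load, a crude sketch that writes to both disks in parallel, assuming the two filesystems are mounted at /mnt/a and /mnt/b:)

dd if=/dev/zero of=/mnt/a/fill bs=1M count=100000 &   # ~100 GB sequential write to the first disk
dd if=/dev/zero of=/mnt/b/fill bs=1M count=100000 &   # same to the second, at the same time
wait
dmesg | tail   # look for ata/IO errors afterwards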

Same issue here. I was not able to get the md array to assemble automatically after reboot; some manual intervention was always needed, so I chose the btrfs way. I had 1x 3 TB WD Red full of data, so I ordered another new one, put it in the Omnia, created btrfs there and moved all the data over. Then I put the original disk in the Omnia and let btrfs balance create the RAID 1 array.
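(In commands, that route looks roughly like the sketch below; device names are examples:)

btrfs device add /dev/sdb1 /mnt                            # add the old disk to the filesystem
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # rewrite data and metadata as RAID1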

After approx. 2 TB of synchronization (~20 hours), the operation failed, and I saw the same errors in dmesg as the author of this thread.
I had to reboot the Omnia to get the filesystem mounted as "rw" again. A nice feature of btrfs is that the balance command can continue from a previous run (even a failed one). But during synchronization of the rest of the data (~0.4 TB), the operation failed 5 times.
(It failed even after just 10 minutes of balancing, despite a 2-hour pause beforehand to cool down.)

Now I had a fully synchronized array, but as I was not happy about the situation above, I tried some more tests. I generated an md5sum for each file on my RAID 1 array. This time the array failed after ~1 TB of data.
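(The read test was essentially just checksumming everything; a sketch:)

find /mnt -type f -exec md5sum {} + > /tmp/checksums.md5   # reads every file once; errors show up in dmesg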

No S.M.A.R.T. errors.
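(Checked with smartctl from smartmontools, e.g.:)

smartctl -H /dev/sda   # overall health self-assessment
smartctl -A /dev/sda   # attributes: reallocated sectors, CRC errors, temperature, ...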


I'm copying here what I wrote in another thread so people can comment on it.

My main theory: there is not enough bandwidth for two high-speed SATA connections at the same time, and the first connection starves the second one sooner or later.

Mini PCIe has a maximum speed of 2.5 Gbps (more precisely 2.5 gigatransfers per second; after 8b/10b encoding that is about 2 Gbps usable). Two SATA 1 interfaces are 3 Gbps together, two SATA 2 are 6 Gbps, and two SATA 3 are 12 Gbps. All of those are more than what mini PCIe can provide.

The SATA card in the NAS perk is actually a RAID card, so it could provide enough bandwidth for two SATA 2 or two SATA 3 drives if used as such (the mirroring would then happen on the card, so only one data stream crosses the PCIe link). But if you implement RAID in software, there is not enough bandwidth for two SATA 2 or two SATA 3 connections.

I haven't seen a Linux driver for the RAID functionality of the SATA card, so if there is no such driver, the maximum speed is 2.5 Gbps for one drive or 1.25 Gbps each for two drives.

And as a workaround, you can try whether "libata.force=1.5" or "libata.force=noncq" as a kernel boot parameter helps.
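(On the Omnia the kernel command line comes from the U-Boot environment, so one possible way to test this from the U-Boot serial console is sketched below; variable names can differ between U-Boot versions, so check printenv first:)

printenv                                          # inspect the current boot setup
setenv bootargs "${bootargs} libata.force=noncq"  # append the workaround
saveenv
reset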


The bandwidth theory seems plausible to me. I didn't experience any problems while building an mdadm array on two old WD VelociRaptors; it showed 110 MB/s while building. But with one Seagate NAS drive and one WD Red the combination failed; mdadm showed 160 MB/s. VelociRaptor + Seagate also failed.
/dev/sdb wasn't responding after the failure. I had to reboot to get it working again.

Then I tried to build the array using my PC and had the same problem when connecting both disks to the lowest two SATA connectors on an Asus X99-A motherboard. I am not certain, but I believe those are handled by a separate controller, not by the chipset. When I used the highest and the lowest connector, the array built OK.
Now it assembles fine in the Turris. I tried to copy about 1 TB of data from the old NAS and that also finished OK, but the speed was only about 60 MB/s.
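(One way to check which controller a given disk hangs off, a generic Linux sketch:)

readlink -f /sys/block/sda   # the PCI address in the resolved path identifies the controller
readlink -f /sys/block/sdb
lspci | grep -i sata         # list the SATA controllers themselves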

I would like to try the suggested boot params, but as far as I know you can pass them to the kernel in OpenWrt only by setting them as a parameter during kernel compilation. I don't have time for that.

Is anyone brave and experienced enough to try the kernel parameters mentioned by white?
I don't have the knowledge to test it myself. But it's a shame I can't get it to run via mdadm…

I've had my NAS perk for a month now, and I just received my router today. After putting it together with a couple of 4 TB Seagate Constellations, I attempted to create a RAID 1 array. I got the same results I see here, with the I/O error and the failed RAID build. This topic seems fairly old already, over a month. Is anyone from the Turris team going to address this?

Same issue here. I have twin 4 TB WD Red NAS HDDs. I have tried multiple options, but I always end up with errors: either /dev/sdb shows up as faulty under an mdadm RAID 1, or I get the "Warning, could not drop caches" message with no volume created after trying to create a btrfs volume (1 drive, /dev/sda) or a RAID with the two drives.

Can someone from CZ.NIC review this topic and provide a solution for RAID creation using >2 TB HDDs on the Turris Omnia (NAS)?

I am also interested in a response from CZ.NIC, since I tried to use two 2 TB Samsung drives, and while building a software RAID with mdadm I got the same result as described above. Too bad that the NAS perk does not work out of the box.

I bought two 4 TB WD Gold drives (WD4002FYYZ) for RAID 1, but it fails about 1 minute into syncing between the drives. I'm also interested in a solution to this.