When ZFS was created, it was designed to be the last word in file systems.
At a time when most file systems were 32-bit, the creators of ZFS decided to jump straight to 128-bit to future-proof it. If you want to try ZFS on Linux, you can use it as your storage file system. Recently, Ubuntu added support for ZFS. Read more about using ZFS on Ubuntu. This article has sung the praises of ZFS.
Now let me tell you about one quick problem with ZFS: expanding storage with RAID-Z can be expensive, because of the number of drives you need to purchase to add space. If you found this article interesting, please take a minute to share it on social media, Hacker News or Reddit. My name is John Paul Wohlscheid. I'm an aspiring mystery writer who loves to play with technology, especially Linux.
You can catch up with me at my personal website. You can check out my ebooks here. I also write a newsletter about the stuff that most history books miss. Check it out. I just went through a nightmare with LVM2 shared from a Linux server to MacBooks. It would not create the full size of the entire storage pool. I installed ZFS and configuration was a snap; the entire 7TB was configured without a problem, and performance surpassed LVM2, copying the same amount of data in just 10 minutes.
This sowed further doubt about the future of ZFS, since Oracle did not enjoy wide support from open-source advocates. Many remain skeptical of deduplication, which hogs expensive RAM even in the best-case scenario. There are only three ways to expand a storage pool. So what are your other choices? Linux has a few decent volume managers and filesystems, and most folks use a combination of LVM or MD and ext4.
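For concreteness, the commonly cited expansion options look like this at the command line. This is a sketch only: the pool name `tank` and the device names are illustrative assumptions, not taken from the article.

```shell
# Option 1: add a whole new vdev (e.g. another 4-disk RAID-Z) to the pool.
# Note: this means buying a full set of drives at once.
zpool add tank raidz sde sdf sdg sdh

# Option 2: replace each existing disk, one at a time, with a larger one.
# The extra space only appears after EVERY disk in the vdev is replaced.
zpool set autoexpand=on tank
zpool replace tank sda sdi   # wait for the resilver, then repeat for sdb, sdc, ...

# Option 3: back up the data, destroy the pool, and recreate it with more disks.
zpool destroy tank
zpool create tank raidz sda sdb sdc sdd sde
```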
The study talks about 'checksum errors' and 'identity discrepancies'. It seems that the problem lies with 'identity discrepancies'. Such errors would cause silent data corruption. The 'identity discrepancies' are events where - for example - a sector ends up at the wrong spot on the drive, so the sector itself is ok, but the file is still corrupt. That would be a true example of silent data corruption.
And ZFS would protect against this risk. Of the roughly 1.5 million drives in the study, I have difficulty determining how many are SATA drives. Remember that this is a worst-case scenario.
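To see why an end-to-end checksum catches an identity discrepancy, consider this toy sketch (my own illustration, not ZFS code): the checksum is stored with the block *pointer*, so a read is verified against what should live at that address, and a sector that is internally intact but in the wrong place still fails verification.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Content hash used to verify a block on read."""
    return hashlib.sha256(data).hexdigest()

# The filesystem records a checksum alongside each block pointer.
block_a = b"contents of block A"
block_b = b"contents of block B"
pointers = {"A": checksum(block_a), "B": checksum(block_b)}

# Misdirected write: block B's data lands where block A belongs.
# Each sector is internally fine, so a drive-level check passes.
disk = {"A": block_b, "B": block_b}

def read_block(addr: str) -> bytes:
    data = disk[addr]
    if checksum(data) != pointers[addr]:
        # The data does not match what this address should hold.
        raise IOError(f"checksum mismatch at {addr}: silent corruption detected")
    return data

print(read_block("B"))   # verifies fine: right data at the right address
# read_block("A")        # would raise: intact sector, wrong identity
```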
To me, this risk seems rather small. If I haven't messed up the statistics, you would need a thousand hard drives running for 17 months to see a single instance of silent data corruption. So unless you're operating at that scale, I would say that silent data corruption is indeed not a risk a DIY home user should worry about.
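The back-of-envelope arithmetic behind that claim can be spelled out. The one-event-per-1,000-drives-per-17-months figure is the article's; the drive count for the hypothetical home NAS is my assumption.

```python
# One silent corruption per 1,000 drives over 17 months:
drives = 1_000
months = 17
drive_months_per_event = drives * months      # 17,000 drive-months per event

# Expected events per year for a hypothetical 4-drive home NAS:
home_drives = 4
events_per_year = home_drives * 12 / drive_months_per_event
print(f"{events_per_year:.5f} expected events per year")

# Equivalently, the mean time between events for that NAS:
print(f"about {1 / events_per_year:.0f} years between events")  # roughly 354
```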
Let's take the example of RAID 5. Your NAS can survive a single drive failure, and one drive fails. At some point you replace the failed drive and the RAID array starts the rebuild process. During this rebuild the array is not protected against drive failure, so no additional drive may fail. If a second drive encounters a bad sector, or what people today call an Unrecoverable Read Error (URE), during this rebuild, in most cases the RAID solution will give up on the entire array.
A single 'bad sector' may have the same impact as a second drive failure.
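A rough model shows why a single bad sector during a rebuild is taken so seriously. The numbers here are my illustrative assumptions (a consumer-class URE specification of 1 error per 1e14 bits read, and a 4-drive RAID 5 of 4 TB disks), not figures from the article.

```python
# Probability of hitting at least one URE while rebuilding a RAID 5 array.
ure_rate = 1e-14              # unrecoverable read errors per bit read (assumed spec)
bits_to_read = 3 * 4e12 * 8   # rebuild must read all 3 surviving 4 TB drives

# Chance that at least one of those bits is unreadable,
# treating each bit read as an independent trial:
p_failure = 1 - (1 - ure_rate) ** bits_to_read
print(f"chance of a URE during rebuild: {p_failure:.0%}")  # about 62%
```

Real drives fail in correlated bursts rather than bit-by-bit, so this independence assumption is pessimistic, but it illustrates why large-drive RAID 5 rebuilds are considered risky.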