Rolf, here's what I know about ZFS. Full disclosure: I'm not a file system expert and some of the things I write might be inexact (at best), but I've worked for 2 years in a Solaris environment running ZFS and had the chance to participate in a migration-to-Linux project where we had to look for alternatives. I also want to preface this by saying that I absolutely loved working with ZFS and I'm glad it's getting support outside the Solaris world.
Presentation of ZFS
ZFS is an old project. Well, it's 2005 old. From what I understand, the project originally aimed at supporting very large files and a very large number of files. Since then other file systems have caught up on file size, but ZFS has gained a lot of interesting features.
Now it's important to note that most (all?) of these features can be found in other alternative projects, but it's definitely nice to have them packaged in a single project. Here are some of the coolest features:
- Data integrity: Aggressive checksumming allows corruption to be detected early and, in some cases, recovered from. It's crucial to note that alternative file systems (like ext4 and XFS) have similar mechanisms to combat corruption.
- Space saving: By using techniques like copy-on-write, data compression, and deduplication, ZFS can detect redundant blocks of data and save a lot of space (there's a toy sketch of the idea below).
- Software volume management: This is common on virtually every other operating system through RAID and LVM, but ZFS has packaged it into the file system itself. This is the only visible change users will notice when moving to ZFS.
This is not an exhaustive list, but it's important to realize that the majority of the changes are on the kernel side and you won't feel much of a difference as a user. You get very cool volume management features, but the same can be achieved quite easily with other file systems on top of software RAID and LVM.
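To give a rough idea of what block checksumming and deduplication mean, here's a toy Python sketch of a content-addressed block store. This is purely conceptual and not how ZFS actually works internally (ZFS keeps per-block checksums in a Merkle tree of block pointers, among other things); the ToyBlockStore class and its methods are made up for illustration.

```python
import hashlib
import zlib

class ToyBlockStore:
    """Toy content-addressed block store: every block is checksummed,
    and identical blocks are stored only once (deduplication)."""

    def __init__(self):
        self.blocks = {}    # checksum -> compressed block data
        self.refcount = {}  # checksum -> number of references

    def write(self, data: bytes) -> str:
        # Checksum the block; the checksum doubles as the dedup key.
        key = hashlib.sha256(data).hexdigest()
        if key in self.blocks:
            # Identical block already stored: just bump the refcount.
            self.refcount[key] += 1
        else:
            # New block: compress it and store it.
            self.blocks[key] = zlib.compress(data)
            self.refcount[key] = 1
        return key

    def read(self, key: str) -> bytes:
        data = zlib.decompress(self.blocks[key])
        # Verify integrity on every read, the way ZFS checks its checksums.
        if hashlib.sha256(data).hexdigest() != key:
            raise IOError("checksum mismatch: block is corrupted")
        return data

store = ToyBlockStore()
k1 = store.write(b"hello world" * 1000)
k2 = store.write(b"hello world" * 1000)  # duplicate: stored only once
assert k1 == k2 and len(store.blocks) == 1
print(store.read(k1)[:11])  # b'hello world'
```

The point is just that once every block has a strong checksum, corruption can be detected on read, and identical blocks can be recognized and stored only once.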
OpenZFS
Originally, ZFS was a proprietary project developed by Sun for Solaris. After it was included in OpenSolaris, there were a lot of efforts to port the project to different platforms. Apple got involved for a while to port it to OS X; I don't know why they dropped it, but I believe it was around the time they dropped all of their server activity. FreeBSD had one of the first successful ports. Linux took longer, because of the usual licensing reasons, but serious porting efforts are now well under way.
In 2013, all the different porting efforts were brought together under a single umbrella project called OpenZFS.
ZFS on the desktop
I mentioned that I'm skeptical about ZFS on the desktop. The reason is that, in my experience, ZFS is very resource hungry, especially on RAM. All this compression, copy-on-write, constant checksumming, and virtual volume management comes at a cost. Sure, if you're running a high-end desktop with over 20GB of memory and several SSDs for caching, you could maybe get away with it, but on a more standard laptop I would expect to take a performance hit.
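To make the RAM point a bit more concrete, here's a back-of-the-envelope estimate for deduplication alone. The figures are rough, commonly quoted ballparks (around 320 bytes of RAM per deduplicated block, with the default 128 KiB recordsize), not measurements from any specific setup, so treat the result as an order of magnitude only.

```python
# Back-of-the-envelope estimate of the dedup table (DDT) memory footprint.
BYTES_PER_DDT_ENTRY = 320   # commonly quoted ballpark, not a measured value
RECORD_SIZE = 128 * 1024    # ZFS default recordsize: 128 KiB

def ddt_ram_estimate_gib(pool_bytes: int) -> float:
    """Rough RAM (in GiB) needed to keep the whole dedup table in memory."""
    n_blocks = pool_bytes / RECORD_SIZE
    return n_blocks * BYTES_PER_DDT_ENTRY / 2**30

# A 1 TiB pool of 128 KiB blocks already needs roughly 2.5 GiB of RAM
# for the dedup table alone, before the read cache or anything else.
print(f"{ddt_ram_estimate_gib(2**40):.1f} GiB")
```

And that's deduplication alone; the ARC read cache will also use as much free RAM as it can get, which is fine on a dedicated server and less fine next to a browser and an IDE.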
On a server it's reasonable to assume you have that kind of resources available. On the desktop I feel like you could get by with manual LVM/RAID and maybe try something like btrfs for copy-on-write? Back in 2011, when I was worrying about this, btrfs was not ready and we opted for XFS.
Note that it's been a while since I've played with ZFS on Linux. My laptop definitely took a performance hit back then. However, I'm confident that the good folks at Canonical know what they're doing, and if they're now including it by default, there's a good chance that things have improved drastically since then.
Conclusion
I'm glad ZFS is getting this kind of public exposure, and I believe it's a really, really good project. However, unless you're actually taking advantage of features like zpool and RAIDZ, I still believe you're better off using our traditional Linux file systems.