A Complete Guide to FreeNAS Hardware Design, Part II: Hardware Specifics

Written by Joshua Paetzel.

General Hardware Recommendations

I’ve built a lot of ZFS storage hardware and have two decades of experience with FreeBSD. The following are some thoughts on hardware.


Intel Versus AMD

FreeNAS is based on FreeBSD. FreeBSD has a long history of working better on Intel than AMD. Things like (but not limited to) the watchdog controllers, USB controllers, and temperature monitoring all have a better chance of being well supported when they are on an Intel platform. This is not to say that AMD platforms won’t work, that there aren’t AMD platforms that work flawlessly with FreeNAS, or even that there aren’t Intel platforms that are poor choices for FreeNAS, but all things being equal, you’ll have better luck with Intel than AMD.

The Intel Avoton platforms are spendy but attractive: ECC support, low power, AES-NI support (a huge boon for encrypted pools). On the desktop side of things, there are Core i3 platforms with ECC support, and of course there are many options in the server arena. The single socket E3 Xeons are popular in the community, and of course for higher end systems, the dual package Xeon platforms are well supported.

Storage Controllers

LSI is the best game in town for add-on storage controllers. Avoid their MegaRAID solutions and stick with their HBAs. You’ll see three generations of HBAs commonly available today. The oldest (and slowest) are the SAS2008-based I/O controllers, such as the 9211 or the very popular IBM M1015. The next generation was based on the SAS2308, which added PCIe 3.0 support and more CPU horsepower on the controller itself; an example here is the 9207. Both the 2008- and 2308-based solutions are 6Gbps SAS parts. The newest generation of controllers are 12Gbps parts such as the 9300.

The FreeNAS driver for the 6Gbps parts is based on version 16 of the stock LSI driver, with many enhancements that LSI never incorporated into their driver. In addition, many of the changes after version 16 were specifically targeted at the Integrated RAID functionality that can be flashed onto these cards. As a result, “upgrading” the driver manually to the newer versions found on the LSI website can actually downgrade its reliability or performance. I highly recommend running version 16 firmware on these cards: it’s the configuration tested by LSI, and it’s the configuration tested by the FreeNAS developers. Running newer firmware should work; running older firmware, however, is neither recommended nor supported, as there are known flaws that can surface when the FreeNAS driver runs against a controller with older firmware. FreeNAS will warn you if the firmware on an HBA is incompatible with the driver. Heed this warning or data loss can occur. The newer 12Gbps parts use version 5 of the LSI driver, and cards using this driver should run version 5 of the firmware.
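As a quick sanity check, the pairing rules above can be codified in a few lines. This is only an illustrative sketch, not an official tool; the chip names come from the text (SAS2008, SAS2308), and mapping the 9300 to the SAS3008 chip is my assumption.

```python
# Recommended firmware phase per LSI controller chip, per the guidance above.
# SAS3008 for the 9300-series is an assumption, not stated in the article.
RECOMMENDED_FIRMWARE = {
    "SAS2008": 16,   # e.g. LSI 9211, IBM M1015 (6Gbps)
    "SAS2308": 16,   # e.g. LSI 9207 (6Gbps)
    "SAS3008": 5,    # e.g. LSI 9300 (12Gbps)
}

def check_firmware(chip: str, installed_phase: int) -> str:
    """Return a rough verdict on an HBA firmware phase, per the guidance above."""
    wanted = RECOMMENDED_FIRMWARE.get(chip)
    if wanted is None:
        return "unknown controller"
    if installed_phase < wanted:
        return "older than tested: not supported, upgrade"
    if installed_phase > wanted:
        return "newer than tested: should work, but untested"
    return "matches the tested configuration"

print(check_firmware("SAS2008", 16))  # matches the tested configuration
print(check_firmware("SAS2308", 20))  # newer than tested: should work, but untested
```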

Most motherboards have some number of SATA ports built in. Certain models of Marvell and JMicron controllers are used on motherboards with large numbers of SATA ports. Some of these controllers have various compatibility issues with FreeNAS, and some also carry their own forms of RAID. As a general rule, the integrated chipset AHCI SATA ports have no issues when used with FreeNAS; they just tend to be limited to 10 ports (and often far fewer) on most motherboards.

Hard Drives

Desktop drives should be avoided whenever possible. In a desktop, a failed I/O means the data is lost, so desktop drives will retry a failing I/O endlessly. In a storage device, you want redundancy at the storage level instead: if an individual drive fails an I/O, ZFS will retry it on a different drive, and the faster the drive gives up, the faster the array can cope with the hardware fault. For larger arrays, desktop drives (yes, I’ve seen attempts to build 1PB arrays with ZFS and desktop drives) are simply not usable in many cases. For small to medium size arrays, a number of manufacturers produce a “NAS” hard drive that is rated for arrays of modest size (typically 6-8 drives or so). These drives are worth the additional cost.
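A toy model makes the difference concrete. The specific numbers are illustrative assumptions on my part, not from the article: NAS-rated drives typically bound error recovery (often around 7 seconds via TLER/ERC), while a desktop drive may grind on a bad sector for minutes, stalling the whole pool until it gives up.

```python
# Illustrative sketch: how long the pool waits before ZFS can retry an I/O
# on a redundant drive. The limits below are assumed, typical values.
def array_stall_seconds(drive_recovery_limit_s: float) -> float:
    """Worst-case stall per bad sector: the pool can't redirect the read
    until the drive reports failure, so the drive's recovery limit bounds it."""
    return drive_recovery_limit_s

nas_stall = array_stall_seconds(7.0)        # ERC-limited "NAS" drive
desktop_stall = array_stall_seconds(120.0)  # desktop drive retrying for minutes

print(f"NAS drive: {nas_stall}s, desktop drive: {desktop_stall}s per bad sector")
```

The point is not the exact figures but the ratio: a drive that fails fast lets the redundancy do its job.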

At the high end, if you are building an array with SAS controllers and expanders, consider the nearline 7200 RPM SAS drives, which carry only a small premium over enterprise SATA drives. Running SATA drives behind SAS expanders, while supported, is a less desirable configuration than using SAS end to end, due to the difficulty of translating SATA errors across the SAS bus.

Josh Paetzel
iXsystems Director of IT



A Complete Guide to FreeNAS Hardware Design, Part I: Purpose and Best Practices

Written by Joshua Paetzel.

A guide to selecting and building FreeNAS hardware, written by the FreeNAS Team, is long overdue. For that, we apologize. The delay was due to the depth and complexity of the subject, as you’ll see from the extensive nature of this four-part guide, and to the variety of ways FreeNAS can be utilized. There is no “one-size-fits-all” hardware recipe. Instead, there is a wealth of hardware available, with varying levels of compatibility with FreeNAS, and there are many things to take into account beyond the basic components, from use case and application to performance, reliability, redundancy, capacity, budget, and need for support. This document draws on years of experience with FreeNAS, ZFS, and the OS that lives underneath FreeNAS, FreeBSD. Its purpose is to give guidance on intelligently selecting hardware for use with the FreeNAS storage operating system, taking the complexity of its myriad uses into account, as well as providing some insight into both pathological and optimal configurations for ZFS and FreeNAS.

A word about software defined storage:

FreeNAS is an implementation of Software Defined Storage; although software and hardware are both required to create a functional system, they are decoupled from one another. We develop and provide the software and leave the hardware selection to the user. Implied in this model is the fact that there are a lot of moving pieces in a storage device (figuratively, not literally). Although these parts are all supposed to work together, the reality is that all parts have firmware, many devices require drivers, and the potential for there to be subtle (or gross) incompatibilities is always present.

Best Practices

ECC RAM or Not?

This is probably the most contested issue surrounding ZFS (the filesystem that FreeNAS uses to store your data) today. I’ve run ZFS with ECC RAM and I’ve run it without. I’ve been involved in the FreeNAS community for many years and have seen people argue that ECC is required and others argue that it is a pointless waste of money. ZFS does something no other filesystem available to you does: it checksums your data, it checksums the metadata ZFS itself uses, and it checksums the checksums. But if your data is corrupted in memory before it is written, ZFS will happily write (and checksum) the corrupted data. Additionally, ZFS has no pre-mount consistency checker or tool that can repair filesystem damage. This is very nice when dealing with large storage arrays, as a 64TB pool can be mounted in seconds, even after a bad shutdown. However, if a non-ECC memory module goes haywire, it can do irreparable damage to your ZFS pool, up to and including complete loss of the storage.

For this reason, I highly recommend the use of ECC RAM with “mission-critical” ZFS. Systems with ECC RAM will correct single-bit errors on the fly, and will halt the system before multiple-bit errors can do any damage to the array. If it’s imperative that your ZFS-based system always be available, ECC RAM is a requirement. If it’s only some level of annoying (slightly, moderately…) to restore your ZFS system from backups, non-ECC RAM will fit the bill.
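The ordering problem is the crux of the ECC argument, and a toy sketch shows it clearly. This is not ZFS code, just an illustration of why a checksum computed after in-memory corruption validates the corrupted data:

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, str]:
    """Mimic a checksumming filesystem: store the data with its checksum."""
    return data, hashlib.sha256(data).hexdigest()

def verify_block(data: bytes, checksum: str) -> bool:
    """On read, recompute the checksum and compare."""
    return hashlib.sha256(data).hexdigest() == checksum

good = b"important data"
flipped = bytes([good[0] ^ 0x01]) + good[1:]  # a single bit flipped in RAM

# Corruption *after* the checksum is computed gets caught on read...
data, csum = write_block(good)
assert not verify_block(flipped, csum)

# ...but corruption *before* the write is checksummed as if it were valid,
# so the filesystem can never detect it.
data, csum = write_block(flipped)
assert verify_block(data, csum)
```

ECC RAM closes exactly this window: it catches the bit flip before the checksum is ever computed.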

How Much RAM is needed?

FreeNAS requires 8GB of RAM for the base configuration. If you are using plugins and/or jails, 12GB is a better starting point. There’s a lot of advice about how RAM-hungry ZFS is and how it requires massive amounts of RAM; an oft-quoted number is 1GB of RAM per TB of storage. The reality is, it’s complicated. ZFS does require a base level of RAM to be stable, and that amount grows with the size of the storage: 8GB of RAM will get you through the 24TB range, beyond that 16GB is a safer minimum, and once you get past 100TB of storage, 32GB is recommended. However, that’s just to satisfy the stability side of things.

ZFS performance lives and dies by its caching. There are no good guidelines for how much cache a given storage size with a given number of simultaneous users will need. You can have a 2TB array with 3 users that needs 1GB of cache, and a 500TB array with 50 users that needs 8GB of cache. Neither scenario is likely, but both are possible. The optimal cache size for an array tends to increase with the size of the array, but beyond that guidance, the only thing we can recommend is to measure and observe as you go. FreeNAS includes tools in the GUI and on the command line to see cache utilization. If your cache hit ratio is below 90%, you will see performance improvements by adding cache to the system in the form of RAM or SSD L2ARC (dedicated read cache devices in the pool).
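The stability minimums above are easy to express as a rule of thumb. This is only a sketch of the guidance in this article, not an official formula, and it says nothing about the cache side of sizing:

```python
def min_ram_gb(pool_tb: float, plugins_or_jails: bool = False) -> int:
    """Rough minimum RAM for a stable FreeNAS system, per the ranges above:
    8GB through ~24TB, 16GB beyond that, 32GB past 100TB, and at least
    12GB whenever plugins or jails are in use."""
    if pool_tb > 100:
        base = 32
    elif pool_tb > 24:
        base = 16
    else:
        base = 8
    if plugins_or_jails:
        base = max(base, 12)
    return base

print(min_ram_gb(20))         # 8
print(min_ram_gb(50))         # 16
print(min_ram_gb(200))        # 32
print(min_ram_gb(10, True))   # 12
```

Treat the result as a floor for stability; cache sizing on top of it should be driven by measured hit ratios, as described above.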

RAID vs. Host Bus Adapters (HBAs)

ZFS wants direct control of the underlying storage that it is putting your data on. Nothing will make ZFS more unstable than something manipulating bits underneath ZFS. Therefore, connecting your drives to an HBA or directly to the ports on the motherboard is preferable to using a RAID controller; fortunately, HBAs are cheaper than RAID controllers to boot! If you must use a RAID controller, disable all write caching on it and disable all consistency checks. If the RAID controller has a passthrough or JBOD mode, use it. RAID controllers will complicate disk replacement and improperly configuring them can jeopardize the integrity of your volume (Using the write cache on a RAID controller is an almost sure-fire way to cause data loss with ZFS, to the tune of losing the entire pool).

Virtualization vs. Bare Metal

FreeBSD (the underlying OS of FreeNAS) is not the best virtualization guest: it lacks some virtio drivers, it lacks some OS features that would make it a better-behaved guest, and most importantly, it lacks full support from some virtualization vendors. In addition, ZFS wants direct access to your storage hardware. Many virtualization solutions only support hardware RAID locally (I’m looking at you, VMware), which encourages the worst-case scenario: passing a virtual disk, on a datastore backed by a hardware RAID controller, through to a VM running FreeNAS. This puts two layers between ZFS and your data: the hypervisor’s filesystem on the datastore, and the RAID controller. If you can do PCI passthrough of an HBA to a FreeNAS VM, and get all the moving pieces to work properly, you can successfully virtualize FreeNAS. We even include the VMware guest tools in FreeNAS, mainly because we use VMware to do a lot of FreeNAS development. However, if you have problems, none of the developers run FreeNAS as a production VM, and help will be hard to come by. For this reason, I highly recommend that FreeNAS be run “on the metal” as the only OS on dedicated hardware.

Josh Paetzel
iXsystems Director of IT


FreeNAS vs TrueNAS

Written by Brett Davis.


“What’s the difference between TrueNAS and FreeNAS? Is TrueNAS just FreeNAS installed on a server?” If you look at the software feature list, there aren’t a ton of differences. So really… what’s the difference?

  1. The first difference is the software delivery method: TrueNAS is a purpose-built storage appliance while FreeNAS is freely-downloadable software that requires the user to understand storage well enough to select the correct hardware that is appropriate for their application.
  2. TrueNAS is commercially-supported, while FreeNAS is community-supported.
  3. There are performance and usability optimizations in TrueNAS that are specific to the hardware we use and therefore aren’t included with FreeNAS.
  4. High-Availability (failover) is hardware-dependent and only available in TrueNAS.

But, perhaps more critical to understand than the “what” is the “why”:


We make FreeNAS for when storage is non-critical.

There are certainly many storage applications that don’t require professional support. Applications like home storage, simple office file servers, tertiary backups, home streaming media servers, scratch space, storage experimentation, or any other application where data is fungible; FreeNAS can be the perfect solution for all of them.

We make TrueNAS for when storage is critical.

Storage downtime can equal an instant loss of revenue, which makes building reliable storage a painstaking process: one that requires careful consideration, deep hardware and storage knowledge, and countless hours of testing, and one that is certainly far more difficult than the Software Defined Storage crowd would have you believe. It took us nearly two years to select, design, test, and qualify the myriad hardware components that go into TrueNAS, a purpose-built appliance (software coupled with custom hardware) designed for one specific application: critical storage. Compared to a user-built system that your software vendor knows nothing about, the appliance platform is inherently easier to support when things don’t go your way, because your software vendor is your hardware vendor as well. And when storage is this important to your business, it’s imperative to have a Support Team within arm’s reach who can resolve any issue that arises without first having to wrap their heads around the hardware platform you’ve built.

We make FreeNAS for Open Source flexibility.

For those that have the expertise and the spare time to build and support their own solutions, or for those that want to tinker and learn about storage, FreeNAS is freely-available and unencumbered by license restrictions. The FreeNAS Project has a mature community and a team of developers dedicated to providing the best (open-source) software defined network file storage solution in the world. All we ask in return is that you enjoy the software and contribute when and where you can, which can be as simple as providing feedback, filing bugs, and making feature requests, or as involved as helping us write code.

We make TrueNAS for enterprise stability.

Where FreeNAS is the bleeding edge, TrueNAS is the stable handle. FreeNAS is where technologies are tested and refined; therefore the software undergoes an often rapid and frequent release cycle. TrueNAS, by contrast, contains only the most stable and vetted code, keeping software updates to a minimum and the release cycle methodical.

We make FreeNAS for people who want to “DIY”

Some folks like to do it themselves. Some folks only get satisfaction when building things on their own. Some folks don’t mind downtime when there’s an issue and enjoy perusing the FreeNAS forums for help. Some folks have limited budgets yet still want powerful storage software. And, some folks are storage experts themselves. You’re welcome, guys :)

We make TrueNAS because businesses don’t want to “DIY”

Instead of buying a fleet of delivery trucks, I suppose we could purchase all the components separately, build the trucks ourselves, and fix them when things break. But we’re not a car dealership; we’re a storage company. We’d probably save money up front on the cost of the bare parts, but we’d certainly come out way behind with the time spent figuring out how to put them all together and build a functioning truck, let alone the cost of maintaining it! Most businesses don’t have the time, available hardware, or internal support expertise for a do-it-yourself storage solution; they’re busy focusing on their own missions and business models. But with a 100% software solution, you must build the server yourself. If there is a problem with the server hardware, you can’t look to the software vendor for support, and vice versa if you have software problems. With TrueNAS, you get one throat to choke… ours :)

We make FreeNAS because many are turning to virtualization.

FreeNAS is known to work well with all major virtualization platforms, but due to the nature of the decoupled hardware, we aren’t able to officially certify the software with the virtualization vendors. Therefore, if something goes haywire, the user cannot turn to the virtualization vendor for assistance and instead must rely on the FreeNAS community.

We make TrueNAS because many are turning to virtualization…and need Support.

With a software-only solution, you must verify that every component is on the virtualization vendors’ compatibility lists, and when your configuration changes (such as upgrading to a new network card), you need to validate the configuration again. Most businesses can’t afford the risk, so TrueNAS is officially certified to support Citrix XenServer, VMware ESXi, and Microsoft Hyper-V.

FreeNAS and TrueNAS both have their rightful places.

FreeNAS is the world’s most popular software defined storage OS, with more downloads and installs than any other storage software on the planet. The sheer magnitude of interest speaks volumes about its myriad applications. And, as its enterprise counterpart, TrueNAS has the performance, high-availability, functionality, and professional software support that mission-critical storage applications require.

Brett Davis
iXsystems Executive Vice President