A Complete Guide to FreeNAS Hardware Design, Part II: Hardware Specifics

General Hardware Recommendations

I’ve built a lot of ZFS storage hardware and have two decades of experience with FreeBSD. The following are some thoughts on hardware.

Intel Versus AMD

FreeNAS is based on FreeBSD. FreeBSD has a long history of working better on Intel than AMD. Things like watchdog controllers, USB controllers, and temperature monitoring (among others) all have a better chance of being well supported on an Intel platform. This is not to say that AMD platforms won’t work, that there aren’t AMD platforms that work flawlessly with FreeNAS, or even that there aren’t Intel platforms that are poor choices for FreeNAS, but all things being equal, you’ll have better luck with Intel than AMD.

The Intel Avoton platforms are spendy but attractive: ECC support, low power, and AES-NI support (a huge boon for encrypted pools). On the desktop side of things, there are Core i3 platforms with ECC support, and of course there are many options in the server arena. The single socket E3 Xeons are popular in the community, and for higher end systems, the dual socket Xeon platforms are well supported.
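If you want to confirm that a candidate CPU actually exposes AES-NI before committing to it for an encrypted pool, the FreeBSD boot log lists the CPU feature flags. Below is a minimal sketch that checks for the flag; it assumes the standard /var/run/dmesg.boot location, and the has_aesni helper name is just for illustration.

import re

def has_aesni(dmesg_path="/var/run/dmesg.boot"):
    """Return True if the FreeBSD boot log lists the AESNI CPU feature flag."""
    with open(dmesg_path) as f:
        boot_log = f.read()
    # FreeBSD prints CPU capability flags on a line like:
    #   Features2=0x7ffafbff<SSE3,...,AESNI,...,AVX>
    return re.search(r"Features2=.*\bAESNI\b", boot_log) is not None

if __name__ == "__main__":
    print("AES-NI present:", has_aesni())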

Storage Controllers

LSI is the best game in town for add-on storage controllers. Avoid their MegaRAID solutions and stick with their HBAs. You’ll see three generations of HBAs commonly available today. The oldest (and slowest) are the SAS 2008 based I/O controllers such as the 9211 or the very popular IBM M1015. The next generation of controllers is based on the 2308, which added PCIe 3.0 support and more CPU horsepower on the controller itself; an example here is the 9207. Both the 2008 and 2308 based solutions are 6 Gbps SAS parts. The newest generation of controllers are 12 Gbps parts such as the 9300.

The FreeNAS driver for the 6 Gbps parts is based on version 16 of the stock LSI driver, with many enhancements that LSI never incorporated into their own driver. In addition, many of the changes after version 16 were specifically targeted at the Integrated RAID functionality that can be flashed onto these cards. As a result, “upgrading” the driver manually to the newer versions found on the LSI website can actually downgrade its reliability or performance. I highly recommend running version 16 firmware on these cards: it’s the configuration tested by LSI, and it’s the configuration tested by the FreeNAS developers. Running newer firmware should work, but running older firmware is not recommended or supported, as there are known flaws that can occur when the FreeNAS driver runs against a controller with older firmware. FreeNAS will warn you if the firmware on an HBA is incompatible with the driver. Heed this warning or data loss can occur. The newer 12 Gbps parts use version 5 of the LSI driver, and cards using this driver should run version 5 firmware.
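To verify the firmware/driver pairing described above on a running system, you can read the firmware version the driver reports and compare its major “phase” number against the recommendation. The sketch below is a rough illustration, not the FreeNAS alert code: it assumes the mps(4)/mpr(4) drivers expose a dev.<driver>.<unit>.firmware_version sysctl (worth verifying on your release), and the helper names are hypothetical.

import subprocess

RECOMMENDED = {"mps": 16, "mpr": 5}  # FreeBSD driver name -> recommended firmware phase

def firmware_phase(driver, unit=0):
    """Return the firmware major ("phase") number reported by the driver's sysctl."""
    oid = "dev.{}.{}.firmware_version".format(driver, unit)
    out = subprocess.run(["sysctl", "-n", oid],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip().split(".")[0])  # e.g. "16.00.01.00" -> 16

def check_hba(driver="mps", unit=0):
    phase = firmware_phase(driver, unit)
    wanted = RECOMMENDED[driver]
    if phase == wanted:
        print("{}{}: firmware phase {} matches the tested driver".format(driver, unit, phase))
    else:
        print("{}{}: firmware phase {}, but phase {} is recommended".format(driver, unit, phase, wanted))

if __name__ == "__main__":
    check_hba("mps", 0)  # a 6 Gbps HBA attached to the mps(4) driver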

Most motherboards have some number of SATA ports built in. Motherboards with large numbers of SATA ports often rely on certain Marvell and JMicron controllers for the extra ports; some of these controllers have compatibility issues with FreeNAS, and some also include their own RAID functionality. As a general rule, the integrated chipset AHCI SATA ports have no issues when used with FreeNAS; they just tend to be limited to 10 ports (and often far fewer) on most motherboards.

Hard Drives

Desktop drives should be avoided whenever possible. In a desktop there is no redundancy, so if an I/O fails, all is lost; for this reason, desktop drives will retry a failing I/O endlessly. In a storage array, the redundancy lives at the pool level: if an individual drive fails an I/O, ZFS will retry it on a different drive, and the faster the failing drive gives up, the faster the array can cope with hardware faults. For larger arrays, desktop drives (yes, I’ve seen attempts to build 1PB arrays with ZFS and desktop drives) are simply not usable in many cases. For small to medium size arrays, a number of manufacturers produce “NAS” hard drives rated for arrays of modest size (typically 6-8 drives or so). These drives are worth the additional cost.
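The retry behavior described above is governed by the drive’s error recovery timeout (WD markets it as TLER; smartctl reports it as SCT Error Recovery Control). As a rough way to see how a particular drive behaves, the sketch below shells out to smartctl; it assumes smartmontools is installed and that /dev/ada0 is the drive in question.

import subprocess

def sct_erc_report(device="/dev/ada0"):
    """Print the drive's SCT Error Recovery Control settings as reported by smartctl."""
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True)
    # NAS-class drives typically report a bounded read/write recovery time
    # (e.g. 7.0 seconds); desktop drives usually report the feature as disabled.
    print(out.stdout)

if __name__ == "__main__":
    sct_erc_report("/dev/ada0")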

At the high end, if you are building an array with SAS controllers and expanders, consider getting nearline 7200 RPM SAS drives. These drives carry only a small premium over enterprise SATA drives. Furthermore, running SATA drives behind SAS expanders, while supported, is a less desirable configuration than using SAS end to end, due to the difficulty of translating SATA errors across the SAS bus.

Josh Paetzel
iXsystems Director of IT

<< Part 1/4 of A Complete Guide to FreeNAS Hardware Design, Purpose and Best Practices

Part 3/4 of A Complete Guide to FreeNAS Hardware Design, Pools, Performance, and Cache >>

17 Comments

  1. Mike S

    what is the enclosure featured on this page?

  2. Don Jackson

    Are the LSI 12Gbps SAS HBAs supported by FreeNAS?

    • Michael Dexter

      The drivers are included but your mileage may vary.

  3. terry

    Wow, how FreeNAS has changed. No longer is it viable to even think about running FreeNAS. Goodbye boys. It was nice while it lasted.

    $10,000 worth of equipment for a NAS computer is too much for me.

    • Michael Dexter

      The article describes a pretty high-end system. The FreeNAS Mini is a tenth of the price and is available on Amazon.

  4. WD Red Drives

    Building my first NAS box. WD Red drives come with a feature called NASware 3.0. Is this something to disable? Background: Other box components: Asus P8B mobo, Xeon E3-1200v2 CPU, 8GB of DDR3 1600MHz ECC RAM. I plan on using two 4TB WD Red drives. The unit will serve an association with 20 – 35 off-board sites.

    • Brett Davis

      Our FreeNAS Mini ships with WD Red drives, and it’s not something that we disable.

  5. Michael S

    I’m considering purchasing this LSI HBA:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16816118218
    Since this is not an inexpensive item, I want to make sure I understand your recommendations on LSI storage controllers. You state that the FreeNAS BSD driver is based on version 16 of the stock LSI driver. You then recommend that we use version 16 of the LSI FIRMWARE (driver version == firmware version?). What if I receive this adapter with some other version of the firmware? Are there utilities in FreeNAS to flash the controller to version 16?

  6. Dave Trowbridge

    I picked up a Dell Poweredge 860 on Freecycle and the guy who gave it to me (an IT tech from UCSC, I think) recommended using FreeNAS on it. But your specs require a multicore CPU, and this doesn’t have that. Must I give up? I can’t afford to buy a server.

    • Michael Dexter

      This represents a high-end system. Give it a go but do try to meet the minimum 8GB RAM requirement.

  7. Axel Mertes

    We are considering SSD caching for both the ZIL and L2ARC.

    As far as I understood the recommendations here, I should lean towards a battery-protected, server-grade SSD for the ZIL, while the L2ARC may use more “standard” devices.

    I am currently looking at the SAMSUNG 845DC Pro, as all those Intel 3x00 SSDs are simply out of reach, price-wise. I may even split it up by using an 845DC Pro for the ZIL and 850 EVOs for the L2ARC.

    I plan to have like 4-8 TBytes in total for SSD cache on a potentially mirrored pool with ~64 TByte as of now.

    Some questions here:

    1. Which controller would you recommend to hook up the SSDs to the mainboard (Supermicro)?
    Would an LSI 2308 be enough?
    I think this one is SATA-III; does it support SSDs?

    2. I have 4 RAID enclosures which I’d like to set up as follows:
    All running as JBODs, presenting each disk individually to the host via 4 GBit FC.
    I have two FC ports per enclosure, so I’d like to present 8 drives per port.
    In total I then need 8 FC ports on the host computer, likely using two 2364 QLogic quad port cards.
    Each group of 8 disks becomes a vDev as RAIDZ2 (used to use RAID6 before, though…).
    Two enclosures become one pool, and the other two a mirror pool. As we have only single controllers in the enclosures, I think this keeps us safe against a controller failure (two vDevs failing) or a vDev failing because 3 disks inside it fail.

    With these chassis we can see about 1600 MByte/s throughput, read and write on sequential transfers (on Windows server up until now). Will that still be the same with ZFS?

    I read somewhere that a vDev is only as fast as a single disk inside the vDev. If that is true, we would be on a poor performance road. Is that really so?

    Is that the best config or what would you recommend to gain more performance?

    3. As we touch about 1-2 TByte of data in reads & writes during a single day, I believe having a cache of twice that size for the ZIL and L2ARC may be sufficient to feel like it’s all pure SSD.
    Is that a misconception?
    How would you outbalance SSD size for ZIL and L2ARC?
    Do I need dedicated ZIL and L2ARC for each pool?
    Can I have a dedicated “working pool” with SSD cache, while having a pure nearline mirror backup pool without?
    Would that affect total performance?

    I know, lots of questions.

    Any comments would be helpful!

    Best regards
    Axel

  8. El Gordito

    The 4TB Red drives are rated for a 1-8-bay NAS. Please help me understand: how can the drive dictate the limits of the NAS bays? Or, to look at it from another angle, what would happen if I put 24 4TB Red drives into a suitable enclosure? How/why would the individual drives care how many siblings they have?

    • jkh

      Short answer: They don’t!

    • PaulH

      The drives don’t care. The manufacturers are concerned about two things if too many drives are used together: temperature and vibration. A good enclosure can mitigate temperature. Some can mitigate vibration. If you plan on using more than 8 drives, consider both.

  9. Tom

    Shopping for NAS drives for a custom FreeNAS build I’m doing. Want four 6TB drives. WD, Seagate, or HGST: which brand would you recommend for this investment? Thanks, T

