The FreeNAS Hardware Guide You’ve Asked For | Does ZIL Size Matter? Issue #18

Written by Annie Zhang.

Hello FreeNAS Users,

We’re proud to present several guides this month, including one that’s frequently requested: an official FreeNAS hardware guide direct from the developers of FreeNAS. We’ve been working on it for a while and we hope you find it helpful.

Cheers,
The FreeNAS Team
A Complete Guide to FreeNAS Hardware Design
Check out the definitive FreeNAS hardware guide authored by the one and only Josh Paetzel, a core member of the FreeNAS team and iXsystems Director of IT.

Why ZIL Size Matters (or Doesn’t) by Marty Godsey
Marty Godsey, Sales Engineer at iXsystems, explains how ZIL size can affect performance and the other factors that you need to take into account to get the best performance from your system. Read more >>

FreeBSD Journal

 

How to install MiniDLNA into FreeNAS 9.3
One of our forum community members, joeschmuck, wrote a helpful, step-by-step tutorial for manually installing MiniDLNA in a jail on FreeNAS. Read more >>
FreeNAS Certification Classes
We now offer a free Intro to FreeNAS class that runs every day. For those of you interested in learning more about advanced topics, we also offer paid, fully interactive classes. Read more >>
6 Reasons Why TrueNAS is replacing NetApp and EMC – Free Webinar
What’s the difference between FreeNAS and TrueNAS? For the answer, we invite you to join Matt Olander, Co-Founder of iXsystems, in a free webinar about TrueNAS. Find out why people are making the switch from big-name, legacy storage vendors to TrueNAS. Read more >>
Upcoming Live Events

TechTip #14
FreeNAS will automatically check for updates every night, or you can check manually whenever you want. You can then apply them at any time.
Join the Team
iXsystems, the company that sponsors FreeNAS, is looking for a few good people to join our team. Interested? The full list of available positions can be found on our website.
Links of the Month

 

A Complete Guide to FreeNAS Hardware Design, Part IV: Network Notes & Conclusion

Written by Joshua Paetzel.

Network

FreeNAS is a NAS and/or IP-SAN (via iSCSI), which means everything happens over the network. If you are after performance, you are going to want good switches and server-grade network cards: there really is a difference between a Cisco 2960G or Juniper EX4200 and a Netgear or D-Link, and the difference becomes more pronounced if you are doing VLANs, spanning tree, jumbo frames, L3 routing, and so on. If you are building a home media setup, everything might be happening over wireless, in which case network performance becomes far less critical.
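
If you do run jumbo frames or VLANs, the settings have to match end to end: the NIC, the switch ports, and every other host on the segment. As a minimal sketch, assuming an Intel interface named igb0 and a VLAN ID of 10 (both hypothetical), the underlying FreeBSD commands look like this; on FreeNAS itself you would normally apply the equivalent settings through the web GUI's network configuration rather than from the shell:

    # raise the MTU for jumbo frames (the switch ports must allow 9000-byte frames too)
    ifconfig igb0 mtu 9000

    # create a tagged VLAN interface on top of the physical NIC
    ifconfig vlan10 create
    ifconfig vlan10 vlan 10 vlandev igb0 up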

In the current landscape, GigE networking is nearly ubiquitous, while 10GbE networking is still expensive enough to keep it out of the hands of many home and small business setups. If you have a number of users and appropriate switch gear, you can benefit from aggregating multiple GigE network connections to your FreeNAS box. Modern hard drives approach, and often exceed, the performance of GigE networking when doing sequential reads or writes, and modern SSDs exceed GigE networking for both sequential and random read/write workloads. This means that, even on the low end, a FreeNAS system with a 3-drive RAIDZ pool and a single GigE network connection will be bottlenecked by the network: the volume can read or write sequentially at 200+ MB/sec while the network is limited to roughly 115 MB/sec. If your application is IOPS-bound rather than bandwidth-bound (such as a database or virtualization platform), and your storage is made up of spinning disks, you might find that a single GigE connection is sufficient for a dozen or more disks.
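
On the aggregation point: a single GigE link tops out around 115 MB/sec, so even a small RAIDZ doing 200+ MB/sec of sequential I/O can saturate it, while an LACP lagg of two or more links lets multiple clients share that bandwidth. As a minimal sketch, assuming two Intel ports named igb0 and igb1 and an example address (all hypothetical), the underlying FreeBSD commands are shown below; in FreeNAS you would normally create the lagg from the web GUI, and the switch ports must be configured for LACP as well:

    # bundle two gigabit ports into an LACP link aggregation
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 192.168.1.10/24 up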

Intel NICs are the best game in town for gigabit networking with FreeNAS. The desktop parts are fine for home or SOHO use. If your system is under-provisioned for CPU or sees heavy usage, the server parts will have better offload capabilities and correspondingly lower CPU utilization. Stay away from Broadcom and Realtek interfaces whenever possible.

In the ten-gigabit arena, Chelsio NICs are hands down the best choice for FreeNAS. There is a significant premium for these cards over some alternatives, so the second and third choices would be Emulex and Intel, in that order. FreeNAS includes drivers for a number of other 10GbE cards, but these are largely untested by the FreeNAS developers.

Fibre Channel

Options here are very limited: QLogic is pretty much the only game in town. The 16Gb parts do not have a driver yet and the 1Gb parts are no longer supported, so you'll be limited to the 8Gb, 4Gb, and 2Gb parts. Initiator mode works out of the box, and the "easter egg" to enable target mode is well documented and tested.

Boot Devices

FreeNAS was originally designed to run as a read-only image on a small boot device. The latest versions now run read/write from a ZFS boot pool, and a SATA DOM or small SSD makes a great boot device. Since ZFS is used, the boot device itself can be mirrored. As an alternative to a SATA DOM or SSD, one or more high-quality USB sticks can be used. The absolute minimum boot device size is 4 GB, but 8 GB is a more comfortable and recommended minimum; anything beyond 16 GB will go mostly unused. Since the boot device can't be used for sharing data, installing FreeNAS to a high-capacity hard drive is not recommended.
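
If you do mirror the boot device, it is worth confirming from time to time that both halves are healthy. A minimal sketch, assuming a FreeNAS 9.3-style install where the boot pool is named freenas-boot:

    # show the state of the boot pool and its mirrored boot devices
    zpool status freenas-boot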

Conclusion

Hardware configuration is one of the most prominent and active categories in the FreeNAS forum. I have attempted to share some best practices that we at iXsystems have seen over the years and I hope that I have not missed anything big. With so many options and use cases, it’s difficult to come up with a set of one-size-fits-all instructions. Some other tips if you get stuck:

  1. Search the FreeNAS Manual for your version of FreeNAS. Most questions are already answered in the documentation.
  2. Before you ask for help on a specific issue, always search the forums first. Your specific issue may have already been resolved.
  3. If using a web search engine, include the term “FreeNAS” and your version number.

As an open source community, FreeNAS relies on the input and expertise of its users to help improve it. Take some time to assist the community; your contributions benefit everyone who uses FreeNAS.

To sum up: FreeNAS is great—I’ve used it for many years and we have several instances running at iXsystems. I attempted to provide accurate and helpful advice in this post and as long as you follow my guidance, your system should work fine. If not, feel free to let me know. I’d love to hear from you.

Josh Paetzel
iXsystems Director of IT


A Complete Guide to FreeNAS Hardware Design, Part III: Pools, Performance, and Cache

Written by Joshua Paetzel.

ZFS Pool Configuration

ZFS storage pools are made up of vdevs which are striped together. A vdev can be a single disk, an N-way mirror, a RAIDZ (similar to RAID 5), a RAIDZ2 (similar to RAID 6), or a RAIDZ3 (essentially a triple-parity stripe; there is no common hardware RAID analog). The key thing to know here is that a ZFS vdev delivers the IOPS performance of a single device in that vdev. That means if you create a RAIDZ2 of ten drives, it will have the capacity of eight drives but the IOPS performance of a single drive. IOPS become important when providing storage to things like database servers or virtualization platforms, which rarely perform sequential transfers. In those scenarios, you'll find that larger numbers of mirrors or very small RAIDZ groups are the appropriate choices. At the other end of the scale, a single user doing sequential reads or writes will benefit from a larger RAIDZ[1|2|3] vdev; many home media server applications do quite well with a pool made up of a single 3-8 drive RAIDZ[1|2|3] vdev.
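
To make the IOPS trade-off concrete, here is a minimal sketch of the two layouts as raw zpool commands, with hypothetical device names (da0 through da5) and a hypothetical pool name (tank); on FreeNAS you would build these through the Volume Manager in the GUI rather than at the shell, but the resulting vdev layouts are the same:

    # IOPS-oriented: three 2-way mirrors striped together
    # (roughly the IOPS of three drives, usable capacity of three drives)
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

    # capacity/streaming-oriented: one 6-drive RAIDZ2 vdev
    # (usable capacity of four drives, but the IOPS of a single drive)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5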

FreeNAS Volumes
RAIDZ1 gets a special note here. When a RAIDZ1 loses a drive, every other drive in the vdev becomes a single point of failure, and a ZFS storage pool will not operate if a vdev fails. This means that if you have a pool made up of a single 10-drive RAIDZ1 vdev and one drive fails, pool operation depends on none of the remaining nine drives failing. In addition, with modern drives being as large as they are, rebuild times are not trivial, and during the rebuild all of the drives are doing increased I/O as the array resilvers. That additional stress can cause further drives in the array to fail, and since a degraded RAIDZ1 can withstand no additional failures, you are very close to "game over" at that point.

A note on power-of-two pool configurations: there is much wisdom out there on the internet about the value of configuring ZFS vdevs with a power-of-two number of drives. This made some sense when building ZFS pools that did not use compression. Since FreeNAS enables compression by default (and there are zero cases where it makes sense to change the default!), any attempt to optimize ZFS through vdev sizing is foiled by the compressor. Pick your vdev configuration based on the IOPS you need, the space required, and the desired resilience. In most cases, your performance will be limited by your networking anyway.
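
You can verify that compression is on, and see how much it is helping, with a quick check from the shell; a minimal sketch, assuming a pool named tank:

    # FreeNAS enables lz4 compression on new volumes by default
    zfs get compression,compressratio tank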

ZIL Devices

ZFS can use a dedicated device for its ZIL (ZFS intent log), which is essentially the write cache for synchronous writes. Some workflows generate very little traffic that would benefit from a dedicated ZIL, while others use synchronous writes exclusively and, for all practical purposes, require a dedicated ZIL device. The key thing to remember is that the ZIL always exists in memory. If you have a dedicated device, the in-memory ZIL is mirrored to that device; otherwise it is mirrored to your pool. By using an SSD, you reduce latency and contention, since your data pool (presumably made up of spinning disks) no longer has to mirror the in-memory ZIL.

There is a lot of confusion surrounding ZFS and ZIL device failure. When ZFS was first released, dedicated ZIL devices were essential to pool integrity: a missing ZIL vdev would render the entire pool unusable, so mirroring the ZIL devices was essential to prevent a failed ZIL device from destroying the pool. This is no longer the case. A missing ZIL vdev will impact performance but will not cause the pool to become unavailable, yet the conventional wisdom that the ZIL must be mirrored to prevent data loss lives on. Keep in mind that the dedicated ZIL device merely mirrors the real, in-memory ZIL. Data loss can only occur if your dedicated ZIL device fails and the system then crashes with writes in transit in the unmirrored in-memory ZIL. As soon as the dedicated ZIL device fails, the mirror of the in-memory ZIL moves back to the pool, so in practice there is only a window of a few seconds where the system is vulnerable to data loss following a ZIL device failure.

After a crash, ZFS will attempt to replay the ZIL contents. SSDs have a volatile write cache of their own, so they may lose data during a bad shutdown; to ensure the ZIL replay has all of your in-flight writes, the SSDs used as dedicated ZIL devices should have power protection. HGST makes a number of devices that are specifically targeted as dedicated ZFS ZIL devices, and other manufacturers such as Intel offer appropriate devices as well. In practice, only the designer of the system can determine whether the use case warrants an enterprise-grade SSD with power protection or whether a consumer-level device will suffice. The primary characteristics to look for are low latency, high random write performance, high write endurance, and, depending on the situation, power protection.
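
For reference, adding a mirrored pair of power-protected SSDs as a dedicated log (SLOG) vdev is a single operation; a minimal sketch with hypothetical device names ada4 and ada5 and a pool named tank, keeping in mind that FreeNAS can also do this from the Volume Manager:

    # attach a mirrored dedicated ZIL (SLOG) to an existing pool
    zpool add tank log mirror ada4 ada5

    # see whether a dataset is forcing or disabling synchronous writes
    zfs get sync tank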

L2ARC Devices

ZFS allows you to equip your system with dedicated read cache devices, which you will typically want to be lower latency than your main storage pool. Remember that the primary read cache is system RAM (the ARC), which is orders of magnitude faster than any SSD; if you can satisfy your read cache requirements with RAM, you'll enjoy better performance than with an SSD read cache.

There is even a scenario where an L2ARC can hurt performance. Consider a system with 6 GB of memory cache (ARC) and a working set of 5.9 GB: it might enjoy a read cache hit ratio of nearly 100%. If an SSD L2ARC is added, the L2ARC requires space in RAM to map its address space, and that space comes at the cost of evicting data from memory into the L2ARC. The ARC hit rate drops, and the misses are satisfied from the far slower SSD. In short, not every system can benefit from an L2ARC. FreeNAS includes tools, in the GUI and at the command line, that can determine ARC size and hit rates. If the ARC size is hitting the maximum allowed by RAM and the hit rate is below 90%, the system can benefit from an L2ARC; if the ARC is smaller than RAM or the hit rate is already 99.x%, adding an L2ARC will not improve performance.

As for selecting appropriate L2ARC devices, they should be biased towards random read performance. The data on them is not persistent, and ZFS behaves quite well when faced with L2ARC device failure, so there is no need or provision to mirror or otherwise make L2ARC devices redundant, nor is there a need for power protection on these devices.
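
Those command-line numbers come straight from FreeBSD sysctls, so a quick look before buying an SSD is easy; a minimal sketch, with a hypothetical pool named tank and cache device ada6:

    # current ARC size versus its maximum, plus cumulative hits and misses
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

    # only if the ARC is pinned at its maximum and the hit rate is well below ~90%:
    zpool add tank cache ada6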

Joshua Paetzel
iXsystems Director of IT

