We’re Expanding!

Written by Annie Zhang.

As you may know, last year saw an increase in demand for our storage and server products and that demand continues to grow. When we started out, we were just a small group of close-knit people. Over the years, we steadily added more and more talent to our company and before we knew it, we had grown to over 100 employees.

Of course, with that growth comes challenges. To put it simply, we don’t have enough space for employees to sit.

We had this problem about a year ago, so we renovated the software engineering area to add more desks. But here we are, out of space again. Some suggested using a double-tiered desk, but clearer heads prevailed with the solution of acquiring a second building.

Meet our second office. It will be the new home of FreeNAS and will also house TrueNAS software engineering and support. It's just down the road from our current office; we're not moving, just expanding. The iXsystems headquarters office is unchanged, as is our contact information.

The obvious benefit of this expansion is that people will have their own work spaces again and as we hire new employees, they will have a place to sit. The expansion will also allow us to cluster departments together.

We expect the renovations to be finished in the next few months. When we're all settled in, we'll let you know, so feel free to drop by and check it out.

Yes, You Can Virtualize FreeNAS

Written by Joshua Paetzel.

FreeNAS is the world’s most popular open source storage OS, and one of the more popular questions I get asked is, “How do I run FreeNAS as a VM?” Due to the number of caveats required to answer that question, I would typically short-circuit the conversation by recommending against it, or only recommend it for test environments since the prerequisite knowledge required to “do it right” can’t be passed on quickly. Somehow over time, this message morphed into a general consensus that “you cannot (or shouldn’t) virtualize FreeNAS at all under any circumstances”, which wasn’t my intention. So, I’m here to set the record straight once and for all: You absolutely can virtualize FreeNAS.

Whether you are test driving the functionality of FreeNAS, testing an upgrade for compatibility in your environment, or you want to insulate your FreeNAS system from hardware faults, virtualization can provide many well understood benefits. That said, while FreeNAS can and will run as a virtual machine, it’s definitely not ideal for every use case. If you do choose to run FreeNAS under virtualization, there are some caveats and precautions that must be considered and implemented. In this post I’ll describe what they are so that you can make well-informed choices.

Before we get started though, I should probably start with a disclaimer…

Warning

If best practices and recommendations for running FreeNAS under virtualization are followed, FreeNAS and virtualization can be smooth sailing. However, failure to adhere to the recommendations and best practices below can result in catastrophic loss of your ZFS pool (and/or data) without warning. Please read through them and take heed.

Ok, phew. Now that that’s over with, let’s get started.

1. Pick a virtualization platform

When developing FreeNAS, we run it as a VM. Our virtualization platform of choice is VMware, and it's the platform with which the FreeNAS developers have the most experience. FreeNAS includes VMware tools as well.

Our second choice for a virtualization platform is Citrix XenServer. FreeNAS has no tools built in for XenServer, but you get a solid virtualization experience nonetheless. Other hypervisors such as bhyve, KVM, and Hyper-V also work, but the development team does not use them on a daily basis.

2. Virtualizing ZFS

ZFS combines the roles of RAID controller, volume manager, and file system, and since it's all three in one, it wants direct access to your disks in order to work properly. The closer you can get ZFS to your storage hardware, the happier ZFS is, and the better it can do its job of keeping your data safe. Things like native virtual disks or virtual disks on RAID controllers insulate ZFS from the disks, and therefore should be avoided whenever possible. In a typical hypervisor setup, a disk on a RAID controller is presented to the hypervisor, which builds a datastore on it, and FreeNAS then runs from a virtual disk on that datastore. This places two layers between ZFS and the physical disks, which warrants the following precautions.

Precautions

  1. If you are not using PCI passthrough (more on that below), then you must disable the scrub tasks in ZFS. The hardware can “lie” to ZFS so a scrub can do more damage than good, possibly even permanently destroying your zpool.
  2. Disable any write caching happening on the SAN, NAS, or RAID controller itself. A write cache can easily confuse ZFS about what has or has not been written to disk, and this confusion can result in catastrophic pool failures.
  3. Using a single disk leaves you vulnerable to pool metadata corruption, which could cause the loss of the pool. ZFS mirrors its pool metadata across three vdevs when they are available, so build your pool from a minimum of three vdevs, either striped or in a RAIDZ configuration; ideally, use vdevs that have their own redundancy. (A quick way to check an existing pool is sketched after this list.)
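
To make these precautions concrete, here is a minimal Python sketch that could be run from a shell on the FreeNAS VM. It flags any scrub in progress and counts top-level vdevs using the standard zpool(8) utility; the parsing of the zpool status layout is naive and the thresholds simply restate the advice above, so treat it as an illustration rather than a supported tool:

    import subprocess

    def zpool_names():
        # `zpool list -H -o name` prints one pool name per line, no headers.
        out = subprocess.check_output(["zpool", "list", "-H", "-o", "name"], text=True)
        return out.split()

    def check_pool(pool):
        status = subprocess.check_output(["zpool", "status", pool], text=True)

        # Precaution 1: without PCI passthrough, scrubs should not be running.
        if "scrub in progress" in status:
            print(pool + ": WARNING - scrub in progress; risky without passthrough")

        # Precaution 3: count top-level vdevs. In `zpool status` output the
        # pool name sits one tab stop in, and top-level vdevs two spaces
        # deeper. This naive parse also counts log/cache/spare devices.
        vdevs = 0
        for line in status.splitlines():
            body = line.expandtabs(2)
            if body.startswith("    ") and not body.startswith("      "):
                vdevs += 1
        if vdevs < 3:
            print(pool + ": WARNING - only %d top-level vdev(s); three or more are safer" % vdevs)

    for name in zpool_names():
        check_pool(name)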

3. Consider the Use Case

Is this a production or non-production FreeNAS application? The answer to this question has significant implications to the subsequent recommended practices.

Non-Production

If your use case is a test lab, science experiment, pre-upgrade checks of a new version, or any other situation where real data that you care about isn’t at stake, go ahead and virtualize. Create a VM with 8GB of RAM, two vCPUs, a 16GB install disk, and a single data disk of whatever size is appropriate for your testing, and go to town.
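
On VMware, a test VM with those specs can also be created programmatically. Here is a minimal, illustrative sketch using pyVmomi (VMware's Python SDK); the vCenter hostname, credentials, datastore name, and VM name are placeholders, and the sketch only creates the bare VM shell with two vCPUs and 8GB of vRAM, leaving the disks and NIC to be attached afterwards:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholders: point these at your own vCenter/ESXi host.
    HOST, USER, PWD = "vcenter.example.com", "administrator", "secret"

    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host=HOST, user=USER, pwd=PWD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        dc = content.rootFolder.childEntity[0]             # first datacenter
        pool = dc.hostFolder.childEntity[0].resourcePool   # first cluster/host pool

        spec = vim.vm.ConfigSpec(
            name="freenas-test",
            numCPUs=2,                  # matches the two-vCPU recommendation
            memoryMB=8192,              # 8GB of vRAM
            guestId="freebsd64Guest",   # FreeNAS is FreeBSD-based
            files=vim.vm.FileInfo(vmPathName="[datastore1] freenas-test"),
        )
        # Creates the bare VM; virtual disks and a NIC would be added with
        # vim.vm.device.VirtualDeviceSpec entries in deviceChange.
        task = dc.vmFolder.CreateVM_Task(config=spec, pool=pool)
    finally:
        Disconnect(si)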

Production

This is where things get serious. If you’re using FreeNAS in an application that’s relied on for daily operations, this is considered a “Production Environment”, and additional precautions must be followed closely to avoid downtime or data loss.

If you use PCI passthrough (aka DirectPath I/O), you can then use FreeNAS just like it’s installed on physical hardware. The PCI device passthrough capability allows a physical PCI device from the host machine to be assigned directly to a VM. The VM’s drivers can use the device hardware directly without relying on any driver capabilities from the host OS. These VMware features are unavailable for VMs that use PCI passthrough:

  • Hot adding and removing of virtual devices
  • Suspend and resume
  • Record and replay
  • Fault tolerance
  • High availability
  • DRS
  • Snapshots

To use PCI passthrough, you need a host bus adapter (HBA) supported by FreeNAS (we recommend LSI controllers based on the 6Gb/s LSI 2008 chipset, which are well supported in FreeNAS) assigned as a PCI passthrough device directly to your FreeNAS VM. The FreeNAS VM then has direct access to the disks. Make sure to adhere to your hypervisor's guidelines on using PCI passthrough. With passthrough in place, ZFS talks to the disks as if it weren't virtualized at all, so FreeNAS is safe to use in a production scenario. (A quick way to verify this from inside the VM is sketched below.)
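
One way to sanity-check a passthrough setup from inside the FreeNAS VM is to look at how the disks identify themselves. The following Python sketch shells out to FreeBSD's camcontrol(8); it assumes the usual inquiry string that VMware reports for emulated disks, whereas disks behind a passed-through HBA report their real vendor/model strings:

    import subprocess

    # `camcontrol devlist` prints one line per CAM device, with the inquiry
    # (vendor/model) string in angle brackets, e.g.
    #   <VMware Virtual disk 1.0>  at scbus2 target 0 lun 0 (pass0,da0)
    out = subprocess.check_output(["camcontrol", "devlist"], text=True)
    for line in out.splitlines():
        if "VMware Virtual" in line:
            print("emulated by the hypervisor:", line.strip())
        else:
            print("likely direct/passthrough :", line.strip())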

4. Other Considerations

If you are still interested in virtualizing FreeNAS, pay attention to the following:

Virtualization Requirements

Adhere to the FreeNAS hardware recommendations when allocating resources to your FreeNAS VM. It goes without saying that virtualized FreeNAS is still subject to the same RAM and CPU requirements as a physical machine. When you virtualize FreeNAS, your VM will need the following (a quick sanity check is sketched after the list):

    • At least two vCPUs
    • 8GB or more of vRAM; at least 12GB of vRAM if you use jails/plugins
    • Two or more vDisks:
      • A vDisk at least 16GB in size for the OS and boot environments
      • One or more vDisks at least 4GB in size for data storage (at least three are recommended)
    • A bridged network adapter
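
Those minimums are easy to encode as a quick sanity check. The following illustrative Python sketch hard-codes the thresholds from the list above; the function and parameter names are mine for illustration, not part of any FreeNAS API:

    def check_vm_spec(vcpus, vram_gb, vdisk_gbs, uses_jails=False, bridged_nic=True):
        """vdisk_gbs: sizes of all vDisks; the first is assumed to be the OS disk."""
        problems = []
        if vcpus < 2:
            problems.append("need at least two vCPUs")
        if vram_gb < (12 if uses_jails else 8):
            problems.append("need 8GB of vRAM, or 12GB with jails/plugins")
        if not vdisk_gbs or vdisk_gbs[0] < 16:
            problems.append("OS/boot-environment vDisk should be at least 16GB")
        data = [d for d in vdisk_gbs[1:] if d >= 4]
        if len(data) < 1:
            problems.append("need one or more data vDisks of at least 4GB")
        elif len(data) < 3:
            problems.append("three or more data vDisks are recommended")
        if not bridged_nic:
            problems.append("use a bridged network adapter")
        return problems or ["spec meets the recommendations"]

    print(check_vm_spec(vcpus=2, vram_gb=8, vdisk_gbs=[16, 1024, 1024, 1024]))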

Striping your vDisks

In this configuration, ZFS cannot repair data corruption, but it is resilient against losing the entire pool to damage to critical pool data structures. If you are using a SAN/NAS to provision the vDisk space, then three striped 1TB drives will require 3TB of external LUN usable space.

RAIDZ protection of your vDisks

In this configuration, ZFS will repair data corruption. However, you sacrifice an additional virtual disk's worth of space (or two if RAIDZ2 is used), since the external storage array already protects the LUN while RAIDZ adds its own parity on top of that. If you are using a SAN/NAS to provision the vDisk space, then three 1.5TB vDisks in RAIDZ1 will require 4.5TB of external LUN space to yield 3TB of usable capacity.

Disk space needed for provisioning

With striping, you’ll be required to provision 3TB of space from the SAN/NAS storage array to get 3TB of usable space. With RAIDZ1 protection, one virtual disk’s worth of capacity goes to parity, so you’ll be required to provision 4.5TB of space from the SAN/NAS storage array to get 3TB of usable space. Depending on the $/GB for your SAN/NAS, this additional 1.5TB can get quite expensive.
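
The arithmetic generalizes: with n vDisks carrying p disks' worth of parity, usable capacity is raw * (n - p) / n, so the space to provision is usable * n / (n - p). A tiny Python sketch of that calculation (the function name is hypothetical, not a tool):

    def provisioned_tb(usable_tb, vdisks=3, parity=0):
        """Raw SAN/NAS space to provision for a target usable capacity.
        parity=0 models a stripe, 1 models RAIDZ1, 2 models RAIDZ2."""
        return usable_tb * vdisks / (vdisks - parity)

    print(provisioned_tb(3, vdisks=3, parity=0))  # stripe: 3.0 TB for 3TB usable
    print(provisioned_tb(3, vdisks=3, parity=1))  # RAIDZ1: 4.5 TB for 3TB usable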

TL;DR Summary

I have attempted to share some best practices that the engineering team at iXsystems has used while virtualizing FreeNAS, and I hope that I have not missed anything big. With so many different hypervisors, it is difficult to give you specific instructions. You need to take some precautions to utilize your setup in a production environment safely:

      • PCI passthrough of an HBA: the best case, and the recommended approach
      • Write cache: disabled if using a RAID controller, SAN, or NAS
      • FreeNAS scrub tasks: disabled unless PCI passthrough is used
      • Disk configuration
        • Single disk: Vulnerable to pool metadata corruption, which could cause the loss of the pool. Can detect — but cannot repair — user data corruption.
        • Three or more virtual disks striped (even if they are from the same datastore!): Resilient against pool corruption. Can detect — but cannot repair — corrupted data in the pool. Depending on what backs the vDisks you may be able to survive a physical disk failure, but it is unlikely that the pool will survive.
        • Three or more virtual disks in RAIDZ: Can detect and repair data corruption in the pool, assuming the underlying datastore and/or disks are functional enough to permit repairing by ZFS’ self-healing technology.
      • Never ever run a scrub from FreeNAS while a patrol read, consistency check, or any other underlying volume repair operation, such as a rebuild, is in progress.

Some other tips if you get stuck:

      • Search the FreeNAS Manual for your version of FreeNAS. Most questions are already answered in the documentation.
      • Before you ask for help on a virtualization issue, always search the forums first. Your specific issue may have already been resolved.
      • If using a web search engine, include the term “FreeNAS” and your version number.

As an open source community, FreeNAS relies on the input and expertise of its users to help improve it. Take some time to assist the community; your contributions benefit everyone who uses FreeNAS.

To sum up: virtualizing FreeNAS is great—the engineering organization and I have used it that way for many years, and we have several VMs running in production at iXsystems. I attempted to provide accurate and helpful advice in this post and as long as you follow my guidance, your system should work fine. If not, feel free to let me know. I’d love to hear from you.

Josh Paetzel
iXsystems Senior Engineer