Yes, You Can Virtualize FreeNAS

FreeNAS is the world’s most popular open source storage OS, and one of the more popular questions I get asked is, “How do I run FreeNAS as a VM?” Due to the number of caveats required to answer that question, I would typically short-circuit the conversation by recommending against it, or only recommend it for test environments since the prerequisite knowledge required to “do it right” can’t be passed on quickly. Somehow over time, this message morphed into a general consensus that “you cannot (or shouldn’t) virtualize FreeNAS at all under any circumstances”, which wasn’t my intention. So, I’m here to set the record straight once and for all: You absolutely can virtualize FreeNAS.


Whether you are test driving the functionality of FreeNAS, testing an upgrade for compatibility in your environment, or you want to insulate your FreeNAS system from hardware faults, virtualization can provide many well understood benefits. That said, while FreeNAS can and will run as a virtual machine, it’s definitely not ideal for every use case. If you do choose to run FreeNAS under virtualization, there are some caveats and precautions that must be considered and implemented. In this post I’ll describe what they are so that you can make well-informed choices.

Before we get started though, I should probably start with a disclaimer…

Warning

If best practices and recommendations for running FreeNAS under virtualization are followed, FreeNAS and virtualization can be smooth sailing. However, failure to adhere to the recommendations and best practices below can result in catastrophic loss of your ZFS pool (and/or data) without warning. Please read through them and take heed.

Ok, phew. Now that that’s over with, let’s get started.

1. Pick a virtualization platform

When developing FreeNAS, we run it as a VM. Our virtualization platform of choice is VMware, the platform with which the FreeNAS developers have the most experience, and FreeNAS includes VMware tools as well.


Our second choice for a virtualization platform is Citrix XenServer. FreeNAS has no built-in tools for XenServer, but you get a solid virtualization experience nonetheless. Other hypervisors such as bhyve, KVM, and Hyper-V also work, but the development team does not use them on a daily basis.

2. Virtualizing ZFS

ZFS combines the roles of RAID controller, volume manager, and file system, and since it’s all three in one, it wants direct access to your disks in order to work properly. The closer you can get ZFS to your storage hardware, the happier ZFS is, and the better it can do its job of keeping your data safe. Native virtual disks, or virtual disks on RAID controllers, insulate ZFS from the disks and should therefore be avoided whenever possible. In a typical hypervisor setup, the disks sit behind a RAID controller that is presented to the hypervisor, which in turn carves a datastore out of that storage and places the FreeNAS virtual disk on it. That puts two layers between ZFS and the physical disks, which warrants the following precautions.

Precautions

  1. If you are not using PCI passthrough (more on that below), then you must disable scrub tasks in FreeNAS. The underlying hardware can “lie” to ZFS, so a scrub can do more damage than good, possibly even permanently destroying your zpool.
  2. Disable any write caching that is happening on the SAN, NAS, or RAID controller itself. A write cache can easily confuse ZFS about what has or has not been written to disk, and that confusion can result in catastrophic pool failures.
  3. Using a single disk leaves you vulnerable to pool metadata corruption, which could cause the loss of the entire pool. ZFS mirrors its pool metadata across three vdevs when they are available, so build your pool from a minimum of three vdevs, either striped or in a RAIDZ configuration; ideally, use vdevs that have their own redundancy. (A brief sketch of precautions 2 and 3 follows this list.)
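
As a brief sketch of precautions 2 and 3 from a shell: the MegaCli invocation below assumes an LSI MegaRAID controller (other controllers, SANs, and NAS appliances have their own cache settings), and “tank” is a placeholder pool name.

    # Precaution 2: set all logical drives to write-through, disabling the
    # controller's write cache (LSI MegaRAID syntax; check your vendor's docs).
    MegaCli -LDSetProp WT -LAll -aAll

    # Precaution 3: confirm the pool spans at least three top-level vdevs;
    # zpool status lists every vdev that makes up the pool.
    zpool status tank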

3. Consider the Use Case

Is this a production or non-production FreeNAS application? The answer to this question has significant implications for the recommended practices that follow.

Non-Production

If your use case is a test lab, science experiment, pre-upgrade checks of a new version, or any other situation where real data that you care about isn’t at stake, go ahead and virtualize. Create a VM with 8GB of RAM, two vCPUs, a 16GB install disk, and a single data disk of whatever size is appropriate for your testing, and go to town.
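
If you’d rather script it than click through a wizard, here is a minimal sketch using KVM’s virt-install (one of the hypervisors mentioned above); the VM name, ISO path, bridge name, and OS-variant string are placeholders you’d adjust for your host:

    # Creates a FreeNAS test VM: 8GB RAM, 2 vCPUs, a 16GB install disk,
    # and a 64GB data disk (adjust the data disk size to suit your testing).
    virt-install \
      --name freenas-test \
      --memory 8192 \
      --vcpus 2 \
      --cdrom /path/to/FreeNAS.iso \
      --disk size=16 \
      --disk size=64 \
      --network bridge=br0 \
      --os-variant freebsd10.0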

Production

This is where things get serious. If you’re using FreeNAS in an application that’s relied on for daily operations, that’s a “Production Environment”, and the additional precautions below must be followed closely to avoid downtime or data loss.

If you use PCI passthrough (aka DirectPath I/O), you can use FreeNAS just as if it were installed on physical hardware. PCI device passthrough allows a physical PCI device on the host machine to be assigned directly to a VM. The VM’s drivers use the device hardware directly, without relying on any driver capabilities from the host OS. These VMware features are unavailable for VMs that use PCI passthrough:

  • Hot adding and removing of virtual devices
  • Suspend and resume
  • Record and replay
  • Fault tolerance
  • High availability
  • DRS
  • Snapshots

To use PCI passthrough, connect your disks to a host bus adapter (HBA) supported by FreeNAS (we recommend LSI controllers based on the 2008 chipset, which are 6Gb/s and well supported in FreeNAS) and assign that HBA directly to your FreeNAS VM as a PCI passthrough device. The FreeNAS VM then has direct access to the disks. Make sure to adhere to your hypervisor’s guidelines on using PCI passthrough. With passthrough in place, it is as if you aren’t virtualizing at all, so FreeNAS is safe to use in a production scenario.
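
Once the HBA is passed through, it’s worth confirming from inside the guest that FreeNAS really is talking to the controller directly rather than to a virtual SCSI device. A quick sanity check from the FreeNAS shell (LSI 2008-family HBAs attach via FreeBSD’s mps driver):

    # The passed-through HBA should attach via the mps(4) driver:
    dmesg | grep -i mps

    # Disks behind the HBA should appear with their real model strings:
    camcontrol devlist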

4. Other Considerations

If you are still interested in virtualizing FreeNAS, pay attention to the following:

Virtualization Requirements

Adhere to the FreeNAS hardware recommendations when allocating resources to your FreeNAS VM. It goes without saying that virtualized FreeNAS is still subject to the same RAM and CPU requirements as a physical machine. When you virtualize FreeNAS, your VM will need:

    • At least two vCPUs
    • 8GB or more of vRAM (at least 12GB if you use jails/plugins)
    • Two or more vDisks:
      • A vDisk at least 16GB in size for the OS and boot environments
      • One or more vDisks at least 4GB in size for data storage (three or more recommended)
    • A bridged network adapter

Striping your vDisks

In this configuration, ZFS cannot repair corrupted data, but it is resilient against damage to critical pool data structures, the kind of corruption that loses an entire pool. If you are using a SAN/NAS to provision the vDisk space, three striped 1TB vDisks will require 3TB of usable external LUN space.
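
As a sketch, assuming your three vDisks show up in FreeNAS as da1 through da3 and using “tank” as a placeholder pool name (the FreeNAS GUI does the equivalent for you), a stripe is simply a pool created with no redundancy keyword:

    # Three-way stripe: no parity, but pool metadata is mirrored across
    # all three top-level vdevs.
    zpool create tank da1 da2 da3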

RAIDZ protection of your vDisks

In this configuration, ZFS can repair data corruption. However, you give up an additional virtual disk’s worth of space (or two, if RAIDZ2 is used), since the external storage array already protects the LUNs and RAIDZ adds its own parity on top. If you are using a SAN/NAS to provision the vDisk space, a three-vDisk RAIDZ1 pool needs 4.5TB of external LUN space (three 1.5TB vDisks) to deliver the same 3TB of usable capacity.
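
The equivalent sketch for single-parity RAIDZ, with the same placeholder device and pool names:

    # RAIDZ1: one vDisk's worth of space goes to parity, and ZFS can now
    # repair corrupted data rather than merely detect it.
    zpool create tank raidz1 da1 da2 da3
    zfs list tank    # usable capacity excludes the parity share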

Disk space needed for provisioning

With striping, you’ll be required to provision 3TB of space from the SAN/NAS storage array to get 3TB of usable space. With RAIDZ1, one virtual disk’s worth of space goes to parity, so you’ll need to provision 4.5TB from the SAN/NAS storage array to get the same 3TB of usable space. Depending on the $/GB of your SAN/NAS, that additional 1.5TB can get quite expensive.
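
As a quick check of that math: the space you must provision for U TB usable on N disks with P parity disks is U / (N - P) * N, so for 3TB usable on a three-disk RAIDZ1:

    # 3 / (3 - 1) * 3 = 4.5TB that must be provisioned from the array
    echo "scale=1; 3 / (3 - 1) * 3" | bc    # prints 4.5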

TL;DR Summary

I have attempted to share some of the best practices the engineering team at iXsystems has used while virtualizing FreeNAS, and I hope I haven’t missed anything big. With so many different hypervisors, it’s difficult to give specific instructions, so to use your setup safely in a production environment, take these precautions:

      • PCI passthrough of an HBA: the best case, and the recommended approach for production
      • Write cache: disabled on any RAID controller, SAN, or NAS backing the vDisks
      • FreeNAS scrub tasks: disabled unless PCI passthrough is used
      • Disk configuration
        • Single disk: vulnerable to pool metadata corruption, which could cause the loss of the pool. Can detect, but cannot repair, user data corruption.
        • Three or more virtual disks striped (even if they are from the same datastore!): resilient against pool corruption. Can detect, but cannot repair, corrupted data in the pool. Depending on what backs the vDisks, you may be able to survive a physical disk failure, but it is unlikely that the pool will survive.
        • Three or more virtual disks in RAIDZ: can detect and repair data corruption in the pool, assuming the underlying datastore and/or disks are functional enough to permit repair by ZFS’ self-healing technology.
      • Never, ever run a scrub from FreeNAS while a patrol read, consistency check, or any other underlying volume repair operation, such as a rebuild, is in progress.

Some other tips if you get stuck:

      • Search the FreeNAS Manual for your version of FreeNAS. Most questions are already answered in the documentation.
      • Before you ask for help on a virtualization issue, always search the forums first. Your specific issue may have already been resolved.
      • If using a web search engine, include the term “FreeNAS” and your version number.

As an open source community, FreeNAS relies on the input and expertise of its users to help improve it. Take some time to assist the community; your contributions benefit everyone who uses FreeNAS.

To sum up: virtualizing FreeNAS is great. The engineering organization and I have used it that way for many years, and we have several VMs running in production at iXsystems. I’ve attempted to provide accurate and helpful advice in this post, and as long as you follow the guidance, your system should work fine. If not, feel free to let me know; I’d love to hear from you.

Josh Paetzel
iXsystems Senior Engineer

22 Comments

  1. Brian

So this means that Atom C2758-based motherboards are a bad choice since they lack VT-d. Darn.

    • Michael Dexter

      Unfortunately, yes, and it takes a pretty advanced user to notice this. Kudos!

  2. Josh F

    It’s good to finally see an official statement regarding this!

  3. Benjamin Bryan

Thanks for posting that, Josh. Glad to see FreeNAS under a VM is finally official! I have a 9.3 server doing weekly scrubs without PCI passthrough (the zpool is on vmdks) with no issues so far. What problems or causes have you found that led to the recommendation to disable scrubbing? Are vmdks not able to keep up?

  4. unholythree

    Will xen-tools be included in any future builds of FreeNAS?

  5. nzalog

Does this mean VMXNET3 drivers will come preinstalled on FreeNAS distros any time soon? I’ve been having to hack them in…

  6. Patrick

    As much as I’ve been flamed on the forums historically for saying that it works just fine and there are no issues if designed correctly, it’s nice to see an official stance on the subject.

More of a chicken-and-egg scenario I have is: do you run jails while virtualizing FreeNAS, or just other VMs to accomplish other services… nested-VM overhead that’s click-click done vs. manual patching but possibly better performance.

    • George Hafiz

      I suspect the virtualisation solutions of ESX and BSD jails to be plenty efficient for nesting, unless you’re talking about performance intensive tasks running in your jails. Either way, I’d suggest taking the path of least resistance initially. Optimise performance when you have performance problems, never before.

  7. Thomas W.

    Okay,

So what I’m reading is that I can decommission the old Atom processor that I’m using for FreeNAS and spin up a VM in my XenServer? I know this article talks about home labs and ‘testing’ things out, but what about if I just want to use this as an everyday home file server that at most deals with Time Machine backups?

    • Michael Dexter

      FreeNAS is still considered a bare-metal OS with virtualization only being suggested for testing and demonstrations. However, “Yes, You Can Virtualize FreeNAS” and your mileage may vary. Do feel free to share your findings on XenServer.

  8. Golan

    Is it possible to install FreeNAS as VM in vSphere and share the volume via iSCSI? I’d like to use it for a nested lab.

    Anyone tried already?

    • Michael Dexter

      Many people test FreeNAS under VMware and you should have no problems with this as long as you take the article’s advice into consideration.

  9. Kice

    I don’t want an extra RAID card to run FreeNAS on a VM. Can I do it without it, and can FreeNAS work as normal?

    • Michael Dexter

You never want to use a RAID card with FreeNAS unless it is explicitly in JBOD mode. If you’re referring to passing through an HBA, your VM host may support passing individual drives through, but “virtual” disks are not recommended for production use. They are quite useful for testing and demonstrations.

  10. Shawn

Any chance of basic Hyper-V support (just network drivers would be ideal to start with)? Windows Server Hyper-V is free and a pretty good alternative to VMware.

    • Michael Dexter

      FreeNAS 9.2.1 had reasonable Hyper-V support but one is generally discouraged from virtualizing FreeNAS. What issues are you experiencing under Hyper-V?

  11. abinyah

I have (against all online/forum wisdom) been using FreeNAS ZFS under VMware for 3+ years now on an 8-bay Dell T320. I can attest to the best practice recommendations above, and also caution against less than 8GB of RAM and against RAID controllers, for all the reasons mentioned above. But that doesn’t mean it won’t work.
    I have broken every rule above and found that virtualized FreeNAS ZFS is more reliable than most other alternatives. FreeNAS has been one of the best things for virtualization since the bare-metal hypervisor.

    I use ZFS for photography archiving (along with separate backups), and so far, ZFS has been the most stable storage setup I’ve had over the last ten years (including the extra external backups, go figure).

    If I had to choose today, I would get another T320 8 or 18 bay and have FreeNAS run baremetal. As a VM, I would not hesitate to run it even two levels removed from the hardware. BTW, I did have HDD failures over the years, and hot swapped out a drive on the PERC and FreeNAS handled it flawlessly. Still I wouldn’t recommend any of that without first having a good backup.

  12. igor

I tried to run the installation on Server 2012 R2 and I’m having a problem with keyboard data entry. I can’t send input to start the install.

    • Joshms

      Double check and make sure you aren’t missing any guest additions.

  13. Tommaso Ercole

    xen-tools are included since the first release of 9.10 😀

  14. Ron Watkins

    Questions…
    1) If I disable the host controller cache, the random R/W speed drops from 1GB/s to 10-20MB/s, which is a significant reduction in performance. Since we need to keep the speed up at 1GB/s, keeping the controller cache enabled is desirable. What downsides do you see with the controller cache? From what I can tell, it will keep any data safe, even through a power failure, using the FBWC. I don’t see how this can “hurt” in any way. Please elaborate.
    2) I plan to have a set of 1TB vDisks presented to the FreeNAS VM. These vDisks will come from ESXi datastores, which are flushed when the ESXi server shuts down. The UPS Powerware software first commands the VMs to perform a clean shutdown, followed by ESXi, and then the host. A crash of ESXi is much less likely than a crash of FreeNAS, so I’m not worried about the underlying HW. I’m assuming I just create a ZFS pool using the set of 1TB vDisks from ESXi?
    3) How can I attach the QLE2562 8Gb FC controllers directly to the VM to allow them to be used to export the target LUNs I create on FreeNAS? I think this has something to do with passthrough mode, but I don’t know how to set that up. Can you clarify how that’s done?

    • Joshms

      This would be a better question for the forums. It would be really hard to respond to so much technical detail here. Give the forums a try!

