Building a NAS

Original source (jro.io)
Tags: zfs diy freebsd nas raid jro.io
Clipped on: 2017-04-13



[Image 1]

A view from the side. The network equipment on top of the table is also connected to the UPS. The other computer on the left (my main desktop) is not on the UPS.

[Image 2]

A shot of the inside of the system.

[Image 3]

Another view of the inside of the system. Note the strip of wood tied to the top of the HDD fan wall. This is to prevent air from flowing back over the top of the wall, thus cutting the flow through the drives.

[Image 4]

Here is a screenshot of the mounted share. The FreeNAS devs recommend not going under 20% free space, but the OpenZFS devs say 10%+ is okay. See here for more info.

[Image 5]

Parts and Price List

Part | Make/Model | Qty | $ Per | $ Total | From
Chassis | SuperMicro SC846 | 1 | $200 | $200 | eBay
Motherboard | SuperMicro X10SRL-F | 1 | $272 | $272 | Amazon
CPU | Intel Xeon E5-1630v3 | 1 | $373 | $373 | SuperBiiz
RAM | Samsung M393A2G40DB0-CPB (16GB) | 4 | $80 | $318 | Amazon
HBA | IBM M1015 | 2 | $75 | $150 | eBay
PSU | SuperMicro PWS-920P-SQ | 2 | $118 | $236 | eBay
Backplane | SuperMicro BPN-SAS-846A | 1 | $250 | $250 | eBay
Boot Device | Intel 540s 120GB SSD | 2 | $53 | $105 | Amazon
CPU Cooler | Noctua NH-U9DXi4 | 1 | $56 | $56 | Amazon
120mm Fan | Noctua NF-F12 iPPC 3000 PWM | 3 | $24 | $72 | Amazon
80mm Fan | Noctua NF-R8 PWM | 2 | $10 | $20 | Amazon
UPS | APC SUA1500RM2U Smart-UPS | 1 | $300 | $300 | eBay
SSD Cage | SuperMicro MCP-220-84603-0N | 1 | $25 | $25 | eBay
SAS Cable | SFF-8087 to SFF-8087 | 4 | $11 | $44 | Amazon
HDD Screws | SuperMicro HDD Screws (100 ct) | 1 | $8 | $8 | Amazon
Lack Rack | Lack Coffee Table | 1 | $25 | $25 | IKEA
Tax | Tax | 1 | $199 | $199 | -
Data Drive | WD 8TB Red (8 + 1 Spare) | 9 | $319 | $2,871 | Amazon
Data Drive | WD 4TB Red (Already Owned) | 8 | $0 | $0 | -
Data Drive | WD 4TB Red (Spare) | 1 | $157 | $157 | Amazon
Total (No HDD): $2,795
Total (HDDs): $3,028
Grand Total: $5,823

Parts Selection Details

My primary objectives when selecting parts were as follows:

  1. Allow for up to 24 drives,
  2. Be able to saturate a gigabit Ethernet line over SMB/CIFS, and
  3. Have a system quiet enough that it can sit next to me in my office.

Redundancy and overall system stability were also obvious objectives and led me to select server-grade components wherever appropriate. Here’s a breakdown of the reasoning behind each part I selected:

  • Chassis: SuperMicro SC846 – I purchased this used on eBay for $200 shipped. It retails for close to $1000, so it’s a pretty good deal. I considered the Norco RPC-4224 for a while, but the SuperMicro chassis is much higher quality and has better thermal design (i.e. bottomless HDD sleds for better airflow). The specific chassis I purchased came with an older version of the SC846 backplane that doesn’t support larger-capacity volumes, so I had to buy a different backplane. The PSUs the chassis came with are really loud, so I purchased some quieter ones. The stock system fans are also loud, so I replaced those too. More information on the replacement backplane, PSUs, and fans below. I currently have 8 open drive bays to allow for future expansion.

  • Motherboard: SuperMicro X10SRL-F – This is SuperMicro’s basic LGA2011 server board. An LGA1151-based board would have worked but the SC846 chassis doesn’t have mounting holes for micro ATX boards and the full ATX versions of SuperMicro’s LGA1151 boards are very expensive (apparently the 846 does have micro ATX mounting holes, so I could have saved a fair bit of coin there. Oh well... thanks /u/techmattr on reddit for pointing this out.). LGA2011 will also allow me to add more RAM if I ever need to.

  • CPU: Intel Xeon E5-1630v3 – With 4 cores/8 threads at 3.7GHz, this has the highest single core clock speed in this family of Xeons, which is really nice for SMB/CIFS. I had to get it on SuperBiiz because it’s typically only sold to systems integrators, but it was a painless experience despite my initial misgivings.

  • RAM: Samsung M393A2G40DB0-CPB (4x16GB, 64GB total) – This is one of the SuperMicro-recommended RAM models for the X10SRL-F motherboard; all the other models are really hard to find. I went with ECC RAM because scary cosmic radiation (and because the FreeNAS community convinced me ECC was an absolute requirement). See this interesting article for more information on ZFS and the supposed dire need for ECC RAM. The RAM is also registered/buffered because the DIMM size is so large. 64GB is on the high side, but 32GB is the next step down and that would have been cutting it close. When I eventually fill the last 8 HDD bays in my chassis, I shouldn’t have to upgrade my RAM.

  • Host Bus Adapter (HBA): IBM M1015 – These are flashed to IT mode so the FreeNAS OS has direct access to the drives for SMART data, etc. Each card handles 8 drives and I’ve got room for another card if (see: when) I need to populate the last 8 bays in the chassis.

  • Data Drives: Western Digital Red (4TB & 8TB) – I was already running the 8x 4TB drives in my desktop, so I initially built the NAS around 8x 8TB drives, moved the data from the 8x 4TB drives in my desktop to the NAS, then moved the 8x 4TB drives into the NAS and expanded the pool. I bought a spare drive in 4TB and 8TB to have on hand in case of a failure. People like WD Red for NAS, but HGST would have been a good option too. I’ve found that the 8TB WD Red drives tend to run much hotter than the 4TB WD Reds.

  • Power Supply Unit (PSU): SuperMicro PWS-920P-SQ – These are the 920W redundant PSUs for the SC846 chassis and are much quieter than the stock 900W PSUs that came pre-installed in my SC846. I got them new/open box from eBay for $120 each. I guess the “-SQ” stands for “super quiet”? SuperMicro doesn’t state anywhere in any of their documentation or sales material that these are a quieter option; they don’t even give acoustic dB level ratings for their PSUs. I found out about these PSUs after reading user reviews on Amazon and Newegg. Whatever, they’re really quiet despite the lack of supporting marketing material.

  • Backplane: SuperMicro BPN-SAS-846A – The SuperMicro SC846 backplane lineup is very confusing, so a high-level overview of the more common SC846 backplanes might be helpful:

    • BPN-SAS-846EL1 - This is the backplane that came in my server, but it’s listed as end-of-life (EOL) by SuperMicro. It has a built-in SAS expander chip but it isn’t SAS2 capable so the maximum total capacity of the array is limited (to exactly what capacity, I am not sure). In other words, you might be able to populate 24 * 2TB drives and see all 48TB, but if you try 24 * 4TB drives, it might only see 60TB. I have no idea what the actual limitations are; these numbers are purely arbitrary examples.

    • BPN-SAS-846A - This is the backplane I purchased. It’s basically a SAS breakout cable baked into a PCB, so no expander chip to potentially cap the total array capacity. It has 6 mini-SAS ports on the back, each of which directly connects to 4 hard drives.

    • BPN-SAS2-846TQ - This backplane has 24 individual SATA ports directly connected to the 24 drives on the other side of the board. It’s very simple and a decent option, but cabling can be messy. These can also be found at a reasonable price on eBay.

    • BPN-SAS2-846EL1 - This is the SAS2-capable expander-based backplane. This is usually reasonably priced and would have been a good option in my build, but I had a hard time finding one on eBay when I was doing all my purchasing. If it has a maximum total array capacity, it’s large enough that you shouldn’t have to worry about it. With this backplane, you would only need to use one port on a single M1015 card and the backplane would expand that single connection out for all 24 hard drives. However, this would cause a slight bottleneck with most platter drives (you would get ~24Gb/s on the SAS link, so roughly 1Gb/s or 125MB/s per drive). I’ve seen some people on forums claim that you can connect two HBA cards to this backplane to double the total bandwidth to 48Gb/s, but this is not documented anywhere by SuperMicro; they say the other two mini-SAS ports are only for cascading with other storage systems.

    • BPN-SAS2-846EL2 - The same as the above -846EL1, but with a second expander chip to support failing over to a redundant (set of) HBA card(s). These tend to be $600+ on eBay when you can find them.

    • BPN-SAS3-846EL1 & -846EL2 - The same as the above two items, but with a SAS3 capable expander chip (or 2 chips in the case of the -846EL2).

    I’ll also note here that the equally confusing SC846 chassis model numbers are based on the backplane they include and their PSUs. You can double-check this by going to the product page on SuperMicro’s site and clicking “See Parts List”.

  • Boot Device: Intel 540s 120GB SSD – This is the cheapest SATA Intel SSD I could find. People typically use USB drives for their boot device, but for a build like this, the gurus recommend using SSDs for increased system reliability. The controllers on most USB drives are pretty unreliable, so it was a worthwhile upgrade for me.

  • CPU Cooler: Noctua NH-U9DXi4 – This is a basic server-grade CPU cooler from the much-respected Noctua. I was initially nervous about its fit with my motherboard and chassis, but it ended up working out pretty well. While it does provide enough vertical clearance for DIMMs installed in the RAM slots closest to the CPU socket (at least with these Samsung DIMMs), it’s so close that I’ll probably have to remove the cooler to actually perform the installation in those slots. You can sort of see what I mean here (same case exists on the other side); notice the RAM slot just under the edge of the cooler in the pictures here and here.

  • HDD Fans: Noctua NF-F12 iPPC 3000 PWM – The SC846 comes with 3 80mm HDD fans (which are absurdly loud) mounted to a metal “fan wall”. Fortunately, the fan wall is removable and 3x 120mm fans fit perfectly in its place. I zip-tied the 120mm fans together and used tape-down zip tie mounts to secure them to the chassis. I started with Noctua NF-F12 1500 RPM fans, but some of the drives were getting a bit hot under heavy load, so I switched to their 3000 RPM model. I have pictures of the fan wall install process and more information in the section below.

  • Rear Fans: Noctua NF-R8 PWM – As I mentioned above, the stock chassis fans are super loud. These Noctua 80mm fans fit perfectly in their place. I was able to use the hot-swap fan caddies that came with the chassis, but I have it bypassing the hot-swap plug mechanism that SuperMicro built in.

  • Uninterruptible Power Supply (UPS): APC SUA1500RM2U Smart-UPS – I got this from eBay; the chassis is used but the battery is new. The total load capacity is 980W and with the server and all my network gear on it, it sits around 25-30% load. It’s working really well for me, and FreeNAS comes with drivers for it, so I can monitor all sorts of stats and have it shut down the server automatically on longer power outages.

  • SSD Cage: SuperMicro MCP-220-84603-0N – This was $25 shipped on eBay and is probably the best way to secure a pair of 2.5” drives inside the chassis other than double-sided tape or zip ties.

  • Server Rack: LackRack Enterprise Edition – I’m using a Lack Coffee Table from IKEA with some reinforcement on the lower shelf to serve as a rack for the server and UPS. The LackRack is only temporary, but for $25 it’s done remarkably well. I have metal corner braces on each leg to provide extra support to the lower shelf and a 2x4 piece propping up the bottom of the lower shelf in the middle. I have some more notes on the LackRack assembly process in the section below.

  • Misc: The chassis did not come with HDD screws, so I got a baggie from Amazon for a few dollars (make sure the ones you get will fit flush in the drive cages, otherwise you won’t be able to insert your drives). I picked up the SAS cables from Monoprice via Amazon. I got a 3-to-1 PWM fan splitter so I could attach all the HDD fans to the FANA header on the motherboard (more on this below). I also used a ton of zip ties and some tape-down zip-tie mounts to make the cables all nice and tidy.

I’m very happy with the parts selection and I don’t think I would change anything if I had to do it again. I have a few future upgrades in mind, including a proper rack and rails, getting another M1015 and filling the 8 empty HDD bays, and replacing the 4TB drives with 8TB drives, but the current setup will probably hold me for a while.

Build Process

For the most part, the system build was pretty similar to a standard desktop computer build. The only non-standard steps I took were around the HDD fan wall modification, which I discussed briefly in the section above. The stock fan wall removal was pretty easy, but some of the screws securing it are hidden under the hot swap fan caddies, so I had to remove those first. With the fan wall structure out of the way, there were only two minor obstructions left – the chassis intrusion detection switch and a small metal tab near the PSUs that the fan wall screwed in to. The intrusion detection switch was easily removable by a pair of screws and I cut the small metal tab off with a Dremel (but you could probably just bend it out of the way if you wanted to). With those gone, the chassis was ready for my 120mm fan wall, but because the fans would block easy access to the backplane once they’re installed, I waited until the very end of the build to install them.

With the fan wall gone, swapping out the EOL backplane (which came pre-installed in my chassis) for the new version I purchased was pretty easy. Some of the screws are a little tough to access (especially the bottom one closest to the PSUs), but they all came out easily enough with some persistence. There are 6x Molex 4-pin power connectors that plug into the backplane to provide power to all the drives. The SuperMicro backplanes have a ton of jumpers and connectors for stuff like I2C connectors, activity LEDs, and PWM fans, but I didn’t use any of those. Drive activity information is carried over the SAS cable and all my fans are connected directly to the motherboard. If you’re interested, check the backplane manual on the SuperMicro website for more information on all the jumpers and connectors.

After I swapped out the backplane, the motherboard, CPU, RAM, CPU cooler, PSUs, SSDs, and HBA cards all went in like a standard computer build. The only noteworthy thing about this phase of the installation was the orange sticker over the motherboard’s 8 pin power connector that reads “Both 8pins required for heavy load configuration”. It’s noteworthy because there is only one 8 pin power connector on the board... Maybe they meant the 8 pin and 24 pin power connectors? Whatever the case may be, just make sure both the 8 pin power and 24 pin power connectors are attached and you’ll be fine. I also made note of the SAS addresses listed on the back of each of the M1015 cards before installing them. The SAS address is printed on a sticker on the back of the card and should start with “500605B”, then there will be a large blank space followed by 9 alpha-numeric characters interspersed with a couple of dashes. These addresses are needed in the initial system configuration process.

As this was my first server build, I was a little surprised that unlike consumer computer equipment, server equipment doesn’t come with any of the required screws, motherboard standoffs, etc., that I needed to mount everything. Make sure you order some extras or have some on-hand. I ordered a 100-pack of SuperMicro HDD tray screws on Amazon for $6 shipped; I would recommend using these screws over generic ones because if you use screws that don’t sit flush with the HDD sled rails, you’ll have a lot of trouble getting the sled back in the chassis and could even damage the chassis backplane.

As I mentioned above, the CPU cooler I’m using provides enough vertical clearance for the RAM, but I will probably have to remove the cooler to actually get the RAM into the slot if I ever need to add RAM. This isn’t a huge deal as the cooler is really very easy to install. I will note here that the cooler came with 2 different sets of mounting brackets for the LGA2011-v3 narrow ILM system so you can orient the airflow direction either front-to-back or side-to-side (allowing you to rotate the cooler in 90 degree increments). Obviously, for this system, I wanted air flowing in from the HDD side and out the back side, so I had to use the appropriate mounting bracket (or, more accurately, I realized there were two sets of narrow ILM brackets only after I installed the incorrect set on the cooler).

The front panel connector was a little confusing as the non-maskable interrupt (NMI) button header is in the same assembly on the motherboard as all the front panel headers (this header assembly is marked “JF1” on the motherboard and is not very clearly described in the manual). The connectors for all the front panel controls and LEDs are also contained in one single plug with 16 holes and no discernible orientation markings. After studying the diagrams in the motherboard manual, I was able to determine that the NMI button header pins are the 2 pins on this JF1 assembly that are closest to the edge of the motherboard, then (moving inwards) there are 2 blank spots, and then the 16 pins for the rest of the front panel controls and LEDs. The 16 pin front panel connector plugs into these inner 16 pins and should be oriented so the cable exits the 16 pin connector towards the PSU side of the chassis. Basically, if you have the front panel connector plugged into these 16 pins but the power button isn’t working, try flipping the plug around. If you have an NMI button (not included in the stock chassis), it will plug into those last 2 pins closest to the motherboard’s edge. If you don’t have an NMI button, just leave those pins empty.

I also swapped out the rear fans for quieter Noctua 80mm models at this point. The only way to mount them in the chassis is with the hot swap caddies (the chassis isn’t drilled for directly-mounted fans), but the process is pretty straight-forward. The stock fans have very short cables, maybe 1 inch long, because the PWM connectors are mounted onto the side of the caddie so they can mate with the hot-swap plug on the chassis itself when you slide the caddie into its “rail” system. That plug connects to what is essentially a PWM extension cable mounted to the caddie rails which connects the fans to the motherboard’s PWM fan headers. I took out this whole hotswap mechanism because the Noctua fan cables are much longer than the stock fan cables and the Noctua PWM connectors are missing a small notch on the plug that is needed to secure it in the hot swap caddie. It’s tough to describe, but it would be pretty obvious what I mean if you examine the rear fan caddies yourself.

With all the server guts installed and connected, I did some basic cable management and prepared to install my 120mm fan wall. I started by using zip-ties to attach the 3 fans together (obviously ensuring they would all blow in the same direction). The Noctua fans have soft silicone pads in each corner, so vibrations shouldn’t be a big issue if you get the pads lined up right. I put the fan wall in place in the chassis and marked off where the zip tie mounts should be placed with a marker, stuck the mounts on the marks (4 in total on the bottom), and used more zip ties to mount the fan wall in place. With the bottom of the fan wall secured in place, the whole thing is pretty solid, but I added one more zip tie mount to the top of the wall on the PSU side. This sort of wedges the fan wall in place and makes it all feel very secure. Once the fans were secure, I connected them to the 3-to-1 PWM fan splitter, attached that to the FANA header (this is important for the fan control script discussed later), and cleaned up all the cables.

While I’m talking about the HDD fan wall, I’ll also mention here that after running the server for a few days, I noticed some of the drive temperatures were in the low 40s (Celsius), much higher than they should be. The Noctua fans I originally had installed maxed out at 1500 RPMs, but I decided I would be safer with the Noctua iPPC fans that could hit 3000 RPM. I have a fan control script running (more on that below), so they hardly ever need to spin faster than 1500 RPM, but it’s nice to know the cooling is there if I ever need it. In addition to upgrading my original fans, I made a few minor modifications to improve overall cooling efficiency for the whole system:

  1. I used masking tape to cover the ventilation holes on the side of the chassis. These holes are on the hard drive side of the fan wall and are intended to prevent the stock fans from starving, but with lower speed fans they allow air to bypass the hard drives which cuts the total cooling efficiency.

  2. I cut pieces of index cards and used masking tape to block air from flowing through the empty drive bays. The air flow resistance through the empty bays was much lower than it was through the populated bays so most of the air was bypassing the hard drives. You can see a picture of it here.

  3. Air was flowing from the CPU side of the HDD fan wall back over the top of the fans rather than coming through the HDD trays, so I cut a long ~3/4” thick strip of wood to block the space between the top of the fans and the chassis lid. I measured the wood strip to be a very tight fit and zip-tied it to the fans to secure it in place. I even cut out little divots where the zip ties cross the top of the wood strip to be extra cautious. You can see this wood strip in the 3rd and 4th pictures in the section above.

With these simple modifications in place, effective airflow to the drives increased dramatically and HDD temps dropped by ~10C even with fan speeds under 1500 RPM. You can check relative airflow levels to the hard drive bays by holding a piece of paper up in front of the drive bays and observing the suction force from the incoming air. With a heavy workload, the fans sometimes spin up to 2000 RPM for a minute or two, but overall the system is very quiet. The fan control script I’m running is set to spin up when any drive gets above 36C.

The last step in the system build was to get all the hard drives loaded into their sleds and slide them into the chassis. If you aren’t populating all 24 bays in the chassis, be sure to note which mini-SAS ports connect to which bays; this is labeled on the rear of the backplane and in the backplane manual.

With everything built, I could load the server and the UPS into the LackRack. The UPS went on the floor and the server went on the lower shelf. I have all my networking gear on the top shelf along with some other knick-knacks.

Assembly of the LackRack itself was pretty easy, but there were a few minor things worth noting. I picked up some basic metal corner braces from a hardware store for reinforcement of the lower shelf; they’re around 3” long and 3/4” wide and seem to work pretty well. I mounted the braces to the legs of the table and the underside of the lower shelf with basic wood screws. The lower shelf is only ~1/3” thick, so I got very stubby screws for that side of the brace. When measuring how low or high to install the lower shelf, I forgot to make sure to leave enough room for the server to sit in the space and had to re-do part of the installation at a lower height. For a 4U server (like the one I’ve got), you’ll need a smidge over 7”, so the shelf has to go an inch or two lower than the IKEA instructions would have you mount it.

The legs of the table (into which I mounted the braces) are very lightweight; it feels like they’re totally hollow except for a small solid area around the middle where you’re supposed to mount the tiny IKEA-provided braces that come with the table. Don’t over-tighten the screws you put into the legs even a little bit, otherwise you’ll completely shred out the wood and the mount won’t be very secure. In fact, while I was installing one of the braces, I leaned on my screw gun a bit too hard and before I even started to turn the screw, it broke through the outer “wall” of the leg and just went chonk and fell down into place. Not a confidence-inspiring event while building the “rack” that will soon house my ~$5,000 server...

Regardless, with all the corner braces installed, the two shorter ends of the shelf seem pretty sturdy. However, the shelf is so thin that it would have started to sag (and could have possibly broken) with any weight in the middle. With a file server, most of the weight is in the front due to the drives, but I thought it was still a good idea to brace the middle of the shelf from the underside, so I cut a short piece of 2x4 that I could use to prop up the middle of the lower shelf from underneath.

With everything installed and mounted, I was finally ready to power on the system for the first time and move on to the installation and configuration process!

Flashing the M1015 HBA Cards & Installing FreeNAS

I was pretty lucky and my server POST’d on the first try. Before actually installing an OS, I needed to flash the M1015 cards with the IT mode firmware. This article has instructions on that process. The download linked in that article goes down quite a bit, so I’ve rehosted the necessary firmware files here [.zip file]. This file contains 3 DOS applications (sas2flsh.exe, megarec.exe, and dos4gw.exe), the IT firmware image (2118it.bin), the BIOS image file (mptsas2.rom), and 2 empty files for backing up stock card firmware (sbrempty0.bin and sbrempty1.bin; I’m not sure if these two are strictly necessary, but they’re literally empty, so whatever).

I used Rufus to create a bootable DOS USB drive and copied in the files from the above .ZIP archive. Before performing the rest of the process, it is a good idea to visit the controller manufacturer’s website to make sure you’re using the most recent firmware image and BIOS. They change the layout and URL of the official Broadcom website that hosts the firmware, so just search Google for “SAS 9211-8i firmware”, find the downloads section, and open the firmware sub-section. The versions are marked by “phase” numbers; the firmware/BIOS version I included in the above ZIP file is from phase 20 or “P20” as it’s listed on the site. If a more recent version is available, download the MSDOS and Windows ZIP file, find the BIOS image (called mptsas2.rom) and the IT firmware (called 2118it.bin; you do not want the IR firmware called 2118ir.bin) and copy them both onto your bootable USB drive overwriting the files I provided.

With the SAS addresses I wrote down during the build process on hand, I booted from my USB drive into the FreeDOS shell and executed the following from the DOS terminal:


megarec -listall
sas2flsh -listall

These are the two utilities used to wipe and reflash the cards, so it’s important to make sure they can actually see the cards. After that, I backed up and wiped each of the cards:


megarec -writesbr 0 sbrempty0.bin
megarec -writesbr 1 sbrempty1.bin
megarec -cleanflash 0
megarec -cleanflash 1

(Reboot back to USB drive.)

The 0 and 1 in these commands tell megarec which card/controller to back up and then wipe. If you had a different number of cards, you would want to modify this command sequence appropriately. Once I backed up and wiped all the cards, I rebooted the server. When it came online (again in FreeDOS), I could flash the cards with the IT mode firmware using the following commands:


sas2flsh -o -f 2118it.bin -c 0
sas2flsh -o -f 2118it.bin -c 1
sas2flsh -o -sasadd 500605bXXXXXXXXX -c 0
sas2flsh -o -sasadd 500605bXXXXXXXXX -c 1

(Shut down and remove USB drive.)

There are a couple of things to note here. As above, the -c 0 and -c 1 at the end of these commands specify the controller number. If you’re also following the guide I linked above, you may notice that I’ve left out the flag to flash a BIOS (-b mptsas2.rom) in the first set of commands. This is because I don’t need a BIOS on these cards for my purposes; you will need the BIOS if you want to boot from any of the drives attached to the controller (but don’t do that... Either use USB drives or connect your SSDs directly to the motherboard SATA ports). I’ve included the latest BIOS file in the zip just in case someone needs it; just add -b mptsas2.rom to the end of the first (set of) command(s), but again, you really shouldn’t need it. The last thing to note is the SAS addresses in the second set of commands. The XXXXXXXXX part should be replaced with last part of the SAS address of that controller (without the dashes). Make sure the address matches up with the correct card; you can run sas2flsh -listall again to check the PCI addresses if you aren’t sure which controller number maps to which physical card. After these commands, I powered down the server, removed the USB drive, and prepared to install FreeNAS.
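
For reference, if you do want the option ROM on the cards (again, only if you plan to boot from a drive attached to the HBA), the first set of flash commands with -b mptsas2.rom added to the end, exactly as described above, would look like this:

sas2flsh -o -f 2118it.bin -c 0 -b mptsas2.rom
sas2flsh -o -f 2118it.bin -c 1 -b mptsas2.rom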

I downloaded the latest FreeNAS 9.10 ISO from here, used Rufus again to make a bootable USB drive with it, and started the install process by booting off the USB stick. The FreeNAS installation process is very easy. When selecting the boot volume, I checked off both my SSDs and FreeNAS handled the mirroring automatically. After the installation finished, I rebooted the system from the SSDs and the FreeNAS web UI came online a few minutes later.

Initial FreeNAS Configuration

The very first thing I did in the FreeNAS configuration is change the root password and enable SSH. I also created a group and user for myself (leaving the home directory blank to start with) so I didn’t have to do everything as root. If you’re having trouble getting in via SSH, make sure the SSH service is actually enabled; in the web UI, go to Services > Control Services and click the SSH slider to turn the service on.

With SSH access set up, I connected to a terminal session with my new FreeNAS machine and followed this guide on the FreeNAS forums for most of my initial setup, with a few minor modifications. The text in this section is largely based off that guide. My first step was to determine the device names for all the installed disks. You can do this by running:


camcontrol devlist

After determining the device names, I did a short SMART test on each of my drives using:


smartctl -t short /dev/da<#>

Where da<#> is the device name from the camcontrol devlist output. The test only takes a couple minutes and you can view the results (or the ongoing test progress) using:


smartctl -a /dev/da<#>
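
Running the short test on each drive individually gets tedious with this many disks; a quick loop from the FreeNAS shell can kick them all off at once (a sketch; the da0 through da15 device range is an assumption and should match whatever camcontrol devlist reported on your system):

# adjust the range to match your camcontrol devlist output
for i in $(seq 0 15); do smartctl -t short /dev/da$i; done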

After checking that all the SMART tests passed, I created my primary volume. My process was a little non-standard because I moved my 4TB drives into the server after I transferred the data off them, so I’ll go through my process first and discuss the standard process afterwards. However, before diving into that, I want to review how ZFS allocates disk space and how it can be tuned to minimize storage overhead (by as much as 10 percent!). This next section gets pretty technical and if you aren’t interested in it, you can skip it for now.

Calculating & Minimizing ZFS Allocation Overhead

Calculating the disk allocation overhead requires some math and an understanding of how ZFS stripes data across your disks when storing files. Before we get into the math, let’s take a look at how ZFS stores data by discussing two examples:

  1. Storing a very small file, and

  2. Storing a large(r) file.

We’ll start out with the small file. Hard disks themselves have a minimum storage unit called a “sector”. Because a sector is the smallest unit of data a hard disk can write in a single operation, any data written to a disk that is smaller than the sector size will still take up the full sector. (Quick footnote here: it is technically possible for a drive to write a unit of data smaller than its sector size by reading a given sector, modifying only part of its contents, and re-writing it. Obviously this sequence of three operations will be a lot slower than simply writing a full sector’s worth of data.) On older hard drives (pre ~2010), the user data portion of a sector (the part we care about) is typically 512 bytes wide. Newer drives (post ~2011) use 4096-byte sectors (4KiB, or simply 4K). (Another quick footnote on sector sizing: each hard disk sector also has some space for header information, error-correcting code (ECC), etc., so the total sector size is actually 577 bytes on older drives and 4211 bytes on newer drives, but we only care about the portion in each sector set aside for user data; when I refer to a “sector”, I’m referring only to the user data portion of that sector.)

Because the hard disk sector size represents the smallest possible unit of storage on that disk, it is obviously a very important property for ZFS to keep track of. ZFS keeps track of disk sector sizes through the “alignment shift” or ashift parameter. The ashift parameter is calculated as the base 2 logarithm of a hard disk’s sector size and is set per virtual device (“vdev”). ZFS will attempt to automatically detect the sector size of its drives when you create a vdev; you should always double-check that the ashift value is set accurately on your vdev as some hard disks do not properly report their sector size. For a vdev made up of older disks with 512-byte sectors, the ashift value for that vdev will be 9 (2^9 = 512). For a vdev made up of newer disks with 4096-byte sectors, the ashift value for that vdev will be 12 (2^12 = 4096). (Obviously, mixing disks with 512-byte sectors and disks with 4096-byte sectors in a single vdev can cause issues and isn’t recommended; if you set ashift = 9 in a vdev with 4K drives, performance will be greatly degraded as every write will require the read-modify-write operation sequence I mentioned above in order to complete.) It follows then that 2^ashift represents the smallest possible I/O operation that ZFS can make on a given vdev (or at least before we account for redundancy data added on by RAID-Z).
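
As a practical aside, once a pool exists you can verify the ashift value ZFS actually selected for each vdev. A sketch using zdb (the pool name "tank" matches the examples later in this guide; on FreeNAS the pool cache file lives in a non-default location, so you may need to point zdb at it with -U):

zdb -C tank | grep ashift
# on FreeNAS, if zdb can't find the pool configuration:
zdb -U /data/zfs/zpool.cache tank | grep ashift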

Let’s quickly review how data is stored on a “striped” RAID configuration (i.e., RAID 5, RAID 6, RAID-Z, RAID-Z2, and RAID-Z3) before going any further. On these RAID configurations, the data stored on the array will be spread across all the disks that make up that array; this is called “striping” because it writes the data in “stripes” across all the disks in the array. You can visualize this with a 2-dimensional array (like you use in math class); the columns of the array are the individual disks and the rows are the sectors on those disks (a full row of sectors would then be called a “stripe”). When you write data to a striped RAID system, the controller (be it hardware or software) will write that data across the stripes in the array, using one sector per disk (or column). Obviously, when it hits the end of a row in the array, it will loop back around to the first column of the next row and continue writing the data. When we factor in redundancy, we just need to make sure that each stripe in the array has one (or more) sector(s) for parity data; the number of sectors for parity data will depend on the RAID level (one sector for RAID 5 and RAID-Z, two sectors for RAID 6 and RAID-Z2, and three sectors for RAID-Z3). In most striped RAID systems, the parity data is not stored on the same disk(s) in every row otherwise there would be a lot of contention to access that disk (read up on “RAID 4” if you’re curious about this setup). Instead, the parity sectors are staggered, typically in a sort of barber pole fashion, so that when you look at the whole array, each disk has roughly the same number of parity sectors as all the others. Again, this ensures that no one disk is bogged down handling all the parity data for the whole array.

Getting back on track, we were discussing the smallest possible writes one can make to a ZFS array. Small writes will obviously be used for small file sizes (on the order of a couple KiB). The smallest possible write ZFS can make to an array is:

n_min = 1 + p

Where p is the parity level (1 for RAID-Z1, 2 for –Z2, etc.) and the 1 represents the sector for the data itself. So n_min for various RAID-Z configurations will be as follows:

RAID-Z1: n_min = 2

RAID-Z2: n_min = 3

RAID-Z3: n_min = 4

When ZFS writes to an array, it makes sure the total number of sectors it writes is a multiple of this n_min value defined above. ZFS does this to avoid the situation where data gets deleted and you end up with a space on the disk that’s too small to be used (for example, a 2-sector wide space can’t be used by RAID-Z2 because there’s not enough room for even a single data sector and its two parity sectors). Any sectors not filled by user data or parity information are known as “padding”; the data, parity information, and padding make up the “data block”. Padding in ZFS data blocks is one of the forms of allocation overhead we’re going to look at more closely. Study the table below for a better idea of how data block padding can cause storage efficiency loss. Note that this table assumes everything is written to a single stripe; we’ll look at how data is striped and how striping can cause additional overhead in the next section.

Data, Parity, and Padding Sectors with Efficiency (Note: Assumes Single Stripe)

Data Sectors | Parity Sectors (Z1/Z2/Z3) | Padding Sectors (Z1/Z2/Z3) | Total Sectors, Block Size (Z1/Z2/Z3) | Efficiency, Data/Total (Z1/Z2/Z3)
1 | 1/2/3 | 0/0/0 | 2/3/4 | 50.0%/33.3%/25.0%
2 | 1/2/3 | 1/2/3 | 4/6/8 | 50.0%/33.3%/25.0%
3 | 1/2/3 | 0/1/2 | 4/6/8 | 75.0%/50.0%/37.5%
4 | 1/2/3 | 1/0/1 | 6/6/8 | 66.7%/66.7%/50.0%
5 | 1/2/3 | 0/2/0 | 6/9/8 | 83.3%/55.6%/62.5%
6 | 1/2/3 | 1/1/3 | 8/9/12 | 75.0%/66.7%/50.0%
7 | 1/2/3 | 0/0/2 | 8/9/12 | 87.5%/77.8%/58.3%
8 | 1/2/3 | 1/2/1 | 10/12/12 | 80.0%/66.7%/66.7%
9 | 1/2/3 | 0/1/0 | 10/12/12 | 90.0%/75.0%/75.0%
10 | 1/2/3 | 1/0/3 | 12/12/16 | 83.3%/83.3%/62.5%
11 | 1/2/3 | 0/2/2 | 12/15/16 | 91.7%/73.3%/68.8%
12 | 1/2/3 | 1/1/1 | 14/15/16 | 85.7%/80.0%/75.0%
13 | 1/2/3 | 0/0/0 | 14/15/16 | 92.9%/86.7%/81.3%
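
As a worked check of one row in the table above: with 5 data sectors on RAID-Z2 (p = 2, so n_min = 3), ZFS needs 5 data + 2 parity = 7 sectors, which gets rounded up to the next multiple of n_min, giving a 9-sector block with 2 padding sectors and an efficiency of 5/9 ≈ 55.6%.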

If the data you’re writing fits in a single stripe, ZFS will allocate the data block based on the above table, again making sure that the block size is a multiple of n_min. When the data you’re writing doesn’t fit in a single stripe, ZFS simply stripes the data across all the disks in the array making sure that there is an appropriate quantity of parity sectors per stripe. It will still make sure the size of the block (which is now spread across multiple stripes and contains multiple sets of parity sectors) is a multiple of n_min to avoid the situation outlined above. When considering how ZFS stripes its data, it’s important to consider that, unlike a traditional RAID-5/6 array, RAID-Z parity information is associated with each block rather than with each stripe; it is possible to have multiple sets of parity sectors on a given disk stripe. The below figures show (roughly) how ZFS might store several data blocks of varying sizes on a 6-wide and an 8-wide RAID-Z2 array (note that I’ve placed the parity sectors at the start of each data block in this diagram; I’m not sure how ZFS actually chooses where to place these parity sectors in the data block, but their exact location in the stripe doesn’t impact any of the results that follow). Data sectors are preceded by a "D", parity sectors by a "P", and padding sectors are indicated by an "X". Each set of colored squares represents a different ZFS data block.

[Image 6: Examples One and Two]

Examples Three and Four:

[Image 7]
  • Long Test: In the same area, set up a long SMART test to run on all your drives every ~15 days. My long test cron config looks like this:

    [Image 8: long SMART test cron settings]
  • Set up automatic boot and pool scrubs. A “scrub” is ZFS’s self-healing mechanism. By scrubbing your ZFS volumes regularly, you can automatically and transparently fix disk errors that might otherwise cause issues down the road. For more info on scrubs, check here.

    • In Storage > Scrubs click “Add Scrub”. Select your volume, set the threshold days to “35”, and select a schedule for the scrub so it runs every ~15 days. As I mentioned above, it’s best to not have the main pool scrub ever overlap a long SMART test. My pool scrub settings look like this:

    [Image 9: pool scrub settings]
    • In System > Boot, set the “Automatic scrub interval” to 15. Boot pool scrubs are typically pretty quick as the boot disk is small compared to the primary storage volume(s). Note that there is not a way to schedule the boot pool scrubs on specific dates and times through the FreeNAS web UI.

  • Schedule automatic email status reports/backups. This will run a simple script to generate an email report of the scrub and SMART test results. The script I’m using is a combined/condensed version of several scripts posted on the FreeNAS forums. I’m using the SATA version of the script, but if you have SAS drives, you can make it work for them by changing the smartctl statement at the beginning of the SMART status summary section; refer to the FreeNAS post for more information. This script has lots of room for improvement; I’d like to have it leverage HTML and CSS a bit more to get better formatting. I’d also like to combine the config backup and status report emails into a single message.

    • Get the script from here [.sh file] and put it somewhere on your server; I recommend somewhere in your primary login’s home folder.

    • Modify the parameters section at the top, most importantly (examples are in the script):

      
      email="<the gmail address you set up earlier>"
      subject="<subject line for the email report>"
      drives="<all your drives separated by spaces>"
      ...
      (In "Config Backup Parameters" Section)
      config_subject="<backup email subject>"

    • In the FreeNAS web UI, go to Tasks > Cron Jobs and click “Add Cron Job”. Set the user to “root”, put the full path to the script in the command box, and schedule the report so it runs right after your SMART tests and scrubs occur. My cron settings look like this:

  • As I mentioned above, scheduling the SMART tests, scrubs, and email reports relative to each other is important. As an example, here is what my schedule looks like:

    [Image 10: SMART test, scrub, and email report schedule]

    The output should list all the sensors in your machine, the sensor values, units, health, and a set of threshold values. For the fans, the sensor values will be in RPM. The sensor thresholds can be rather cryptic, but they are as follows:

    [Image 11: IPMI sensor threshold values]

    The system uses these thresholds to control fan speed based on system temperatures (which, you may notice, have their own set of thresholds) and the fan speed profile you have set in the IPMI web UI (which you don’t need to mess with as the fan control script in the next step actually sets the required profile value automatically).
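
    The same sensor listing can also be pulled from an SSH session with ipmitool, which is handy when tuning things later; a sketch (the exact fan sensor names, like FAN1-FAN4 and FANA, depend on your motherboard):

    ipmitool sensor list | grep -i fan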

  • Determine the minimum and maximum fan speed spec for all of your system fans, including any that might be mounted on the CPU cooler. These values are typically posted on the fan manufacturer’s website. If it is presented with +/- a certain percentage, subtract that given percentage to calculate the minimum speed and add that given percentage to calculate the maximum speed. For example, on the Noctua NF-F12 iPPC-3000, the minimum speed is listed as 750 RPM +/- 20% and the maximum speed is 3000 RPM +/- 10%. The minimum value I used for these fans is 750-(750*20%) = 600 RPM, and the maximum value I used is 3000+(3000*10%) = 3300 RPM. Use these min and max values as your LNC and UNC, respectively. Note that ipmitool will round threshold values to the nearest hundred (i.e., 550 rounds up to 600, 540 rounds down to 500).

    • Determine the LCR and LNR values by subtracting 100 and 200 from the LNC value, respectively. With those values, run the following command on SSH (pay careful attention to the sequence of numbers; they should GROW in value from left to right):

    
    ipmitool sensor thresh "<sensor name>" lower <lnr> <lcr> <lnc>

    • Determine the UCR and UNR values by adding 100 and 200 to the UNC value, respectively. With those values, run the following command on SSH (again, pay careful attention to the sequence of numbers; just like before, they should GROW in value from left to right):

    
    ipmitool sensor thresh "<sensor name>" upper <unc> <ucr> <unr>

    • Repeat the above steps for all fans in your system. Also note down all the threshold values for all fans as you will need them in the next step.
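
    To make that concrete, here is what those two commands look like with the NF-F12 iPPC-3000 numbers worked out above (LNC 600, LCR 500, LNR 400; UNC 3300, UCR 3400, UNR 3500). The "FANA" sensor name is an assumption based on my HDD fans being on the FANA header; use whatever names ipmitool reports on your board:

    ipmitool sensor thresh "FANA" lower 400 500 600
    ipmitool sensor thresh "FANA" upper 3300 3400 3500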

  • Set up fan control script. This script is also from the FreeNAS forums. This script assumes you have your cooling zones properly set (i.e., all HDD fans connected to the FANA header on your motherboard, all CPU fans connected to the FAN1 - FAN4 headers). This script is pretty slick, but I’ve been thinking of tinkering with it to make the speed changes ramp more gradually.

    • Get the script from here [.pl file] and put it somewhere on your server; I recommend somewhere in your primary login’s home folder.

    • Set the configuration parameters as needed (here is a listing of just the parameters I modified; the others I left at their defaults):

      • Debug: I had to do some initial troubleshooting with my parameters, so setting this to a higher value was useful at first. I run it at debug=1 now.

      • Fan Speeds: Take the UNC value from the section above for both your HDD fans and your CPU fans and enter them here.

      • CPU/HD Fan Duty Levels: My Noctua HDD fans can spin down to ~25% of max speed without stalling, so I changed duty_low to 25. Do the math on your fans and adjust as needed.

    • Test it by running the following command:

      
      screen ./hybrid_fan_controller.pl

      Watch the output and listen to the system fan speeds. It may take a minute or so to fall into the correct rhythm. If it doesn’t seem like it’s working, check the script settings and the fan thresholds in ipmitool. While troubleshooting, you might also find this simple script [.sh file] helpful. It's from the FreeNAS forums link in the previous section and it outputs all CPU and HDD temps (see also the quick temperature spot-check commands after this list).

    • If it’s working, create a new script file named “start_fanctrl” and paste the following into it (making sure to edit the directories in the last line):

      
      #!/bin/bash
      echo "Starting Hybrid Fan Controller..."
      /<path to fan ctrl script>/hybrid_fan_controller.pl &>> /<path to fan control log>/fan_control.log &

    • Once that script is saved, go into the FreeNAS web UI and set it to run on startup. Go to Tasks > Init/Shutdown Scripts, click “Add Init/Shutdown Script”, set it to “Command”, enter the path to your start_fanctrl script (-NOT- to the larger perl script), and select “Post Init” for when.
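
    While tuning the script, you can also spot-check temperatures by hand; these are rough equivalents of what the helper script reports (a sketch: da0 is just an example device, and the CPU reading assumes the coretemp driver is loaded):

      sysctl dev.cpu.0.temperature
      smartctl -a /dev/da0 | grep -i temperature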

    Setting Up SMB Sharing

    With all the administrative and monitoring settings in place, I could move on to setting up some shares. This section will focus on SMB/CIFS-based shares because that’s what I use, but FreeNAS offers a wide variety of network file sharing protocols. On the subject of SMB/CIFS, Microsoft summarizes the common question “how are SMB and CIFS different” as follows: “The Server Message Block (SMB) Protocol is a network file sharing protocol, and as implemented in Microsoft Windows is known as Microsoft SMB Protocol. The set of message packets that defines a particular version of the protocol is called a dialect. The Common Internet File System (CIFS) Protocol is a dialect of SMB. Both SMB and CIFS are also available on VMS, several versions of Unix, and other operating systems.” The full article text is here. Samba also comes up a lot, which is an open-source *nix SMB server. It can do some other stuff too (related to Active Directory), but the Samba software isn’t really necessary as FreeNAS has built-in support for several SMB “dialects” or versions (including CIFS).

    Getting network file sharing fully configured can be a pain, mostly due to permissions configuration. Because I only work with SMB shares, I do all my permissions management from my primary Windows 10 machine. The Windows machines in my environment (all on Win10) connect over SMB protocol version 3.1.1 (listed as SMB3_11 in smbstatus); the *nix and OS X machines in my environment connect on SMB protocol version NT1. I’ll provide some basic examples from my configuration, but SMB sharing can get very tricky very fast. If you get too complicated, it will become a giant pain a lot faster than it’s worth, so be forewarned. If you find yourself at that point, take a step back and think through possibly simpler ways to accomplish your goal.

    • Start by enabling the SMB service in the FreeNAS web UI (Services > Control Services).

    • Click the wrench icon to access the SMB service settings. Most of the default values are fine here, but I set a NetBIOS name (you probably want to use your FreeNAS hostname) and Workgroup (note the NetBIOS name and Workgroup can not be set to the same value). You may want to set aux parameters here or on the individual shares; I set them on the individual shares so I will cover them below. Read the Services > SMB section in the user guide before changing any of the other parameters on this screen.

    • Once the SMB service is enabled and configured, go to Sharing > Windows (SMB) Shares > Add Windows (SMB) Shares. Again, most of the default values are fine here for most applications. Here are the settings I changed:

      • Path: Enter the path to the dataset you created above (/mnt/tank/britlib in my case).

      • Name: I used my dataset name (britlib) for my share name.

      • Apply Default Permissions: This goes through the share path and recursively sets default permissions. It’s useful to leave this checked for the initial setup, but if you come back in to change any share settings, uncheck it so it doesn’t mess up any permissions changes you made.

        • Export Recycle Bin: I checked this so files I delete from the share are moved to a hidden .recycle directory in the root folder of the share (/mnt/tank/britlib/.recycle). Go in via SSH and delete this .recycle directory from time to time to free up disk space. If you leave this box unchecked, deleted files will be permanently removed immediately.

        • Auxiliary Parameters: I have a few of these set on my primary share. Here’s what I use (each parameter is explained below; see the smb.conf man page for more info):

        
        veto files = /._*/.DS_Store/Thumbs.db/desktop.ini
        delete veto files = yes
        hide dot files = yes
        ea support = no
        store dos attributes = no
        map archive = no
        map hidden = no
        map readonly = no
        map system = no

        • veto files : This is a forward slash (/) separated list of files and directories that you would like to set as neither visible nor accessible. You can use * and ? as wildcards. In this case, I veto any files starting with ._ and files named .DS_Store, Thumbs.db, and desktop.ini. You can adjust your list as needed.

        • delete veto files : Allows you to delete directories with vetoed files in them when you set this to ‘yes’.

        • hide dot files : This controls “...whether files starting with a dot appear as hidden files.”

        • ea support : “...Controls whether smbd(8) will allow clients to attempt to store OS/2 style Extended attributes on a share.” This is typically only enabled to support legacy systems, so I have it disabled. See below for more details.

        • store dos attributes : When this is set, “...DOS attributes will be stored onto an extended attribute in the UNIX filesystem, associated with the file or directory.” As above, typically only enabled to support legacy systems.

        • map archive : “This controls whether the DOS archive attribute should be mapped to the UNIX owner execute bit.”

        • map hidden : “This controls whether DOS style hidden files should be mapped to the UNIX world execute bit.”

        • map readonly : “This controls how the DOS read only attribute should be mapped from a UNIX filesystem.”

        • map system : “This controls whether DOS style system files should be mapped to the UNIX group execute bit.”

    The last 6 settings come from the FreeNAS forums post here and are all set to ‘no’ with the goal of speeding up SMB access (specifically, while browsing directories). The first two are ‘no’ by default, but I have them set explicitly. If you have legacy devices or applications that need to access your SMB shares, you may need to set these to ‘yes’, but doing so could cause a performance penalty. Setting all of these parameters to ‘no’ will prevent SMB from using extended attributes (EAs), tell SMB not to store the DOS attributes (any existing bits that are set are simply abandoned in place in the EAs), and will cause the four DOS parameter bits to be ignored by ZFS.

    Once I created the SMB share, I was able to mount it on another machine. In the following examples, I’ll show how to mount the share and manage permissions from a Windows 10 machine. To mount the share on Windows, open a new “My PC” window and click “Map network drive”. Select a drive letter and set the folder as

    
    \\<server hostname or ip>\<smb share name>

    Check “Reconnect at sign-in” if you want the share to be automatically mounted. You’ll likely also need to check “Connect using different credentials”. Once you hit “Finish” (assuming you checked “Connect using different credentials”), you’ll be prompted for connection credentials. Click “More choices”, “Use a different account”, set the username as \\<server hostname>\<the username you made in FreeNAS>, and enter your password. This will let you connect to the share with the credentials you created in FreeNAS rather than credentials stored on the Windows machine. If the username and password combination are exactly the same on FreeNAS and your Windows machine, sometimes you can get away with leaving the domain specification (the \\<server hostname>\ part) out of the username string, but it’s always best to be explicit.
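
    If you prefer to script the mapping rather than use the GUI, the rough equivalent from a Windows command prompt is shown below (a sketch, not part of the original write-up; it will prompt for the FreeNAS user's password):

    net use Z: \\<server hostname or ip>\<smb share name> /user:<server hostname>\<freenas username> /persistent:yes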

    Adjusting Permissions

    With the share mounted, I could finally move some files in. As I mentioned before, everything that follows will be fairly specific to Windows 10, but you should be able to apply the same process to any modern Windows version. Once you have some data copied over, you can start adjusting the permissions on that data. Open the properties window for a directory and select the Security tab. The system will display a list of groups and user names and each of their respective permissions for this given directory (it may take a second to resolve the group IDs; be patient). You can adjust basic permissions by clicking the “Edit” button; a new window will pop up and you’ll be able to adjust or remove the permissions for each group or user and add new permission definitions. You may notice that the default set of permissions aren’t editable here; this is because they’re inherited from the parent folder (if the folder you’re looking at is in the share’s root directory, its permissions are inherited from the share itself; to adjust those permissions, open the properties window for the mounted share from the “My Computer” window and adjust its settings in the Security tab).

    To adjust permissions inheritance settings for a file or folder (collectively referred to as an “object”), click the Advanced button in the Security tab of the object’s properties window. In this new window (referred to as the “Advanced Security Settings” window) you can see where each entry on the permission list (or “Access Control List”, ACL) is inherited from or if it is defined for that specific object. If you want to disable inheritance for a given folder, you can do so by clicking the “Disable inheritance” button on this window; you’ll then be able to define a unique set of permissions for that object that might be totally different from its parent object permissions. You can also control the permissions for all of this object’s children by clicking the check box “Replace all child object permissions...” at the bottom of the window. We’ll go through the process of adding a read/execute-only ACL entry for the services group to a given folder.

    Open the Advanced Security Settings window for the folder you would like to allow the services group to access (Read/Execute only), click the Add button, click Select a principal at the top of the window (“principal” means user or group), type in services (or whatever user or group you want) and click Check Names. It should find the services group and resolve the entry (if it doesn’t, make sure you’ve actually added a services group in the FreeNAS web UI settings). You can adjust the “Type” and “Applies to” parameters if you like (each option is pretty self-explanatory), but I’m going to assume you’ve left them as the default values. Click “Show advanced permissions” on the right side of the window to view a full list of the very granular permissions that Windows offers. Each of these permissions options are also pretty self-explanatory, and most of the time you can get away with using just basic permissions (meaning you don’t click this “Show advanced permissions” button). For read/execute only, you’ll want to select the following advanced permissions:

    • Traverse folder / execute file

    • List folder / read data

    • Read attributes

    • Read extended attributes

    • Read permissions

    If you click “Show basic permissions”, you will be able to see that this set of selections will translate to:

    • Read & execute

    • List folder contents

    • Read

    You can leave the “Only apply these permissions...” check box unchecked. Go ahead and hit OK to be brought back to the Advanced Security Settings window where you’ll see your new ACL entry added to the list. It’s probably a good idea to check the “Replace all child object permission entries...” box to make sure everything within this folder gets the same set of permissions, but that’s obviously your choice. If you want to add or adjust other permissions, go ahead and do that now. When you’re happy with the settings, hit OK on the Advanced Security Settings window, hit OK on the folder properties window, and wait for it to go through and apply all the permission changes you just made. With the services group granted read/execute access to this folder, you should now be able to connect to it from another device (like a VM, as shown below) via any user in the services group. Once I had all my data moved into my SMB share, I went through and adjusted the permissions as needed by repeating the steps I outlined above.
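    As an aside, the same kind of grant can be scripted with Windows’ built-in icacls tool instead of clicking through the GUI. The following is just a sketch assuming the share is mapped to Z: and the target folder is named “data”; the group name is the services group created in FreeNAS:

    icacls "Z:\data" /grant services:(OI)(CI)RX /T

    Here, (OI)(CI) makes the entry inherit to child files and folders, RX maps to Read & execute, and /T applies the change to existing children, similar to checking the “Replace all child object permission entries...” box.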

    I tend to prefer the Advanced Security Settings window (as opposed to the window you get when you hit the “Edit...” button in the Security tab) so I can make sure the settings are applied to all child objects, and the Advanced Security Settings window really isn’t any more difficult to use than the standard settings window. For more info on how to set up SMB share permissions, watch these videos in the FreeNAS resources section.

    One final note here before moving on: if you want to grant a user permissions (whether it be read, execute, or write) to access some file or folder deep in your share’s directory structure, that user will also need at least read permissions for every parent folder in that structure. For example, if you want to grant the user “www” permission to access the directory “httpd” at location //SERVER/share/services/hosting/apache24/httpd, the user “www” will need to have read permission for:

    • //SERVER/share

    • //SERVER/share/services

    • //SERVER/share/services/hosting

    • //SERVER/share/services/hosting/apache24

    ...or else that user won’t be able to access the “httpd” folder. In this scenario, you can see how useful automatic inheritance configuration can be.

    Performing Initial bhyve Configuration

    Running virtual machines on a storage system is kind of a controversial subject (as you’ll quickly discover if you ask anything about running a bhyve in #freenas or the forums). In a business environment, it’s probably a good idea to have a dedicated VM host machine, but for personal use, I don’t see it as a huge risk. The VM manager (also called a “hypervisor”) I use is called bhyve (pronounced “beehive”, super-clever developers...). More information on bhyve can be found here. It’s native on FreeNAS 9.10 and setting it up and managing bhyve VMs (simply called “bhyves”) is very easy. There’s a great video on the basics of bhyve setup here (from which I am going to shamelessly copy the following steps).

    Before we get started, make sure you know the name of your pool (called “tank” if you’re following this guide verbatim) and the name of your primary network interface (which you can find by going to the web UI and looking at Network > Network Summary; mine is igb0, highlighted in yellow below).

    Image (Asset 12/17) alt=

    A screenshot of the Network > Network Summary page showing the primary network interface (igb0) highlighted in yellow.

    With the pool name and interface name in hand, you can initialize iohyve and then manage guests with a fairly small set of commands; a quick reference follows.
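    As a sketch of the one-time initialization step, based on iohyve’s documented setup syntax and assuming the pool is named tank and the interface is igb0 as in this guide (adjust both to your system):

    iohyve setup pool=tank kmod=1 net=igb0    # kmod=1 loads the required kernel modules; net= sets the NIC iohyve bridges guests to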
  • Creating bhyve VMs:

    • iohyve create <name> <disk size> : Create a new bhyve. For disk size, you can use the M or G suffix (i.e., 4096M or 4G).

    • iohyve set <name> <property>=<value> [<property2>=<value> ...] : Set property values for a bhyve, like the number of CPUs it can use, or amount of RAM it can access; can set multiple properties in one command.

    • iohyve fetchiso <URL> : Download an ISO for bhyve installations.

    • iohyve isolist : List all ISOs you can use for creating bhyve installations.

    • iohyve install <name> <iso> : Install <iso> on selected bhyve; for some installations, console will appear to hang; you will have to open a new SSH session and use the console command (see below).

    • iohyve console <name> : Connect to the console of selected bhyve, sometimes required for new VM OS installation.


  • Manage bhyves:

    • iohyve start <name> : Boot up selected bhyve.

    • iohyve stop <name> : Shut down selected bhyve.

    • iohyve delete [-f] <name> : Delete selected bhyve; add -f flag to force the operation.

    • iohyve forcekill <name> : Force shut down selected bhyve; useful if it’s stuck on the bootloader.


  • Frequently-used bhyve Properties (set with set command, view with get command):

    • name=<name> : Name of bhyve VM.

    • boot=[0|1] : Start bhyve on host system reboot?

    • cpu=<# of threads> : Number of CPU threads bhyve can access.

    • ram=<amount of memory> : Amount of memory bhyve can access; you can use the M or G suffix (i.e., 4096M or 4G).

    • loader=<boot loader> : Boot loader to use; example will use grub-bhyve.

    • os=<os name> : Name of OS to be used; example will use debian.

    • description=<text> : bhyve description; optional.

  • I’ll go through a basic example of installing a Debian bhyve guest (or VM; I’ll use the terms “bhyve”, “guest”, and “VM” interchangeably in this section) and mounting shares from your NAS on the VM so it can access data. The first thing you will want to do is download an ISO with the iohyve fetchiso command. Note that this is the only (simple) way to get a given ISO onto the system for installing an OS. I use the Debian amd64 network installation ISO, which you can find here. Don’t download the ISO in your browser, but rather copy the URL for that ISO (for amd64 Debian 8.7.1, it’s at this link) and run the following command on your FreeNAS machine as root:

    
    iohyve fetchiso <paste URL to ISO>
    	

    Wait for it to download the ISO file from the provided link. When it’s done, we can create the VM. For this example, I’m going to create a guest named “acd” (Amazon Cloud Drive) which we’ll use later to set up rclone for full system data backups. I’ll give it 5GB of disk space, 2 CPU threads, and 2GB of RAM. You can change the name, CPU threads, or RAM values later, but note that changing the disk space of the guest later on can cause issues (even though there is an iohyve command for it; check the iohyve man page). When you’re ready, run the following commands:

    
    iohyve create acd 5G
    iohyve set acd ram=2G cpu=2 os=debian loader=grub-bhyve boot=1
    	

    The first command will create a new bhyve guest called acd with a 5GB disk. The second command will set the listed properties for that bhyve (2GB RAM, 2 CPU threads, debian-based OS, GRUB bootloader, auto-boot enabled). The next step is to install the Debian ISO on this bhyve guest. Get the name of the ISO file by running the following:

    
    iohyve isolist
    	

    Copy the name of the listed Debian iso, then run the following:

    
    iohyve install acd <paste ISO name>
    	

    The console will appear to hang, but don’t panic! As the terminal output message will tell you, GRUB can’t run in the background, so you need to open a second SSH session with your FreeNAS machine. Once you’re in (again) and have root, run the following command in your second terminal session to connect to the acd console:

    
    iohyve console acd
    	

    This will drop you into the console for your new VM (you may have to hit Enter a few times) where you can go through the Debian installation. Follow the instructions, selecting a root password, new user (for this one, I’d suggest “acd”), and hostname when prompted. Make sure that when you get to the package selection screen, you unselect all desktop environment options and select the SSH server option. Other than that, the Debian installation process is pretty easy. When you’ve finished, the VM will shut itself down and you can close out of this second SSH window. If you ever have to use iohyve console for other purposes, you can exit it by typing ~~. or ~ Ctrl+D.

    Back in your first SSH session, the terminal should be responsive again (you may have a few errors saying stuff about keyboard and mouse input but you can safely ignore those). Run the following command to start the bhyve VM back up:

    
    iohyve start acd
    	

    While you’re waiting for it to boot back up, take a moment to create a new user in the FreeNAS web UI (Account > Users > Add User). Give it whatever user ID you want, but make sure the username and password are exactly the same as the user you created on your bhyve VM. I would also suggest unchecking “Create a new primary group for the user” and selecting the “services” group you created above as this user’s primary group.

    By now, the bhyve VM should be fully booted (typically it only takes 15-30 seconds), so SSH into this new VM with the non-root user account you created; you may need to look at your router’s DHCP tables to figure out its assigned IP address. Once you’re in, you’ll want to run su to get root, then update software through apt-get or aptitude and install any standard programs you like (such as sudo, htop, ntp, and whatever else you might need). Once sudo is installed and configured (if needed, use Google for help), exit back out to your primary user. The next step will be to mount your SMB share from FreeNAS on your bhyve VM. Most of the following steps are based on this guide from the Ubuntu wiki.
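    As a rough sketch of that housekeeping (standard Debian package names; “acd” is the username created during the Debian install):

    su -
    apt-get update && apt-get upgrade
    apt-get install sudo htop ntp
    adduser acd sudo    # allow the acd user to use sudo
    exit

    You may need to log out of the VM and back in before the new sudo group membership takes effect.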

    The first thing you need to do is install the cifs-utils package by running:

    
    sudo apt-get install cifs-utils
    	

    I usually mount my shares in the /media directory, so go ahead and create a new directory for your share (I’ll use mountdir in this example, but you can call it whatever you want):

    
    sudo mkdir /media/mountdir
    	

    Next, you will want to create a text file with the login credentials for your VM user. Run the following command to create and open a new text file in your user’s home directory (I use nano here, but use whatever editor you like):

    
    nano ~/.smbcredentials
    	

    In this file, you will want to enter the following two lines of text. I’ll use “acd” as the username and “hunter2” as the password for the example, but obviously change the text in your credentials file. Make sure it’s formatted exactly as shown; no spaces before or after the equal signs:

    
    username=acd
    password=hunter2
    	

    Save and exit (for nano, Ctrl+O to save, then Ctrl+X to exit) then change the permissions on this file:

    
    chmod 600 ~/.smbcredentials
    	

    Next, you’ll want to run the following command to edit the fstab file (the file system table) on your bhyve with root privileges:

    
    sudo nano /etc/fstab
    	

    Add the following line at the bottom of the file, making sure to replace the <server name>, <share name>, and <user name> placeholders with the appropriate values for your system (obviously, leave out the <>; if you named your mount point in the /media directory something different, make sure to change that, too):

    
    //<server name>/<share name>	/media/mountdir	cifs	uid=<user name>,credentials=/home/<user name>/.smbcredentials,iocharset=utf8,sec=ntlm	0	0
    	

    Sorry about the super-long, table-breaking statement... there's probably a way to split the above into two shorter lines, but whatever... Save and exit (for nano, Ctrl+O to save, then Ctrl+X to exit). Once you’re back at the command line, run the following to attempt to mount the share:

    
    sudo mount -a
    	

    If it goes through, try to access the share and list its contents:

    
    cd /media/mountdir
    ls
    	

    If it prints out the contents of your share, you’re all set! If it throws an error, check the permissions on your share, check that the credentials are entered correctly on FreeNAS and in the ~/.smbcredentials file, and check that the VM can resolve the server name to the correct IP (if not, you may have to enter the IP in the mount string you wrote in the fstab file). Mounting shares and getting their permissions set up right can be extremely finicky, so anticipate at least a few issues here.
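    If the fstab entry refuses to cooperate, it can be easier to debug with a one-off mount command where you can swap individual options in and out; the IP address, username, and paths below are just the example values from this section, so substitute your own:

    sudo mount -t cifs //192.168.1.100/share /media/mountdir \
        -o credentials=/home/acd/.smbcredentials,iocharset=utf8,sec=ntlm,uid=acd
    dmesg | tail    # kernel messages often explain a failed CIFS mount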

    You can mount more than one share (or multiple points from a single share) by entering more than one line in the fstab. For example, if you wanted to mount //SERVER/share/photos and //SERVER/share/documents, you would enter both those lines in /etc/fstab:

    
    //SERVER/share/photos       /media/photos       cifs    uid=user,credentials=/home/user/.smbcredentials,iocharset=utf8,sec=ntlm 0   0
    //SERVER/share/documents    /media/documents    cifs    uid=user,credentials=/home/user/.smbcredentials,iocharset=utf8,sec=ntlm 0   0
    	

    Remember to create the /media/photos and /media/documents directories beforehand (otherwise you’ll get an error when you run the mount -a command).
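    For example, for the two mounts above:

    sudo mkdir -p /media/photos /media/documents
    sudo mount -a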

    Once the share is mounted, you’ll be able to access it in the bhyve’s file system as normal. If your user only has read permissions, you’ll obviously get an error if you attempt to modify anything.


    Configuring rclone in a bhyve

    The last topic I want to cover is the installation and configuration of rclone, which will help keep your data backed up in an Amazon Cloud Drive (ACD). rclone also allows you to encrypt all your data before it’s sent to ACD, so you don’t have to worry about Amazon or the Stasi snooping in on your stuff. ACD is a paid $60/yr service through Amazon.com that offers unlimited data storage, and unlike services like Backblaze and CrashPlan, you can get great upload and download speeds to and from their backup servers. rclone is a program that will connect with your ACD instance (via Amazon-provided APIs), encrypt all your data, and synchronize it with the backup servers. rclone is still in active development, so it can be a bit finicky at times, but hopefully this guide will help you get through all that.

    Before we dive in, a quick word on the backup services market. If you don’t want to pay $60/yr, I would understand, but I would still strongly recommend some sort of backup mechanism for your data. For larger amounts of data, services that charge per GB can get very expensive very quickly, so I would recommend a service with unlimited storage. Other than ACD, the two best options are Backblaze and CrashPlan, both of which I used for at least several months (CrashPlan for a couple years). My primary issue with Backblaze was the upload speed; even after working with their support team, I was only able to get upload speeds of 50-100KB/s. If I only wanted to back up my most important ~2TB of data, at 100KB/s it would take nearly a year to get everything copied to their servers. I also used CrashPlan for about 2 years before building my NAS. The upload speeds were slightly faster than Backblaze (I was able to get ~1MB/s), but still not great. My biggest issue is the backup client’s huge memory consumption. The Java-based CrashPlan client consumes 1GB of RAM per 1TB of data you need to back up, and this memory is fully committed while the client is running. For a large backup size, this is obviously unacceptable. The client itself is also a bit finicky. For example, if you want to back up more than 1TB, you have to manually increase the amount of memory the client can use by accessing a hidden command line interface in the GUI. The final nail in the coffin of CrashPlan and Backblaze (at least for me) is the fact that they are both significantly more expensive than ACD. ACD is not without its issues, as we’ll see in the subsequent sections, but it seems to be the best of all the not-so-great options (granted, at a few dollars a month for unlimited data storage, expectations can’t be all that high).

    Of course the first thing you’ll need to do is sign up for ACD, which you can do here. You get 3 months free when you sign up, so you have plenty of time to make sure the service will work for you. (Note that the Prime Photos service is not what you’re looking for; that only works for pictures.) Don’t worry about downloading the Amazon-provided sync client as we will be using rclone as our sync client. The instructions for setting up rclone are based on a guide (originally posted on reddit) which can be found here.

    Start by SSHing into the bhyve VM you created in the previous step. You’ll want to make sure sudo and ntp are installed and configured. Run the following commands to download (via wget) rclone, unpack it, copy it to the correct location, change its permissions, and install its man pages:

    
    wget http://downloads.rclone.org/rclone-current-linux-amd64.zip
    unzip rclone-current-linux-amd64.zip
    cd rclone-*-linux-amd64
    
    sudo cp rclone /usr/sbin/
    sudo chown root:root /usr/sbin/rclone
    sudo chmod 755 /usr/sbin/rclone
    
    sudo mkdir -p /usr/local/share/man/man1
    sudo cp rclone.1 /usr/local/share/man/man1/
    sudo mandb
    	

    The official rclone documentation recommends placing the rclone binary in /usr/sbin, but by default, the /usr/sbin directory isn’t in non-root users’ path variable (meaning a normal user can’t just run the command rclone and get a result; you would have to run either sudo rclone or /usr/sbin/rclone; more information on /usr/sbin here). You can either choose to run rclone as root (sudo rclone or su then rclone), type out the full path to the binary (/usr/sbin/rclone), or add /usr/sbin to your user’s path variable. I got tired of typing out the full path and didn’t want to have rclone running as root, so I added it to my path variable. You can do this by editing the ~/.profile file and adding the following line to the end:

    
    export PATH=$PATH:/usr/sbin
    	

    This probably isn’t within the set of Linux best practices, but this user’s sole purpose is to run rclone, so I don’t see a huge issue with it.
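    After re-reading the profile (or logging out and back in), you can quickly confirm the shell finds the binary:

    source ~/.profile
    which rclone      # should print /usr/sbin/rclone
    rclone version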

    The next step requires you to authorize rclone to access your ACD via OAuth. OAuth requires a web browser, thus we’ll need to install a desktop environment (I use Xfce because it’s pretty lightweight), a browser, and a VNC server so we can access the desktop from a remote machine. For this part, I followed the guide here.

    Start by installing xfce4, some plugins for xfce4, tightvncserver, and iceweasel (which is a rebranded version of Firefox):

    
    sudo apt-get install xfce4 xfce4-goodies gnome-icon-theme tightvncserver iceweasel
    	

    Since we’re only using the VNC server once, you don’t really need to create a dedicated VNC user to run the server (per the guide linked above). Instead, simply run the following to start configuring the VNC server:

    
    vncserver
    	

    You’ll be prompted to create a connection password. Skip the optional read-only password. Once the server is started, you’ll get a notification in the terminal. On your Windows machine, if you don’t have a VNC client, go ahead and install one (I like TightVNC; you only need to install the “Viewer”, which you can download here). Open your VNC viewer, connect to your bhyve’s IP on port 5901, and enter the password you created a moment ago. You should connect and see a very spartan desktop environment. Find the iceweasel web browser you installed and make sure you can access the Internet with it. Note the DigitalOcean guide I linked to above goes on to create a service for automatically starting the VNC server; this isn’t necessary for us as we’ll only be using it this one time.

    Once you’ve checked that the browser is working in your VNC session, open up a terminal window on the VNC desktop (not through SSH, do it in the VNC session). In the terminal, run the following command to start up the rclone configuration process:

    
    rclone config
    	

    You should see the rclone configuration menu. Press n to create a new remote and name it; I named mine acd, which is what I’ll use in this guide. On the provider selection section, choose Amazon Drive (which should be number 1 on the list). You can leave client_id and client_secret blank. When prompted to use the auto config, say yes. Per the instructions that pop up on the terminal, if the browser in your VNC session doesn’t open, go to the URL it gives you to complete the authorization. You’ll be prompted for your Amazon login credentials, then asked if you want to trust the rclone application (say “yes”). Once you’ve finished everything in the web browser, the terminal should advance automatically to the next section. If everything looks ok, enter y to confirm and you’ll be brought back to the main rclone config menu, where you can type q to quit. You can go ahead and disconnect from the VNC session at this point.
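    Since the VNC server was only needed for this one authorization step, you can also shut it down once you’ve disconnected (assuming it’s running on display :1, the default for the first instance):

    vncserver -kill :1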

    The process for setting up encryption for your ACD remote connection is a little counter-intuitive, but bear with me; this is the official (and only) way to do it. It will initially appear that you’re creating a second remote connection, but that’s just the process for configuring encryption on top of an existing remote connection.

    Back in the SSH session with your acd bhyve, run rclone config again. At the menu, type n to create a new remote, and name this new remote something different than your previous remote. For this example, I’ll use acdCrypt as the name for the encrypted version of the acd remote. On the provider selection screen, pick Encrypt/Decrypt a remote (which should be number 5). You’ll be prompted to enter the name of the remote you want to encrypt; if you named your previous remote “acd”, then just enter acd: (include the colon on the end). When prompted to choose how to encrypt the filenames, enter 2 to select “Standard”. You’ll then be prompted to pick a password for encryption and another password for the salt. I recommend letting rclone generate a 1024 bit password for both items; just make sure to copy both of them somewhere safe (I copied them to a text file on my desktop, archived the text file in a password-protected RAR archive, and uploaded the RAR file to my Google Drive). After you’re done with the passwords, enter y to confirm your settings and then q to exit the rclone configuration menu.
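    Before wiring rclone into a service, it’s worth a quick sanity check that both remotes behave; something like the following should do (the test directory and file names are arbitrary):

    mkdir /tmp/rclone-test
    echo "hello" > /tmp/rclone-test/hello.txt
    rclone copy /tmp/rclone-test acdCrypt:test
    rclone ls acdCrypt:test     # should show hello.txt
    rclone ls acd:              # the same file will appear with an encrypted name
    rclone purge acdCrypt:test  # clean up the test directory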

    rclone should now be configured and ready to use, but before you start your first backup, it’s a good idea to configure rclone to run as a service so it automatically starts up on boot. We’ll do this by creating a systemd unit file for rclone. The guide I followed for this process can be found here.

    Before we create the service itself, run the following to create an empty text file in your user’s home directory (which we’ll need later on):

    
    touch ~/acd_exclude
    	

    Now create the service file itself and set its permissions:

    
    sudo touch /etc/systemd/system/acd-backup.service
    sudo chmod 664 /etc/systemd/system/acd-backup.service
    	

    Open this new service file in a text editor, paste the following text into the file, then save and exit (Ctrl+O, Ctrl+X in nano; be sure to edit your share’s mount directory and the path for the log file):

    
    [Unit]
    Description=rclone ACD data backup
    After=network.target
    
    [Service]
    Type=simple
    User=acd
    ExecStartPre=/bin/sleep 10
    ExecStart=/usr/sbin/rclone sync /media/mountdir acdCrypt: \
    	--exclude-from /home/acd/acd_exclude \
    	--transfers=3 \
    	--size-only \
    	--low-level-retries 10 \
    	--retries 5 \
    	--bwlimit "08:30,10M 00:30,off" \
    	--acd-upload-wait-per-gb 5m \
    	--log-file <path to log file> \
    	--log-level INFO
    
    [Install]
    WantedBy=multi-user.target
    	

    You’ll likely want to tune the parameters called with rclone for your own application, but this should be a good starting point for most people. Full documentation on all commands and parameters is available on the rclone website here. Here is a quick explanation of each parameter I set above (note that the \ characters allow the lengthy command string to span multiple lines):

    • sync <source> <destination> : Tells rclone to sync from the <source> (in our case, the share we mounted earlier) to the <destination> (the encrypted remote we set up). Sync will delete files in the destination that are removed from the source; you can use copy here instead if you don’t want it to do that.

    • exclude-from <text file> : Excludes files and paths matching the entries in this file, see here for details.

    • transfers <#> : Number of simultaneous uploads to run, default is 4.

    • size-only : rclone normally verifies transfers by checking both the file modification date and the file size, but ACD does not have an API call that allows rclone to get the modification date. This flag explicitly instructs rclone to only use the file size information. Without this flag, rclone still detects it’s working with ACD and doesn’t attempt to pull file modification dates, so it isn’t strictly necessary. (I originally had rclone set to use --checksum, but this doesn’t work with an encrypted remote. Thanks /u/martins_m from reddit for catching this mistake!)

    • low-level-retries <#> : ACD transfers tend to crap out pretty often for no apparent reason, so this (and the retries flag below) tells rclone to keep trying the transfer.

    • retries <#> : Same as above.

    • bwlimit <string> : Limits the upload bandwidth based on the schedule; my schedule limits uploads to 10MB/s from 8:30am until 12:30am and lets them run at full speed overnight.

    • acd-upload-wait-per-gb <time> : From the rclone docs: “Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while.” This tells rclone to wait for a certain period of time (per GB) after the upload finishes for the file to appear. If the file doesn’t appear after that time period, rclone reports the transfer as an error.

    • log-file <path> : Stores everything to a log file at the specified location rather than printing to screen.

    • log-level <ERROR|NOTICE|INFO|DEBUG> : Selects verbosity of log. If you’re having issues, it might be worthwhile to switch to DEBUG.

    I also have the service set to sleep 10 seconds before starting rclone to make sure the SMB share has time to mount. I would highly recommend reading through the rclone documentation (linked above) to figure out which settings would be appropriate for your use case. My filter file (acd_exclude) includes a list of directories and files I want rclone to ignore. Once you’ve got everything set in the acd-backup.service file, run the following command to tell systemd to reload its daemons (you’ll need to run this command again any time you make changes to the acd-backup.service file):

    
    sudo systemctl daemon-reload
    	

    You can start your service with the following command:

    
    sudo systemctl start acd-backup.service
    	

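    Since the unit file’s [Install] section targets multi-user.target, you can also enable it so it starts automatically whenever the VM boots; this is optional, but it matches the set-and-forget intent of the backup:

    sudo systemctl enable acd-backup.service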
    If you ever need to stop the service, you can run the following:

    
    sudo systemctl stop acd-backup.service
    	

    Note that even though you stop the service, it may not terminate the rclone process itself; run htop to check for and terminate any running rclone processes to completely stop everything (useful if you want to update the parameters rclone is using via the service file).
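    A quick way to check for and terminate a leftover rclone process from the command line (rather than hunting through htop):

    pgrep -a rclone     # list any running rclone processes
    sudo pkill rclone   # terminate them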

    You can follow along with rclone’s progress by viewing the log file (in the location you specified in the acd-backup.service file). You can also use the following commands to see a summary of what’s been uploaded:

    • rclone size <remote> : Shows the total number of files and the total size of the remote. This also works with sub-directories on the remote, e.g., rclone size acdCrypt:Photos to view the Photos directory in the acdCrypt: remote.

    • rclone ls <remote> : Works like the Unix ls command, but lists everything in sub-directories as well. As above, you can specify a sub-directory, e.g., rclone ls acdCrypt:Photos to list all the photos rclone has uploaded.

    You’ll have to use the log file and these two commands to view the progress of an encrypted upload; if you try to view your files on the ACD website (or using the mobile app), all the filenames will appear as garbled text.
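    To follow the log in real time, you can tail it; the path below assumes you pointed --log-file at /home/acd/rclone.log, so substitute whatever you actually used in the service file:

    tail -f /home/acd/rclone.log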

    The final thing you may consider doing is adding an entry in the root user’s crontab to restart the rclone service should it ever fail or exit. You can do this by running the following:

    
    sudo crontab -e
    	

    Add the following line to the end of the file:

    
    0 * * * * /bin/systemctl start acd-backup
    	

    Save and exit (Ctrl+O, Ctrl+X) and you’re all set. This will tell the system to start the acd-backup service at 60-minute intervals; if the service is already running, no action will be taken. If the service has stopped, it will automatically be restarted. As I noted above, ACD can be finicky sometimes, so some upload errors (particularly for larger files) are normal. With this cron statement, rclone should automatically retry those uploads after it’s finished its initial pass on your share (rclone is set to terminate after it finishes a full pass; this cron statement will re-invoke it, causing it to check the remote against your share and sync any changes).

    Closing and Summary

    If you’ve been following along, you should now have a pretty robust file server configured. It should be able to tolerate the failure of one or more hard drives, automatically report on low-level disk and pool errors before they cause hardware failures, heal minor boot and data pool errors, adjust fan speed to keep itself cool, shut itself down gracefully when it loses wall power, back up its configuration files on a regular basis, back up all its user data to the cloud, and run any sort of Linux-based VM you might require for other tasks! Hopefully you’ve learned a few things as well. I happily welcome any feedback you might have on this write up; please let me know if you spot any mistakes, misconceptions, sections that aren’t very clear, or a task that can be tackled in an easier manner. Thank you for reading, and feel free to contact me with questions, comments, etc: jason@jro.io!




    All Site Content and Design © 2014-17 Jason Rose