Building a NAS | Hacker News

Original source (news.ycombinator.com)
Tags: storage diy nas raid
Clipped on: 2017-04-13

Wow, I actually just got done building a home NAS for myself. As for storage controllers, I found a working SAS 9201-16i for $50 on eBay (16-port SAS card, only supports IT/HBA mode). I also got some cheap used 10 GbE cards off eBay for the link between my workstation and the NAS, plus a 4-port card for the NAS to turn it into a cheapo 10 GbE switch (OCe14104-U-TE with four SFP+ fiber optic transceivers for $190 USD).

I actually bumped into rclone and use Amazon Drive for backups myself. I was worried they'd freak out if I stored multiple terabytes of data there, but if you haven't gotten any emails about nearly 50 TB of data, I guess I shouldn't be so concerned.

A silly toy project I've been working on is using Amazon Drive as a block device via something like nbd (Linux) or geom (FreeBSD). It would let me use existing drive encryption mechanisms (no personal-data scraping for advertising purposes, Amazon) and have the ACD storage be a ZFS pool in its own right. Basically, a virtual 'SSD' backed by a collection of chunks (1 to 8 MiB, still benchmarking...) where TRIM deletes remotely stored chunks.
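
For what it's worth, the core of such a scheme is just a mapping from block offsets to fixed-size chunks. Here's a rough in-memory Python sketch of how reads, writes, and TRIM might map onto chunk operations; the names, the dict-backed store, and the 4 MiB chunk size are all my own assumptions, not details from the project:

```python
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB, mid-range of the 1-8 MiB options

class ChunkStore:
    """Stand-in for remote chunk storage; a real version would fetch
    and upload chunks from the cloud drive instead of a dict."""

    def __init__(self):
        self.chunks = {}  # chunk index -> bytearray

    def write(self, offset, data):
        # Split the write across chunk boundaries.
        while data:
            idx, off = divmod(offset, CHUNK_SIZE)
            chunk = self.chunks.setdefault(idx, bytearray(CHUNK_SIZE))
            n = min(len(data), CHUNK_SIZE - off)
            chunk[off:off + n] = data[:n]
            offset += n
            data = data[n:]

    def read(self, offset, length):
        # Unallocated chunks read back as zeros, like a thin volume.
        out = bytearray()
        while length:
            idx, off = divmod(offset, CHUNK_SIZE)
            chunk = self.chunks.get(idx, b"\x00" * CHUNK_SIZE)
            n = min(length, CHUNK_SIZE - off)
            out += chunk[off:off + n]
            offset += n
            length -= n
        return bytes(out)

    def trim(self, offset, length):
        # TRIM: drop only chunks fully covered by the discarded range.
        first = -(-offset // CHUNK_SIZE)            # ceil division
        last = (offset + length) // CHUNK_SIZE
        for idx in range(first, last):
            self.chunks.pop(idx, None)
```

An nbd or geom gate server would then translate the kernel's block requests into these three operations.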


The main issue with used server hardware that is a couple of generations old is that it uses a lot of power. E.g. it's no problem to buy dual or quad port Intel server adapters for around 20 Euros, but these draw 6 and 10 W respectively, while newer ones (e.g. those based on the i210 series) draw much less than 1 W per port.

Similarly, older storage controllers require more power even when idle, and older CPUs and mainboards draw more power at idle. Fan control on server boards has improved somewhat; the fans themselves can draw a lot of power (...just ask any Sun Fire server...) as well.


Wow 50TB for $60 per month...

Amazon Drive sounds like an insanely great deal for me if it works the way I think it does, but the reason I haven't tried it yet is I honestly don't see them being able to keep storage space unlimited for very long, if it in fact competes in the same real-time sync'd storage space as the likes of Google Drive, OneDrive, Dropbox, etc.

Every such service that I've ever heard of that offered unlimited storage has at some point had to backtrack on its unlimited storage claims after enough users took them up on the offer. Note I'm talking about services like Bitcasa and OneDrive that offer real-time sync/access, not backup & archival services like Backblaze and Crashplan that don't have the same egress bandwidth requirements.

For those who use Amazon Drive and understand it better, does it do real-time sync like Google Drive, OneDrive and Dropbox? Or is it just another unidirectional backup service?


*$60/year :)

If you like Google Drive, their Google Apps plan will give you unlimited storage for a slightly higher price.

One limitation I've run into with Amazon Cloud Drive is a 50GB file size limit. Both of these support rclone, which is what I use for backups anyway.


50GB size limit kills it for me :(

It's not real-time sync AFAIK, at least it wasn't when I used it in 2016; that might have changed since, though. You'd have to manually grab files you wanted to open/change and then manually update them.

I might get that new consumer-grade switch with 10GbE copper, but that's down the road a bit.

I wonder when we will see 10GbE ports on laptops though!

Promise sells a Thunderbolt to 10GbE adapter:

https://www.promise.com/Products/SANLink/SANLink2


What would be the consumer use case for that kind of throughput? It makes sense on switches, but for a single device it would be hard to find a reason. We can do 4K UHD at way under 1 Gbps. We don't get over 1 Gbps internet at home. We can't write that fast to any permanent medium. Even RAM will slowly start being a problem if DDR4 bandwidth is only 6 times higher than 10GbE.

Consumers do buy Synology and other NAS boxes. Right now 1Gbps is the bottleneck for accessing a NAS.

This depends on the storage configuration and use case.

It can be a bottleneck for a NAS.

That said, depending on the use case, 10GbE still might not be worth the extra expense.


There are very few disks you can stick in a NAS that won't yield 100MB/s, particularly in RAID.

I have a 36T NAS (26T usable raidz2), built using the Norco 4224 chassis. The backplane / drive trays that it comes with aren't 100% reliable, so I would recommend trying to get a SuperMicro chassis instead. I keep mine in the attic to keep the noise down.

Mine runs fine on an old Q6600 with 4G of RAM. I'm not trying to do anything silly, like enabling dedupe, so it's not a problem. I'm running ZFS on Linux, again, not a problem with that amount of RAM despite ZoL having a less than ideal caching situation in the kernel.

I do my backups with Crashplan. Not only do I back up the NAS with Crashplan, but I back up my various PCs and laptops to my NAS, for faster restores should I need it. Crashplan supports peer backup, which works well.

I tuned my recordsize by copying a representative sample of files to different ZFS filesystems with different settings, and comparing du output. The empirical calculation seemed easier.
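
For anyone wanting to reproduce this kind of test, a rough sketch of the procedure; the pool and dataset names here are made up, not the commenter's actual setup:

```shell
# Create test datasets with different recordsize settings.
zfs create -o recordsize=128K tank/test-128k
zfs create -o recordsize=1M   tank/test-1m

# Copy the same representative sample into each.
cp -a /tank/sample-data/. /tank/test-128k/
cp -a /tank/sample-data/. /tank/test-1m/

# Compare actual on-disk usage (includes allocation overhead).
du -sh /tank/test-128k /tank/test-1m
```

Since `du` reports allocated space rather than logical size, the comparison captures padding and parity overhead that a spreadsheet calculation can only estimate.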

I've used my NAS for a various things; video / photo archive, VM backing store (serving up zvols as iSCSI endpoints), working space for a variety of side-projects. More than anything else, it gets rid of the idea that storage space is in any way scarce, and removes the requirement to delete things (very often, anyway; I built up several years worth of motorcycle commute headcam videos at one point). My pool wanders between 40 and 70% utilization.


36T to CrashPlan sits on 36GB of RAM at all times... screw that. rclone never uses more than ~2GB to manage my whole 50TB dataset.

edit: Also, I'm using a SuperMicro chassis, not a Norco. I've got a section where I go into why I went with SuperMicro.


Uhm, what exactly is crashplan doing with all that memory?

I wonder if it's related to the memory "issue" with their client, where exceeding 1TB or 1 million files requires a manual edit of the .ini file to bump the JVM memory allocation. I remember reading somewhere that it has to do with CRC checksum calculation for all the files. I've had to change the setting multiple times (currently at 8GB for ~8TB/1 million files).

https://support.code42.com/CrashPlan/6/Troubleshooting/Adjus...
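
For reference, the usual workaround on Linux is raising the `-Xmx` heap cap in CrashPlan's service config and restarting; a sketch assuming a default install path and default heap value, both of which may differ on your system:

```shell
# Bump the JVM heap limit from the 1 GiB default to 8 GiB.
sudo sed -i 's/-Xmx1024m/-Xmx8192m/' /usr/local/crashplan/bin/run.conf
sudo service crashplan restart
```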


Hm, maybe give Borg a shot. For initial backups it's not exactly the fastest thing, though much better after that.

> rclone never uses more than ~2GB to manage my whole 50TB dataset.

How do you handle file versions with rclone?


I guess that's the downside. It only syncs the latest version of every file.

I use quite a bit of Supermicro hardware and have to conclude that it's just good stuff (tm). I recently had to work with an Intel server board, ick... no comparison at all.

How much does it cost to build and run a system like that? What does "my pool wanders between 40 and 70% utilization" mean? I'm assuming you're not talking about free space, because you say you don't delete things.

I don't delete things for a year or more at a time. I am talking about free space.

Cost is in low thousands, I haven't tabulated it recently.


Are you backing up all 36T to CrashPlan?

I've been using CrashPlan on my NAS for a few years now, but I feel bad and only really back up the critical stuff (aka just a few TB) rather than the full 50T+ on the NAS.


No. I don't back up backups, VM zvols, or media that's easily re-acquired or doesn't have very high value (e.g. commutes). And of course the NAS isn't full.

I rebuilt my home NAS a few months ago. The former one was a huge Debian tower box with one system+temp storage disk plus 3 software RAID1 pairs. It worked well for about 6 years, save for a partial failure due to bad disk connectors (SATA connectors are among the worst junk ever designed by a human). The CPU was a low power Celeron, more than enough for the task.

When building the new one I wanted to minimize both power consumption and hardware maintenance hassles, so I got a used 2U rack server box on eBay including a good quality PSU, then a Mini-ITX industrial Atom main board with 4 SATA ports.

Not being that familiar with BSD and its derivatives, I initially wanted to stay with Linux, but this time I didn't want to fiddle with mdadm and other stuff, so I gave OpenMediaVault a try. I probably did something wrong, but the experience was frustrating, from menus taking forever to appear to errors that shouldn't be there in a self-contained system, so in the end I turned my attention to NAS4Free and never looked back. The only problem is the much greater RAM requirement of ZFS, which makes the maximum RAM on that board (4GB) barely enough, but in the end it works flawlessly as I keep running services to a minimum (NFS, SMB, Transmission).

Total expense was very low as everything was purchased used, save of course the 4x 3TB WD Red disks. The system boots off a USB key plugged directly into an internal USB port, so there are no dongles attached to the case.

For OMV, you did something wrong. I've been using the project since its first public release. Never had any real issues.

Yup, I'm completely sure it was something on my end, as nobody else seems to have experienced the same problems, but after trying to reflash a couple more times I eventually gave up. That doesn't mean I won't try again on a different system in the future, though.

Oddly, I recently went the other direction. In the past I've built some storage servers similar to this, and 6 months ago I put 6x 2TB laptop drives in a 6-drive "mobile rack" that fits in one 5.25" bay in my desktop (IcyDock MB996SP-6SB).

Loving it!

Now, my storage needs are fairly modest in comparison to the author. I'm running with RAID-Z2 and at 44% capacity. I have this unit in my work workstation, at an external office. It is quiet and cool.

I used to have a big house with a room I could put a bunch of computers in. I moved to a smaller house, but more importantly I was just tired of managing a business-class infrastructure at home (VLANs, multiple APs, UPSs, batteries, patch panel, servers, etc).

So I copied a backup of my storage server from an off-site box, to S3 with Glacier, copied the primary to this ZFS array on my workstation, removed junk I was just holding on to, and now my home infrastructure consists of a Cable Modem and Google WiFi mesh. Huge improvement in maintenance!

Here's a blog post I wrote about one of the previous incarnations: https://www.tummy.com/articles/ultimatestorage2008/


> I put 6x 2TB laptop drives in a 6-drive "mobile rack" that fits in one 5.25" bay in my desktop (IcyDock MB996SP-6SB)

Thanks, I didn't realise these exist.

I've been wanting to create a DIY version of Synology's "slim" model line [1] for a while, but haven't been able to find an off-the-shelf enclosure that would fit my needs. Putting six drives in a 5.25" bay is a fantastic idea.

[1] https://www.synology.com/en-global/products/DS416slim


I came across this product recently: https://www.crowdsupply.com/gnubee/personal-cloud-1 which is seeking crowd funding at the moment. Personally I think they've made a mistake by going for 2.5" storage first with 3.5" coming later. Anyway, it sounds like it might be of interest to you?

That looks really great if one's concerned about the FOSS nature of their storage. Thanks for the recommendation!

I'm not sure an ARM chipset would do justice to a multi-disk NAS setup though (or whether the GnuBee would support RAID5/6; it does seem to, with LVM/mdadm). My experience with consumer ARM-based NASes was that they suffered on transfer speed. I ended up going with a 4th generation Intel Pentium chip on a mini-ITX board, which offered the best compromise between price, performance, and power consumption for my use case.


I like the design and the 2.5" choice, but if it doesn't support ZFS it's a no-go for me. If you (in the abstract) are not concerned with bit rot, you're doing NAS wrong. Something like that running FreeNAS (w/o dedup) would be ideal for me.

> But if it doesn't support ZFS it's a no-go for me. If you (in the abstract) are not concerned with bit rot, you're doing NAS wrong.

What are your thoughts about Btrfs?


I agree. A 3.5" version of that is exactly what I need.

You might want to check out Silverstone's DS280 case [1], a Mini-ITX case that supports 8x2.5" hot swappable drives.

[1] http://www.silverstonetek.com/product.php?pid=668&area=en


Oddly enough, I had originally gotten these drives a couple years ago to go in a 1U Supermicro "twin" machine, with 8 drive bays split across two physical machines in a 1U chassis. Then the cheap hosting for that fell through and they just sat in a box. I came across them while cleaning the garage and put them into service.

I think these are the Toshiba drives. Seems like 2TB is still as big as you can go on laptop form factor spinning drives.


> Seems like 2TB is still as big as you can go on laptop form factor spinning drives.

Seagate has 4TB drives now (and I think they're still shuckable), but they're 15mm in height.

I read somewhere on Reddit's r/DataHoarder that they might be moving away from a SATA connector on their portable drives though…


Not quite as slim, but I recently put a Supermicro D-1520 motherboard in an old Intel SS4200 case that's relatively compact (you could theoretically put it in a 30L backpack) and holds 4 desktop hard drives: https://forums.freenas.org/index.php?threads/another-x10sdv-...

I recently built a NAS with 4x 4TB 2.5" drives.

I originally posted this on reddit, but I'm happy to see it reposted elsewhere. Let me know if anyone has any questions.

That's a huge setup :)

One question I have regarding NAS is backups of the NAS itself.

If I have a very simple NAS with 2 drives in RAID1 (call them Drive A and Drive B), and I want to make a physical backup of my NAS in a different location, how easy is it? What is the best practice? Ideally you could just have a big rsync job that takes care of it, or rclone as described in the article, but what if you want to do it without any network transfer (because your connection is too slow or you can't afford it)?

Does the following protocol make sense: remove Drive A, replace it with an empty Drive C, and wait for the NAS to resynchronize onto Drive C. Then remove Drive B, replace it with Drive D, and wait for the NAS to resynchronize onto Drive D.

Then take Drives A and B to a different location, plug them in, and have the backup working out of the box.

Is it that easy? What about more complicated RAID setups?

Is there an easy "Prepare Backup -> Please insert first drive for your backup -> First drive filled up -> Please insert second drive for your backup -> ... -> Please insert last drive for your backup" flow, after which you take all those newly filled drives, shove them in a different box, and they have all the data as of the time of the backup, with either the right ZFS and RAID configuration or at least a simple data dump in a non-RAID configuration?


If you're using ZFS, the right thing to do is to attach drives C and D temporarily, create a second zpool on them, then use zfs send/receive to replicate snapshots from the primary drives to the backups. You can then export the zpool and move it to a different location.

Refreshing the backups is done either by putting the backup drives online at the remote location and syncing the deltas between the last snapshot and the current one over the net, or by bringing the drives back to the primary and sending the deltas locally.

The sanoid/syncoid toolset will help immensely with handling the necessary zfs snapshot and send/receive commands: https://github.com/jimsalterjrs/sanoid
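
A minimal sketch of that workflow, with made-up pool and device names ("tank" as the primary, "backup" built on drives C and D):

```shell
# Initial replication: build the backup pool, snapshot everything,
# send the full replication stream, then export before moving drives.
zpool create backup mirror da2 da3
zfs snapshot -r tank@offsite-1
zfs send -R tank@offsite-1 | zfs receive -F backup/tank
zpool export backup

# Later refresh: only the delta between snapshots travels.
zfs snapshot -r tank@offsite-2
zfs send -R -i tank@offsite-1 tank@offsite-2 | zfs receive -F backup/tank
```

The incremental send is what makes remote refreshes over a slow link practical, since only changed blocks since the last common snapshot are transferred.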


If your NAS is small enough, you could get a couple of external drives and rotate them.

I've got a QNAP 4-drive NAS and the stuff I want to keep backed up fits in under 2TB. Based on that, I've got a 2TB external drive and have set up a backup job to sync the stuff I want to keep to that drive every morning.

There's no reason you can't get two drives and swap them every week. Just set your backup job to run weekly and unmount/eject the target drive when complete. Every Monday you just grab your drive and take it to work, and every Friday you bring the old drive home and plug it in.
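
A sketch of what such a weekly job might look like; the device label, paths, and schedule are examples, not the commenter's actual setup:

```shell
#!/bin/sh
# Weekly swap-drive backup: mount whichever rotation drive is plugged
# in, sync the keep-set, then unmount so the drive is safe to pull.
# Schedule from cron, e.g.:  0 6 * * 1  /usr/local/bin/weekly-backup.sh
mount /dev/disk/by-label/offsite-backup /mnt/backup || exit 1
rsync -a --delete /mnt/tank/important/ /mnt/backup/important/
umount /mnt/backup
```

Labeling both rotation drives identically means the script doesn't care which one is currently plugged in.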

Of course, any backup that requires manual work is likely to get neglected eventually. "Oh, I haven't done much work in the last week, I don't need to take the drive to work today" is something I've told myself all too often when it comes to my Mac backups.

Automated is better. If you have a second NAS at a remote site, you should be able to use ZFS send/receive to update a remote NAS over the internet.


I'm on a similar RAID1 setup.

For me, I have 2 extra drives and a USB caddy, and rsync the array onto the caddy automatically, keeping the unused drive offsite.

This does mean that the wear on the live array is higher than the offsite ones, and I have to have 4 drives total, but since RAID1 with a traditional filesystem doesn't provide integrity protection (e.g. bit errors on 1 drive can cause silent corruption), I don't have to worry about subtle raid rebuild issues gradually propagating through the entire raid set.

The 'cheaper' version would be to have only 1 offsite drive, but that means the data on my RAID array is only protected from severe failures up to the last time I ran the sync.

Longer run, I'm looking at moving to something with integrity protection, but since my server runs OpenBSD (fewer storage configuration options), that means RAID5, which was only recently OK'd for rebuilds, and soft-RAID5 rebuilds take forever on spinning drives. I'll probably wait and upgrade to 4x SSDs first or get a hardware RAID card (my data set is fairly small).

Another thought I had was to set up a Raspberry Pi at a friend's or relative's place and have an rsync run nightly to it, and offer the same to them... but I haven't gotten around to it.


If you really don't want to go over Ethernet (and I'm assuming fiber is out of the question too), there aren't really any great ways to automate this process. If you don't want to pull drives and put your data at risk, you might be able to add a third drive to the mirror, let it replicate, then disconnect it and plug it into the second system. If you're on FreeNAS, check out this section for guidelines on adding an extra drive to the array: http://doc.freenas.org/9.10/storage.html#replacing-drives-to...

...but really, you should figure out a way to go over ethernet. Even if it takes a really long time, it'll be soooo much easier to have everything automated. You also don't have to risk pulling drives, etc...
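
If the pool is a mirror, ZFS can also do the attach-replicate-detach dance natively with `zpool split`; a sketch with example pool and device names:

```shell
# Grow the mirror by one drive, wait for resilver, then split the new
# drive off as an independent, importable pool.
zpool attach tank da0 da2      # da2 becomes a third mirror member
zpool status tank              # wait until resilvering completes
zpool split tank offsite da2   # detach da2 into a new pool "offsite"

# On the destination machine:
zpool import offsite
```

Unlike pulling a mirror member by hand, the split pool carries its own metadata and imports cleanly, with no degraded state left behind on the primary.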


Thanks for the really detailed ZFS allocation write-up. I've recently been building ZFS storage systems on Ubuntu 16.04 (one in excess of 800TB), and I've pored over the various ashift/recordsize options and spreadsheets, but your description was quite useful in cementing my understanding of why the large recordsize is useful.

I'm glad you think so. I also found it was extremely difficult to find a thorough explanation of the types of overhead I described, so I wanted to get it all together in one place. Working through it while writing it also helped me understand it much better... it's a pretty abstract concept, especially when you factor in compression, etc.

Your crab photo is amazing

http://jro.io/photos/wildlife/


Hah, thanks. Galapagos is an amazing spot for wildlife photography.

Got to ask about the elephant in the room, but what do you use it for / host?

Personally, I'm thinking of building a FreeNAS box for photo storage / media, but unfortunately I've only got a four-drive SuperMicro rack server (a spare I have from decommissioning a business).

Other things I'm interested in are perhaps hosting a few gaming servers and such for friends/family (Minecraft, etc.).

Hard to find uses for it other than offline-lowpower-media-storage.


I do a lot of high resolution photography and videography work. I've got about 15 years of raw images and video files that I could probably go through and delete, but I'd rather keep them around so I can pretend that maybe some day, someone will be interested in looking at them.

I look at it as a modern version of the boxes full of pictures and slides my parents and grandparents kept in their basement for decades and never looked at.


What's your backup strategy?

My experience with the FreeNAS people was "build a second nas", but that always struck me as stupid for a home setup.


(I meant to reply to you several hours ago, but HN wasn't letting me post comments for some reason...)

I've got rclone set up to encrypt and upload everything to ACD. There's a section at the bottom of the article that goes into some depth on this and some other backup strategies I've tried in the past (including CrashPlan, Backblaze, and Zoolz, all of which are awful). Check http://jro.io/nas#rclone . I never considered building a second NAS; it does seem pretty stupid, even for an enterprise setup, since the whole idea is to get the data off-site.
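
For anyone curious what this looks like in practice, a rough sketch of an encrypted rclone setup; the remote names and paths here are placeholders, not the author's actual configuration:

```shell
# First run `rclone config` interactively to create two remotes:
#   acd       - type "amazon cloud drive"
#   acd-crypt - type "crypt", wrapping acd:backup, with filename
#               encryption enabled
#
# Then a one-way sync of the pool to the encrypted remote:
rclone sync /mnt/tank acd-crypt: --transfers 8 --checkers 16 \
    --log-file /var/log/rclone.log
```

Because the crypt remote encrypts client-side, Amazon only ever sees ciphertext blobs with obfuscated names.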

As a side note, some of the so-called "FreeNAS people" can tend to blindly parrot a given general guideline without really understanding the reasoning behind it or why it might be perfectly valid in certain situations to disregard it. For instance, ask them about bhyve and I promise you'll get at least one response along the lines of "bhyve isn't officially supported in FreeNAS so you shouldn't use it under any circumstances, period."


That is actually the cheapest way over time.

If you're able to tag the stuff you know you want to keep and it's a smaller set, you could look into something like Backblaze B2 (previously a featured story on HN); the storage costs are relatively moderate, but restoration from it will cost you.

I haven't yet heard of any solution along the lines of "Rent a (large) NAS for a month" for those times that you're upgrading your array and need to switch filesystem formats. Having that option would make the juggling much easier and safer. Looking at the S3 storage and bandwidth costs I imagine there is actually a market to be served by such a product.

Maybe renting one of those higher end tape drives makes sense... but I can't get over the idea that even renting a stack of hard disks would be cheaper and more effective at this scale.

PS: Make sure you encrypt all of the data going in to the temporary storage; those aren't your disks.


You still have to put the second NAS somewhere, preferably offsite to avoid the house-burns-down failure case. Personally, I don't even know where I'd put this second NAS, since it would need power and bandwidth, unless you really wanted to sneakernet a NAS, which would work but seems really weird.

My home NAS is small enough that I can reliably back up to some USB external drives and store them in a drawer offsite. According to FreeNAS, that's a horrible solution because USB is too error prone and moving disks shortens their life, and blah blah blah, so USB backup is explicitly a WILLNOTFIX and a sign that the requestor is stupid, as opposed to someone who knows full well what the risks are and is satisfied with them. The horrible FreeNAS community and the lack of this feature are why I adopted OpenMediaVault. (I highly recommend OMV.)

I guess I could always upload TARs to Glacier. That might be a legitimate solution.


> You still have to put the second NAS somewhere, preferably offsite to avoid the house-burns-down fail case.

I think pretty much the only viable solution for this for home users is to have a 'peering agreement' with a trusted friend where you each colo the other's machine at your home. However, this can be tricky because you're sticking all of your sensitive stuff in someone else's house and trading some level of full network access with each other, though I suppose trading access to some kind of encrypted rsync-like dumps or similar might keep those risks acceptable.


Why not use borg/attic to create an encrypted/HMAC'd backup on the USB drive?
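
A minimal sketch of what that borg workflow might look like; the repo path, source path, and retention policy are examples:

```shell
# One-time: create an encrypted repository on the USB drive.
borg init --encryption=repokey /mnt/usb/nas-backup

# Each run: create a deduplicated, compressed, authenticated archive.
borg create --stats --compression lz4 \
    /mnt/usb/nas-backup::'{hostname}-{now}' /mnt/tank/important

# Thin out old archives so the drive doesn't fill up.
borg prune --keep-weekly 8 --keep-monthly 12 /mnt/usb/nas-backup
```

Deduplication means repeated runs only store changed chunks, so the USB drive holds a full version history rather than a single mirror.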

Glacier is not a good primary backup. It's not designed for that. See https://news.ycombinator.com/item?id=10921365

Without ZFS (as far as I can see), OpenMediaVault is failing at its main job, i.e. keeping data safe. That's not the tradeoff I would make.

I have a good friend on the other side of the hill in town. We cross-backup our NAS systems to each other.

It's hackish (you need to set up bhyve/Ubuntu/VNC; don't bother with the plugin), but CrashPlan Central can be used with FreeNAS.

Thank you for shamelessly copying the steps in the Bhyve video I created. :-)

You'll be pleased to know that in the 9.10 nightly train, the base OS has been upgraded to FreeBSD 11 and the GUI has a UI for bhyve; it also enables VGA consoles.


Hah, I have no shame m0nkey! They're great videos.

So the 9.10 FreeNAS GUI will have bhyve management stuff in it? Or did I misunderstand that?


It's in the nightly releases, so it'll likely come to the stable train at next release.

Amazing setup and an even better write-up. It shames my tiny 2x 4TB setup.

My only suggestion would be to try out gaffer tape instead of the duct/masking tape combo. I started using it about a year ago, and haven't looked back. No residue and no dry flaking over time.


I already have a FreeNAS system, but I have been thinking about upgrading a few parts, like the chassis. Right now it sits in a cheap tower and doesn't have enough drive slots for my liking. So this is a really interesting guide.

I'm glad to hear it! I've had a ton of fun working on it, but just know that FreeNAS is a slippery and expensive slope!

What is the noise level like? Could you comfortably keep this in the living room without aggravating a spouse or visitors?

Just replied to a similar comment below with: "I have this system sitting next to me in my home office which I share with my wife. Both of us work from home full time and the server is quiet enough that she doesn't complain about it."

One thing to consider is that I'm in the northeast US (philly) where the ambient temperature is generally lower than other places. I've also only had this server running through the colder months (built it in fall), so it may get noisy as the house gets warmer in summer. I would definitely not want this server in my bedroom or living room, but so far, it's been okay in my office.

edit: I should also point out that, regarding the server passing the "wife test", she tends to get hyper focused on whatever task she's working on, so background noise doesn't bother her as much as it does other people. YMMV!


What's your power consumption like on this set up?

Sits at about 250W, goes up to ~300W when it's running a scrub or something.

I generally run Linux on my NAS, but power consumption is always one of my big priorities for a home NAS. I also shoot for the lowest possible idle power. I don't care if it consumes 250W when I'm using it; when I'm not using it, it had better be just a few watts. That is why I traditionally use Atom (C2000) or ARM hardware. Combined with disk spindown (custom script) and turning off everything not in use (extra network ports, etc.), my idle power is ~25W. I've only got 12 disks, but the big power draw right now is the Marvell SATA chips on my motherboard, which don't have any kind of link power saving controls (that I've found yet). I've got another motherboard using a G4400 that idles closer to 12W (total system power), but it doesn't have ECC or a BMC.

Anyway, I use a little script which monitors activity against my MD device (yah software RAID, another discussion) and sends full spindown commands to the drives. I've been experimenting with a full drive powerdown, and full port power down (different machine). If I weren't using the NAS as a DHCP/DDNS/etc server I would probably put it into S3 standby and then wake it on CIFS/NFS/RPC inbound.

I'm also using cheap X540-based 10G boards, which add about 5W per port, but I turn them off on an idle timer and fail over to a 1G port whose power draw is too small for me to measure.

Bottom line, a home NAS device isn't a server that needs to run 24/7. It's not hard to tweak stuff to pull the idle power way down. Given a long enough timer (say 1-2 hours), you will only notice the machine resume/spin up once a day, when you initially sit down at your desktop.
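
A hypothetical sketch of the spindown-script idea described above; the device names and idle window are examples, and the commenter's actual script isn't shown:

```shell
#!/bin/sh
# Watch the md device's I/O counters; if nothing changed during the
# idle window, put the member disks into standby.
DEV=md0
DISKS="sda sdb sdc sdd"

while true; do
    # Fields 4 and 8 of /proc/diskstats are reads and writes completed.
    before=$(awk -v d="$DEV" '$3 == d {print $4, $8}' /proc/diskstats)
    sleep 1800                      # 30-minute idle window
    after=$(awk -v d="$DEV" '$3 == d {print $4, $8}' /proc/diskstats)
    if [ "$before" = "$after" ]; then
        for d in $DISKS; do
            hdparm -y /dev/$d       # immediate standby (spin down)
        done
    fi
done
```

Sending standby to the members rather than the md device itself avoids waking the array just to issue the command.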


To put that in perspective it's roughly $0.60 a day, $18 a month and $219 a year USD assuming a rate of $0.10 per kilowatt hour [1] plus an additional 853 BTU/hr cooling load on the AC system [2].

[1]: http://www.rapidtables.com/calc/electric/electricity-calcula...

[2]: http://rapidtables.com/convert/power/Watt_to_BTU.htm
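
The arithmetic behind those figures is easy to check:

```python
# Verify the cost and cooling-load figures for a 250 W continuous draw
# at $0.10/kWh (1 W is about 3.412 BTU/hr).
watts = 250
rate_per_kwh = 0.10

kwh_per_day = watts * 24 / 1000              # 6.0 kWh/day
cost_per_day = kwh_per_day * rate_per_kwh
cost_per_month = cost_per_day * 30
cost_per_year = kwh_per_day * 365 * rate_per_kwh
btu_per_hour = watts * 3.412

print(round(cost_per_day, 2), round(cost_per_month),
      round(cost_per_year), round(btu_per_hour))
# 0.6 18 219 853
```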


Rates in California would be double that or more, depending on which tier this puts the user in. For me, my old 8-CPU Xeon server cost ~$50/month in electricity, so I decommissioned it; however, this would also push me towards wanting to get solar panels for the house.

Did you consider a low power NAS approach? Spending somewhere around $20 per month on electricity for a home server seems like a lot to me. Maybe if it was hosting a handful of VMs for various purposes, but I suppose you don't want to run a NAS as a VM?

Nope, never considered low power. I really didn't want a CPU bottleneck when I move to 10Gb network interface. Running a NAS in a VM (at least FreeNAS) is highly discouraged, so I never considered that. I do, however, run several VMs on the NAS itself, which help to justify the overkill hardware I used.

From what I've seen, the NAS is used for video editing, so you'd need fast access that only expensive internet connections can provide.

As a comparison, S3 for 60TB is about $1,600 a month.


Well, there are tradeoffs. If you want low power and high performance, it comes down to choosing between SSDs (very expensive) or a lot of laptop drives (cheap, but higher latency; throughput for something like video is fine, though, as you can fit a lot of them in a chassis; switching from 8-drive Z2 groups to mirrors gets you a 4x improvement in throughput, which completely negates the throughput disadvantage of 5400 vs 7200 RPM).

Why not go for fewer but larger disks, then? It would be more power efficient, and there are consumer 8TB disks now. Surely you don't need that many disks for performance; pretty much any Synology NAS will saturate a 1GbE link.

How do you do backup?

rclone/crypt to Amazon Cloud Drive (ACD).

I own an HP MicroServer Gen8. It takes 4 disks, it's a real server (ECC memory, hardware watchdog, KVM over the network, and much more), it's extremely quiet, and it costs $200 with a Celeron processor. Although it's quite outdated at this point and requires using the "hardware RAID" (implemented in a proprietary software driver) for optimal cooling, it's an awesome machine and I haven't found anything better yet. I only wish HP would continue this line.

I have a previous generation N40L and highly recommend this line of products for people who can get away with 4 or fewer disks. It's well-designed and a solid product. The loudest component is the HDDs, followed by a case fan that occasionally acts up. The CPU is pretty slow at this point but I only ask it to do RAID10, not RAID5 or ZFS.

I'm still using one of the previous generation HP N54Ls. Great pieces of kit. The Gen8 was a slight step backwards in some ways, insofar as it lost the standard 5.25" bay, which was handy for installing 4-6 2.5" drives. Agree that HP should definitely keep evolving this, but it seems like they aren't interested. Plus they have taken an actively hobbyist-unfriendly stance, with access to updated drivers now requiring a support agreement.

Why does it require using hardware raid for optimal cooling? I set up one of these recently using AHCI (disabling the hardware raid controller) with mdadm.

You can always upgrade the CPU if you find the performance lagging.

I used to build my own NAS systems, and it's fine if it's a hobby, but it's a lot less trouble and a lot less stressful (when you actually have data to care about) to use something ready-made like a QNAP. I switched to a QNAP TS-863U a few months ago, and boy is it great to have a UI that will let you do RAID upgrades/resizing without having to lie awake at night wondering if you chose the right incantation of mdadm out of the 3 or 4 you came up with that could work.

Commercial consumer NAS boxes are notoriously bad wrt security though, even firewalled. Apple Airport Extreme is probably the only one with a decent track record (and I say this as someone who doesn't use Apple products).

Eg QNAP requires you to install security patches manually, and as a result had a ShellShock worm exploiting QNAP boxes: https://threatpost.com/shellshock-worm-exploiting-unpatched-...


That report isn't quite accurate; you can install updates automatically on the device I have (I don't know if there was some point in time where that wasn't possible; seems like a basic feature). Some require reboots though, which is why by default updates aren't automatic. Who sets their servers on fully automatic updates (and reboots) anyway? I mean, even forgetting about the reboots - you don't want fileservers to disconnect all clients at any random point in time because it has to install some update to the service software.

This kind of NAS should definitely automatically update and reboot rather than become wormable.

A high-availability file server is a different beast from a home NAS.


The one I have is not a home nas. It's a rack mount machine with a bunch of 'enterprise' features, and marketed/sold as such.

People always make such a big deal out of getting ashift "right". In actual fact, there is NEVER any valid excuse to make ashift any smaller than 12. It should default to 12. It is a scandal that it still defaults to 9. You aren't going to find a single hard drive in the world that you would ever use in ZFS, which does not have 4k sectors, papered over as fake 512 byte sectors in the interface.

In FreeBSD, there's a sysctl for setting minimal ashift to 12 and ignoring 9 reported by disk. I always have this one in my /etc/sysctl.conf:

vfs.zfs.min_auto_ashift=12

(This must be done before creating ZFS pool.)
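On Linux (ZFS on Linux) there's no equivalent sysctl, but ashift can be forced per-pool at creation time — a sketch, with pool and device names assumed:

```shell
# force 4K-aligned allocation regardless of what the drives report
# (pool name "tank" and device names are placeholders)
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# verify the value actually applied to the pool
zdb -C tank | grep ashift
```

As with the FreeBSD sysctl, this only matters at pool (or vdev) creation time; ashift cannot be changed afterwards.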


People are going to hate me for this, as it's not a Linux solution... but after trying various *nix NAS distros and being disappointed, I settled on letting Windows do it for me:

- Get a generic small tower box with 3-4 year old hardware in it via eBay (or a business IT clearance site)

- If it's not got Windows on it, find a dirt cheap copy of Windows Server 2008 (again eBay etc).

- If there's no SATA RAID on the motherboard (unlikely), get a 4 to 8 port PCIe RAID card.

- Throw in a bunch of identical disks of your preferred size

- For RAID 1: In Windows, mount them all and create mirrored sets in software (via diskmgmt.msc)

- For RAID 5: Either do it via the BIOS (if supported) or via diskmgmt.msc (however, RAID 5 in software is quite slow).

- Create file shares (SMB/CIFS/FTP etc).

- Job done.

I have this as my main File Server, and unless I'm hammering the box from multiple clients simultaneously I get max Gigabit throughput on all file transfers.

Also, and this is the big bonus for me, Software RAID 1 in Windows doesn't create funny disk volumes, so you can break the mirror and still access all your data from the remaining drive(s) - I've seen horror stories of bespoke partitioning in commercial NASs, and people losing data when the motherboards die - I don't want that ever happening to me.

Finally - Windows Server also supports iSCSI, so you can just keep adding new boxes with disks in, all presented via the same File Server.


Out of curiosity, how do Windows filesystems in solutions like these compare to ZFS in terms of reliability and features? ZFS has been my only choice for NAS servers at home, primarily due to its data integrity features.

The combination of Storage Spaces and ReFS is pretty nice. Drives are added to a storage pool, and block devices are allocated out of the pool. The block devices are thinly provisioned and expandable, so it's more flexible than it sounds. If you create a block device that has redundancy (either mirroring or RAID-Z-like parity), you get data integrity like ZFS. I believe the system also does periodic scrubbing.

One advantage of this setup is you can have multiple block devices of differing parity on top of the same pool. The pool is also more flexible than ZFS, allowing you to add drives of different sizes and later remove drives.

The main thing I miss from ZFS is the ability to create snapshots and do zfs send/receive for backups. Also not being able to read the source code is a bummer.
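For comparison, the snapshot + send/receive workflow being missed here is only a couple of commands on the ZFS side — a sketch, with dataset names, snapshot dates, and the backup host all assumed:

```shell
# take a point-in-time snapshot of the dataset
zfs snapshot tank/data@2017-04-13

# replicate incrementally against last week's snapshot
# (snapshot names, "backuphost", and the target dataset are placeholders)
zfs send -i tank/data@2017-04-06 tank/data@2017-04-13 | \
    ssh backuphost zfs receive -F backup/data
```

Because `zfs send` streams only the blocks changed since the base snapshot, incremental backups stay cheap even on multi-TB datasets.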


They don't; they are equivalent to ext4 (i.e. no error checking, dedupe, etc.).

Deduplication went into NTFS on Windows Server in 2012.

> (or a business IT clearance site)

Any link please? Googling just returned a bunch of results about the meaning of clearance vs sale and why business do clearance sales.


Nothing beats ZFS, I'm afraid.

Whenever I see one of these periodic postings of people using FreeNAS, and they come up on HN every few months, I'm always looking for the answer to IMO the most obvious, most basic question:

   Why FreeNAS; why not vanilla FreeBSD?
I've never seen a good, detailed answer to this. Mostly the response is along the lines of: "well, of course use FreeNAS, after all you're building a NAS!"

From what I can tell, FreeNAS offers a pretty GUI and some tuning on top of FreeBSD. Anything else?


Because I've never done this before and I wanted a more manageable learning curve. I had very little *nix experience and zero bsd or zfs experience before I started this project. My next server will probably be vanilla bsd, but then again it might be freenas because it just makes the whole setup process easier.

I'd say that people recommend it because it's mostly just plug-and-play and works out of the box. You don't need to learn much about how RAID or ZFS works or understand every little detail of *nix administration. If you follow the guide it's pretty hard to go wrong.

I however wanted to learn all the details when I set up my NAS, so went with Debian Jessie and BTRFS. I only use AFS and NFS, and I'm the only user, so don't need half the features FreeNAS provides. Graphs are pretty, but I can quickly get all the information I need via SSH.


Having a GUI that is easy to set up is definitely a plus. In my experience, beyond initial setup, people also want monitoring that can send alerts about disk failures and usage-threshold stats. I was part of the team that built ElastiStor (ZFS on FreeBSD), http://www.cloudbyte.com/.

People like that out-of-the-box GUI experience. Yeah, setting up NFS/AFP/Samba is easy but there's also monitoring and snapshot management (like https://github.com/jimsalterjrs/sanoid) and possibly external backups…

System is probably good for 3 or 4 years, right? If your cost of funds is 6%, that works out to:

- 3 years: $177/month
- 4 years: $136/month

Add $20/month for electricity:

- 3 years: $197/month
- 4 years: $156/month
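Those figures match the standard amortized-payment formula with a principal of roughly $5,800 — my back-computed assumption for the build cost, which isn't stated here:

```python
def monthly_cost(principal, annual_rate, months):
    """Standard loan amortization: payment = P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# a ~$5,800 build at a 6% annual cost of funds
print(round(monthly_cost(5800, 0.06, 36)))  # 3-year amortization
print(round(monthly_cost(5800, 0.06, 48)))  # 4-year amortization
```

The two horizons come out within a dollar of the $177 and $136 quoted above.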

Could that rent respectable enough device(s) in the cloud?


Could the "cloud" keep up speed-wise? Your calculations should include the added cost of gigabit internet to even come close.

BackBlaze B2 is probably the lowest price S3-like cloud provider and for 50TB (my current dataset size), it would be ~$250/mo: https://www.backblaze.com/b2/cloud-storage-pricing.html

Granted I didn't do this math before I dropped a few stacks on hardware. I really wanted to build it, configure it, play with it, etc. It's been a really fun project for me, and I've got a bunch of other stuff I want to do with it in the next few years (10GbE, X11 nodes, set up dedupe, etc).


Getting 60TB of usable space in the cloud is pretty expensive. Honestly I have no idea why he needs so much space, but if he does need it all, he probably didn't make out too badly. One of the main benefits of the cloud, though, is that you only pay for what you use, and he has to over-provision from the beginning, so let's consider that.

So he's currently using about 10TB of 60TB of usable space. If he uses Amazon S3's standard storage, he would be paying about $230/mo. If he uses infrequent storage that is $125/mo. That goes up as his usage goes, so when he's using 30TB that will be $690/mo and $375/mo respectively. He also has the benefit of high speed ethernet with the home NAS, unless he has fiber 1Gig internet, in which case speed is probably a wash. I'm not sure if there are other significantly cheaper cloud storage solutions at that scale.
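Those S3 numbers check out against the published per-GB rates of the time (the $0.023/GB-month standard and $0.0125/GB-month infrequent-access rates are my assumptions for 2017-era pricing):

```python
S3_STANDARD = 0.023   # $/GB-month, standard storage (assumed 2017 rate)
S3_IA = 0.0125        # $/GB-month, infrequent access (assumed 2017 rate)

def monthly(tb, rate_per_gb):
    """Storage-only cost; ignores request and retrieval fees."""
    return tb * 1000 * rate_per_gb

print(monthly(10, S3_STANDARD))  # ~$230 for 10TB standard
print(monthly(10, S3_IA))        # ~$125 for 10TB infrequent access
print(monthly(30, S3_STANDARD))  # ~$690 at 30TB
print(monthly(30, S3_IA))        # ~$375 at 30TB
```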

So I'd say he hasn't done too badly for himself, though he probably could have saved some on the hardware by getting cheaper parts.


> So he's currently using about 10TB of 60TB of usable space.

Other way around; he's using ~50TB of ~60TB with ~10TB free.


> Getting 60TB of usable space in the cloud is pretty expensive.

Amazon Cloud Drive is unlimited and $60 a year.


I did a smaller, simpler build with 8 x hotswap-bay SATA drives running off two 4-port PCIe SATA controllers. Not counting drives, it came to about $550.

http://www.mrbill.net/nasbuild/

I originally started building it around a Dell YK838 SAS 6i PCIe controller card, but got tired of fiddling with using other systems to reflash the firmware and then having to mask off pins on the PCIe connector to get the system to even POST.

I replaced all the "factory" fans, as well as the two on the back of the hotswap drive modules, with Noctua equivalents.


Wow, this is an amazingly detailed blog post. I only skimmed it, but since I'm in the process of setting up my own NAS I'll definitely have to revisit it and read slowly.

Is anyone here using FreeNAS Corral? I've been toying around with it for a few days, and it's been a frustrating experience. My first sign that I should've avoided it was that trying to install 10.0.3 with UEFI just fails. With 9.10 they have a nice docs page with lots of details and info, but with Corral they just threw it all out. The web interface provides no help, and the CLI just gives a brief sentence which is usually of little help. I can see the long-term potential in Corral, but right now it feels like a rushed-out beta. Should I just install FreeNAS 9.10 instead, or is it worth sticking with Corral? Are there any other OSs that I should try out?

Since we're on the subject of home networks, what are people's thoughts on running FreeIPA and FreeRadius? I'm hoping to use it to setup a home VPN server, as well as provide a means of performing authN/authZ for multiple "personal cloud" applications. My goal is to reduce my reliance on cloud providers, since I've grown increasingly uncomfortable with their practices and the loss of privacy.


They have recently updated the download page for FreeNAS Corral to say in large letters "NOT FOR PRODUCTION" and after using it for the last month, I can see why. A good example of some issues, would be found here: https://forums.freenas.org/index.php?threads/10-0-3-problems...

After a lot of consideration, I went with Corral over NAS4Free because the former offered the (FreeBSD-native) bhyve hypervisor for VMs, whereas NAS4Free has an integrated VirtualBox instance.

After yet another issue with basic functionality in the Corral release - and reading about even worse experiences on the forums, I decided I couldn't trust this product with the most crucial role in my environment and installed NAS4Free instead.


You didn't say how fast it is?

With that many drives, you should be able to saturate a good part of a 10Gb link. The difference for streaming read/write loads (like copying movies) is night and day vs 1Gbit. That's assuming you've got enough CPU to run ZFS that fast, which is part of the reason I stick to Linux md/XFS. Every time I try to use ZFS I run out of CPU or RAM. Also, given that I want low idle power, I'm not willing to throw enough hardware at it to make ZFS run well.
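Rough numbers on why a big spindle count outruns gigabit so easily (the ~150 MB/s per-drive streaming rate is my assumption for a modern 7200rpm disk):

```python
DRIVE_MBPS = 150          # assumed sequential throughput per 7200rpm drive
drives = 12               # data-bearing spindles in a streaming read

aggregate = drives * DRIVE_MBPS   # MB/s coming off the spindles
gbe_1 = 1 * 1000 / 8              # 1GbE wire rate ~125 MB/s
gbe_10 = 10 * 1000 / 8            # 10GbE wire rate ~1250 MB/s

print(aggregate)                  # raw disk throughput in MB/s
print(aggregate / gbe_1)          # how many 1GbE links that could fill
print(aggregate > gbe_10)         # on paper it can even fill 10GbE
```

In practice vdev layout, record size, and CPU overhead eat into the aggregate, but the gap between disk throughput and 1GbE is wide enough that the conclusion holds.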


It'll saturate a 1Gb link, obviously. I've been eyeing 10Gb configurations for a while, that will be my next upgrade. I'd really like to stick with copper so I can wire the whole house for 10Gb and have some super fast X11 nodes, but we'll see.

BTW: I'm running 10GBASE-T at my house, and every single foot of it is cheap cat5e. It's rock solid (yeah, I can monitor mangled-packet drop rates on my switch). I've had a lot of people talk down copper 10G and tell me things like it's not possible to use anything but cat6a or better. Which is a load of BS if your runs are well under the 100m that's possible with cat6a. I think my switch actually says it supports runs of 45m on cat5e, which is probably still 2x the longest run in my house.

Frankly, the ASUS XG-U2008 and a few Chinese X540 boards (~$100 for two ports) with cat5 cost less than some of the fancy home APs, and it's well within the prices I paid for my first 1G hardware. Two workstations and a server should be less than $600.

Adding to this, there is the Ubiquiti US-16-XG, which also has a bunch of SFP+ ports, for under $600.


The problem with these NAS builds is noise. I see in the pictures that the NAS is right next to his desk. Even if you get a quiet chassis (rack servers tend to be noisy as hell, but Synology chassis are pretty quiet), having 12+ disks running next to you is very noisy. I am looking forward to the time when SSDs are cheap enough to be used for mass storage.

I have this system sitting next to me in my home office which I share with my wife. Both of us work from home full time and the server is quiet enough that she doesn't complain about it.

Use Noctua fans.

Has anyone taken a deep-dive look into rclone's crypto? I looked around but didn't find anything in a quick search.

Only thing I've seen so far is https://rclone.org/crypt/


Not any more than this HN comment [1] and its replies, one of which is mine. It's using golang's own extensions for secretbox [2], a NaCl-secretbox-compatible implementation, and it's adhering to the uniqueness of nonces as far as I can tell [3].

In other words, to my non-crypto-expert eyes, there is no glaring misuse of the "golang.org/x/crypto/nacl/secretbox" API that jumps out at me; I haven't looked at that package to see if it's okay.

[1] https://news.ycombinator.com/item?id=12398303#12398727 [2] https://godoc.org/golang.org/x/crypto/nacl/secretbox [3] https://github.com/ncw/rclone/blob/master/crypt/cipher.go


Eh. Cool. Too much work. I just bought a FreeNAS Mini because I am lazy. But this is very cool.

I've been thinking of DIYing a FreeNAS Mini XL. I can save about $500 by building with the same motherboard, but I don't like the case options; not sure what case iXsystems uses or where they get it (probably custom). With the same motherboard and an 8-drive ITX case it came out around $900, but I'm concerned about the clock/brick issue with that CPU.

The 8-drive version is built around an Ablecom CS-T80 case.

The 4-drive version is a SuperMicro 721TQ-250B system.


I am thinking of moving to their 8 bay system. The case is key. I will have a look at my case and see if I can find a brand.

I've had two of those chassis fail over the last couple years... would not recommend.

Fail how?

how the fuck does a metal box fail?

What's your CPU utilization like? The Xeon E5-1630 seems a little overkill.

Oh, it's totally overkill, but I don't have any regrets. I have a couple VMs that can sit on a lot of CPU time when they're in active use.

The article doesn't mention it but it is probably for deduplication.

This processor supports 128GB+ of RAM, and ZFS dedup uses a lot of RAM.
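The usual rule of thumb is on the order of 5 GB of RAM per TB of deduped pool data, from roughly 320 bytes of in-core dedup-table entry per block (both figures are common approximations, and the 64 KiB average block size is my assumption):

```python
DDT_ENTRY_BYTES = 320     # approximate in-core dedup table entry size
AVG_BLOCK = 64 * 1024     # assumed average block size (64 KiB)

def dedup_ram_gb(pool_tb):
    """Rough DDT RAM footprint for a fully deduped pool, in GB."""
    blocks = pool_tb * 10**12 / AVG_BLOCK
    return blocks * DDT_ENTRY_BYTES / 10**9

print(round(dedup_ram_gb(1), 1))    # per-TB footprint, ~5 GB
print(round(dedup_ram_gb(60), 1))   # a 60TB pool needs hundreds of GB
```

Which is also why dedup on a pool this size is rarely worth it unless the data is known to be highly redundant: the DDT for the full 60TB would dwarf even 128GB of RAM.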


I like the creative use of what appears to be an Ikea table for a rack.

That's actually a fairly common setup, known as a LackRack [0], although this one seems to be a different size/model, but maybe that's my eyes playing tricks on me.

Edit: actually, now that I look back at it, I'm almost positive this is the deeper Lack coffee table.

[0] https://wiki.eth0.nl/index.php/LackRack


Enterprise Edition, baby! It's an official variant.

By the way, if anyone is considering deploying their own LackRack, I would highly recommend reading the installation section in the OP. It's got some quirks that are worth considering before you dive in.


Wow.. awesome!

I wish I'd known/thought about that years ago, when I bought a metal rack which I subsequently ditched due to space constraints... though at that point I was young/stupid/wanting-'cool' enough that I probably wouldn't have cared.


Ah, the LackRack™.[0] One of the cheaper home solutions for server equipment - scales nicely, too. Just like anything remotely workable, of course, it gets no industry interest.

0. https://wiki.eth0.nl/index.php/LackRack


It's stupid overkill, but I plan to do this using Ceph. The hard part is finding the right low-power, high-performance server. Suggestions welcome.

Incredibly comprehensive article. Thanks for sharing.

First thing that jumps out at me, though: in the photos at the top, the UPS is on the floor.

Get that UPS off the floor! When your house suffers minor flooding (burst pipe, overflowing toilet, leaky roof, etc), it will be sitting in it. You think it won't happen--but then it does, and if your electronics on the floor don't get damaged, you're lucky.


Good call... I'll prop it up on something soon. This is on the second floor of my house, so it's safe from flooding, but I'd hate to have spilled water kill my stuff.

> This is on the second floor of my house, so it's safe from flooding

Not if there's a toilet on the second floor. I speak from experience. :( Neighbor's upstairs toilet tank burst while she was at church. Couple hours later, it's raining inside my apartment and her whole place is ruined. I was very lucky that my computers didn't get ruined. 50-cent piece of plastic connecting the tank to the supply line caused thousands of dollars of damage.



