Show HN: Minio – S3 Compatible Object Storage | Hacker News

Original source (news.ycombinator.com)
Tags: s3 minio s3-compatible news.ycombinator.com
Clipped on: 2017-04-24

Hello HN,

Here is a bit more explanation about erasure coding and bit-rot protection.

Along with being S3 compatible, Minio also protects data against hardware failures and silent data corruption using erasure coding and checksums. You may lose up to (N/2)-1 of your N drives and still be able to recover the data.

Minio uses Reed-Solomon coding to stripe objects into N/2 data and N/2 parity blocks. This means that in a 12-drive setup, an object is striped across the drives as 6 data and 6 parity blocks. You can lose as many as 5 drives (parity or data) and still reconstruct the data reliably from the remaining drives.
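
For a concrete picture of the 6+6 scheme, here is a minimal sketch using the klauspost/reedsolomon Go library (which, to my understanding, Minio's erasure code builds on). The object payload and the choice of which five "drives" fail are invented for illustration:

    package main

    import (
        "fmt"
        "log"

        "github.com/klauspost/reedsolomon"
    )

    func main() {
        // 6 data + 6 parity shards, as in the 12-drive example above.
        enc, err := reedsolomon.New(6, 6)
        if err != nil {
            log.Fatal(err)
        }

        object := []byte("pretend this is a much larger object payload")

        // Split the object into 6 data shards (with room for 6 parity shards)...
        shards, err := enc.Split(object)
        if err != nil {
            log.Fatal(err)
        }
        // ...and compute the parity shards.
        if err := enc.Encode(shards); err != nil {
            log.Fatal(err)
        }

        // Simulate losing any 5 of the 12 "drives" (data or parity).
        for _, lost := range []int{0, 2, 4, 7, 11} {
            shards[lost] = nil
        }

        // Rebuild the missing shards from the 7 that survive.
        if err := enc.Reconstruct(shards); err != nil {
            log.Fatal(err)
        }
        ok, err := enc.Verify(shards)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("recovered after losing 5 of 12 drives:", ok)
    }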

Minio also transparently detects bit rot, working alongside the erasure code, using high-speed BLAKE2 hash-based checksums (https://github.com/minio/blake2b-simd).
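
To illustrate how checksum-based bit-rot detection works in principle, here is a small sketch; it uses the standard golang.org/x/crypto/blake2b package as a stand-in for the SIMD-optimized package linked above, and the block contents are invented:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    func main() {
        block := []byte("an erasure-coded block as written to disk")

        // Store a BLAKE2b-512 checksum alongside the block at write time.
        stored := blake2b.Sum512(block)

        // Simulate bit rot: a single flipped bit on disk.
        block[0] ^= 0x01

        // On read, re-hash and compare; a mismatch means silent corruption,
        // and the block can be rebuilt from the remaining erasure-coded shards.
        current := blake2b.Sum512(block)
        if !bytes.Equal(stored[:], current[:]) {
            fmt.Println("bit rot detected; reconstruct this block from parity")
        }
    }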


I tried minio and was really impressed with how easy it is to get started.

But I couldn't figure out how to scale it or make it reliable. Having my entire system depend on a single server not going down is a non-starter for me. Do you consider this use case in scope?

I've been looking at Skylable instead. While setting it up is a little clunky, it's the only open source package I've found that explicitly supports replication across multiple datacenters - which is even more interesting to me than replication across servers within a single datacenter.


For reliability, you could start another Minio server process on a different machine and use the "mc mirror" subcommand (https://docs.minio.io/docs/minio-client-complete-guide#mirro...) to asynchronously copy data out. In this release, one needs to run "mc mirror" periodically using cron(8) or an equivalent tool on their platform. We are looking at providing a "-c" option to the "mc mirror" subcommand, which would perform continuous replication, i.e., there would be no need to run it periodically.
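
For example, until the continuous mode lands, a crontab(5) entry along these lines would re-run the mirror every five minutes (the "local" and "backup" host aliases here are hypothetical, set up beforehand with "mc config host add"):

    # m h dom mon dow  command
    */5 * * * *        mc mirror local/mybucket backup/mybucket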

Minio encourages a micro-storage architecture, and scalability is achieved by deploying many Minio server instances.

Hope that helps.

Disclaimer: I work at Minio.


To clarify my previous reply: in a cloud environment, scalability and multi-tenancy are achieved by spawning many instances of minio server, one per tenant. We are also working on a distributed minio server at the moment.
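
As a sketch of the one-server-per-tenant layout, it can be as simple as this (the ports and export paths are hypothetical; --address is a real "minio server" flag):

    minio server --address :9001 /export/tenant-a
    minio server --address :9002 /export/tenant-b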


Cool!

The continuous async replication feature you mention would be ideal, especially if it's set up so that I can use consul/etcd/... to automatically elect a "master". If you also added (1) a feature so that writes can be redirected to another instance, and (2) a feature so that I can direct SOME of my reads to the master (reads where I care about consistency), then you'd have a not-too-difficult-to-implement - but also very powerful - distributed version of minio. Maybe something along those lines is what you are planning.
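
For what it's worth, the election half of that could lean on an off-the-shelf recipe rather than anything Minio-specific. Here is a rough sketch with etcd's election API (go.etcd.io/etcd/client/v3/concurrency); the endpoint, key prefix, and instance ID are made up, and none of this is an existing Minio feature:

    package main

    import (
        "context"
        "log"

        clientv3 "go.etcd.io/etcd/client/v3"
        "go.etcd.io/etcd/client/v3/concurrency"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        // A session holds a lease that lapses if this process dies,
        // so another instance automatically wins the next campaign.
        sess, err := concurrency.NewSession(cli)
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        e := concurrency.NewElection(sess, "/minio-master")
        if err := e.Campaign(context.Background(), "minio-instance-1"); err != nil {
            log.Fatal(err)
        }
        // This instance is now the "master": route writes and
        // consistency-sensitive reads here; mirror to the rest.
        log.Println("elected master")
    }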

Running many instances of minio makes a lot of sense to me, and it seems lightweight enough (in resources like memory) that it would work. The only deal-breaker feature that looked to be missing was the ability to set a disk quota. In multi-tenant environments it's important to have resource limits for reliability reasons. I considered putting each minio instance in its own ZFS volume to handle disk quotas, but I would have had to write a lot of glue code to make that work from a deployment perspective.
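
For reference, the ZFS side of that workaround is a one-liner per tenant (the pool and dataset names here are hypothetical); the glue code is in wiring it into deployment:

    zfs create -o quota=50G tank/minio/tenant-a
    minio server /tank/minio/tenant-a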

Great work on minio - it's the least painful S3 server implementation to use that I've run across so far.


Good point, @devplusops. A disk quota should be built into Minio. Will add it to the roadmap.


While I commend the developers of Minio for building what appears to be a functional S3-API-compatible system in Go, it seems to me to be missing the key thing that makes S3 a compelling object/data storage solution -- the distributed part. A Minio developer (y4m4b4) talks about erasure coding, which in the AWS/S3 context normally refers to the way data is encoded and replicated across nodes to mitigate outright data loss as well as bit rot, yet their description states "You may lose roughly half the number of drives..." -- "drives", not "systems" or "nodes". This appears to be a single-node solution, or have I overlooked the documentation describing how to join nodes together into a cluster? The description given is much more akin to RAID, which is fine and useful for distributing data across disks connected to a single system.

I hope that this is just an early announcement of something that is going to mature into a fully distributed solution, or that it is made clear that this is like SQLite (Minio is to AWS/S3 as SQLite is to RDBMS systems [PostgreSQL, Oracle, etc.]) -- something intended to be smaller in scope and single-node only. Leaving this fuzzy will confuse many people, and may lead to someone depending on this system and later dealing with massive data loss when their drive or drives fail.

Could the developers of Minio please make a statement as to which direction they intend to go? Is this a single-node S3-API-compatible solution (which is valuable for a specific class of problems), or something that will eventually be designed to store data across 10s/100s/1000s of geographically distributed nodes, all working together to maintain some degree of availability and data integrity?

What's Minio going to be when it grows up? a) S3Lite b) S3


Wow, this looks interesting. Nice job, team Minio.


Thanks @unknownhad



