Jay Taylor's notes

An Open Source, Self-Hosted Heroku | Hacker News

Original source (news.ycombinator.com)
Tags: PaaS news.ycombinator.com
Clipped on: 2016-10-15

An Open Source, Self-Hosted Heroku (bitmatica.com)
436 points by stucat 2 days ago | 120 comments





I just migrated a hobby website to Dokku, which also markets itself as a self-hosted Heroku. I was curious how Flynn compared. Fortunately, there was a blog post giving an overview[1]. I get the impression that Flynn has all the benefits of Dokku, but has the ability to scale to multiple servers. It seems like Flynn is obviously better than Dokku, so I guess I get to do that this weekend.

Keeping up with new containerization tech is starting to feel like keeping up with new JS frameworks/tooling...

[1] https://flynn.io/blog/upgrading-from-dokku-to-flynn


Yeah, I really would be cautious. I just did the reverse (Flynn -> dokku) and found that even after re-deploying a few images on Flynn it wasn't cleaning up old images, so disk space was disappearing fast (I was hitting my 20GB Linode limit without even half of my apps deployed). So I ended up giving dokku a shot, and that thing is lean in comparison: I was able to set up all my apps (about 7 of them), including redeploying some of them many times to get them working, and they only ended up totaling around ~10GB on my server. That left plenty of room for me to also host my music library (~6GB) there too with my music player[1].

[1] http://benkaiser.github.io/stretto/


> re-deploying a few images on Flynn it wasn't cleaning up old images, so disk space was disappearing fast

I'm really sorry to hear that you switched off of Flynn. We're aware of this issue, and are in the process of fixing several things that can cause it (it only happens when not using an external blobstore backend like S3). Hopefully you'll try Flynn again at some point in the future!


Yeah, I did pop in on GitHub and report it to you guys[1], and I really appreciated the speed with which you were able to respond to issues. I would like to give Flynn a shot again in the future, but I'll probably at least wait until that issue is solved first ;)

[1] https://github.com/flynn/flynn/issues/3396


only "starting"? When I chat with my dev friends it sounds exactly the same, just swap nouns here and there.

> so I guess I get to do that this weekend

Why not wait until you encounter a problem that Flynn solves and Dokku doesn't?


From the tone of OP's post, it sounds like he finds it fun or at the very least interesting. Let's not nip curiosity in the bud.

Try out Flynn for sure, but from my experience with both Dokku is worlds simpler and more stable.

> Keeping up with new containerization tech is starting to feel like keeping up with new JS frameworks/tooling...

Well, if what you have currently is working fine, you don't need to change anything. You don't actually have to migrate your hobby website to Flynn.


I think a lot of us feel the same pain.

Has anyone had success with sticking to Dokku for small side projects on DigitalOcean/Linode/etc?


I can't sing the praises of Dokku enough these days. I'm a developer who ran a consultancy with a few partners for several years and now I have a day job. I used to maintain about 8-10 VPSes scattered all over, with Puppet, but when I took the full-time job I moved some clients off and the rest to a single dedicated server, still configured with Puppet, and kind of let it sit for a while.

This wasn't a good place to be, so I decided to clean it up, and instead of running with Puppet again I decided to check out PaaS solutions. I couldn't be more pleased with Dokku. I'm currently running JIRA, Confluence, Rails, Node.js, Go, PHP, and several static sites (built with Middleman/Jekyll/etc.) on a modest VPS at DigitalOcean, and I love that I don't have to think about overhead anymore when spinning up new projects. Just `dokku apps:create` and `git push` and it's done. The killer features for the migration have been the storage feature for specifying mounts for apps that aren't 12-factor friendly, and the biggest one of all is the Let's Encrypt plugin. It's so easy to set up SSL/TLS for new apps that it almost feels wrong. (A rough sketch of the commands is below.)

Dokku is amazing, and I'm looking forward to sending some funding to the maintainers.

Edit: forgot to mention that in addition to all those apps, I'm also running the necessary backing services: MySQL, Postgres, Redis, Elasticsearch, and Kibana.
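
For anyone curious, here is a minimal sketch of the Dokku workflow described above. The app name, domain, email, and mount paths are made up, and the plugin commands can vary slightly between Dokku versions:

    # on the Dokku host: create the app and (optionally) a persistent mount
    dokku apps:create myblog
    dokku storage:mount myblog /var/lib/dokku/data/storage/myblog:/app/uploads

    # install the Let's Encrypt plugin and enable TLS for the app
    sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
    dokku config:set --no-restart myblog DOKKU_LETSENCRYPT_EMAIL=you@example.com
    dokku letsencrypt myblog

    # on your workstation: deploy with a plain git push
    git remote add dokku dokku@your-server.example.com:myblog
    git push dokku master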


I was on dokku pretty much from the start but my side projects grew to a point where I needed a little more control so I switched to docker with compose files. Dokku is really nice though - they did a great job making deploying as easy as possible.

Can you share any more details on your deploy process? Do you have a production compose file that you use with images pushed to a registry or something similar?

Nothing fancy. I don't deploy often, so I just ssh in, pull down the latest code, and run `docker-compose -f docker-compose.prod.yml up -d` or something like that.
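
Spelled out, that kind of manual deploy looks roughly like the following, assuming the code lives in a git checkout on the server and the prod compose file builds or pulls the images it needs (the host and path are placeholders):

    ssh deploy@my-server.example.com
    cd /srv/myapp
    git pull origin master

    # pull/rebuild images and recreate only the containers that changed
    docker-compose -f docker-compose.prod.yml pull
    docker-compose -f docker-compose.prod.yml up -d --build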

I currently have 16 apps deployed via Dokku on my 8GB Linode and am pretty happy with it. I have a wide variety of apps installed: Node, Django, Rails, and Elixir (Phoenix).

I'd also back Dokku for small side projects. I have 4 small Slack bots running on my reserved t2.micro instance and I'm pretty happy with it.

Ah, if only people backed Dokku monetarily[1]. I'd probably get my ass in gear and release the multi-server functionality I prototyped over the summer.

Nice to see that others are finding Flynn useful though, there is plenty of space for PaaS offerings, and I truly hope they are financially successful.

[1] My experience in this area is that it's hard to get people to pay for something that is free, which disincentivizes work/releases. I'm not surprised that many use the free version of Flynn, or that a ton of Heroku users left once they stopped getting free resources 24/7. Dokku does take donations - https://opencollective.com/dokku - but to be quite honest, it's almost certainly nowhere near even 1% of the amount of money we've saved our users.


As an aside, you might consider a tool similar to the docker tooling that can help manage multiple dokku hosts on different cloud platforms... People seem to be more inclined to pay for things like that.

Joke's on us, Heroku side projects have cost me like a grand since you guys launched :/

People seem more willing to pay for features upfront, so a Kickstarter might be a better way to fund the development of the multi-server functionality. I have yet to use Dokku for anything, but I'd contribute to that Kickstarter. (Though I have to say that when I evaluated Dokku I loved how simple it was compared to the alternatives, so if Dokku gets multi-server support I hope it isn't at the price of its simplicity.)

I just want to say that while I haven't contributed financially, I do appreciate the work that has gone into Dokku; it's a great platform for self-hosting smaller projects on a single VM.

I've had success going from heroku to Dokku on DO. Pricing is pretty much the same and Heroku has mature features so I may switch back at some point.

Are you saying Heroku costs are the same as Dokku on DO? How so?

My app is very small. Typical charges on a $7 hobby dyno were $3-$6 per month. DO is $5 for the droplet.

They're two completely different things. I don't know why people think they're the same by any means.

What are two totally different things? Dokku on DO is pretty comparable to Heroku (minus add-ons).

I'm confused, please explain.

This is a great overview of the Kubernetes/self-hosted PaaS landscape, but it's also clearly a bit of marketing for Flynn. Flynn looks awesome, but from my research and test runs Convox Rack (https://github.com/convox/rack) looked at least as awesome at a glance, and I got it closer to working more quickly than Flynn, though it helped that Convox builds on top of docker-compose.yml files, which we were already using for development.

For better or worse, we're going the full Kubernetes route though, and are still working out what deployment etc. looks like. So far a lot of un-DRY-looking YAML files have inspired a spike on a tool to help create and maintain Kubernetes YAML[1], which others reading this are welcome to try; I'd love to hear about better alternatives or get feedback/pull requests/etc. I may push up a branch I'm working on later that partially automates creating deployments from a Procfile, but we discovered that having separate Procfiles may make applying env-variable-only changes harder (see the sketch below). Curious to hear if others have any guidelines on that; it's so easy to edit an ENV var on Heroku that we took it for granted.

[1] https://github.com/stellaservice/ky
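
For contrast, here is roughly what an env-var-only change looks like on each side. The app and variable names are hypothetical, and on the Kubernetes side this assumes the variable lives directly in the Deployment spec rather than in a ConfigMap:

    # Heroku: one command, a new release rolls out immediately
    heroku config:set FEATURE_FLAG=on -a myapp

    # Kubernetes: edit the env section of the Deployment manifest, then re-apply
    $EDITOR deploy/myapp-deployment.yaml
    kubectl apply -f deploy/myapp-deployment.yaml
    kubectl rollout status deployment/myapp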


I'd love to hear about any trouble you ran into while trying Flynn so that we can fix it! Feel free to send me an email: jonathan@flynn.io

Convox is a great pick if you want to use AWS-specific services for your entire stack. Instead of using portable open source components, Convox uses AWS services wherever possible, and acts as a lightweight coordinator to combine them into a platform.

Flynn has no external dependencies on cloud features, so you can run it anywhere, whether that's on your laptop, AWS, Google Cloud, a VM somewhere, or bare metal in a colo or private datacenter. We also include RDS-like highly available database appliances so that your whole stack is portable and you are not locked into a single hosting provider.

edited to add: We didn't know about this post until Bitmatica posted it; this is just a great post from a happy user, not planned Flynn marketing!


Yeah, I definitely think you are both top-notch projects, but the article focused on you as the end solution and mentioned so many other things en route that I felt people should see one of the other high-profile contenders, though I forgot to add that it's AWS only (added below). It's been a few months since I timeboxed an hour or two to play with each of the 3 I narrowed down as most likely to offer the easiest transition off of Heroku without being single-node like Dokku (you, Convox, and Tsuru were the 3 I tried; I got furthest with, and was most impressed by, Convox and Flynn). It wasn't a totally fair comparison, as I think I ran Flynn and Tsuru locally via Vagrant, because I could, but had to use real AWS for Convox... but this was all just because I was hoping to make a case at work that we should try one of these actual PaaS-like solutions rather than go for a non-PaaS solution like Kubernetes, as I knew it would not be a trivial transition, as the article's author discovered. If I ever have time to try it again I'll go a little further and provide feedback if I have any!

Awesome, thanks for the feedback!

From looking at your KY tool, it sounds like the pain point you're having is that there's a lot of boilerplate repetition around loading configmaps and secrets into your containers, is that correct?

If that's not far off the mark, have you looked at using an object-oriented k8s client? I've been playing with a pattern where I create classes which know how to render themselves into k8s API objects, and that gives the ability to use language-level looping to create, for example, lists of env variable definitions in the resultant spec.

I'm writing python apps so PyKube[1] is the base client that I've used, but there are clients in lots of languages at this point.

I also tried out Helm, which looks like it has a way of generating k8s yaml files using Go templates, and given a set of config parameters[2]. Unfortunately my use case was a bit too dynamic to fit in that mould, but this seems like it could be useful for many things.

[1]: https://github.com/kelproject/pykube/ [2]: https://github.com/kubernetes/helm/blob/master/docs/examples...
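
A rough sketch of the Helm flow mentioned in [2], assuming the Helm 2-era CLI; the chart and value names are made up and the exact flags may differ between Helm releases:

    # scaffold a chart, then fill templates/ with Go-templated k8s manifests
    helm create myapp

    # render and install the chart, overriding values at install time
    helm install ./myapp --set image.tag=v42 --set replicaCount=3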


+1 for convox (https://convox.com/) we've been using it successfully for a number of our clients - it's easy to setup and run.

Yeah, I initially linked the .com, then thought people might appreciate seeing the product, Rack, and its open source repo more :-) Weirdly, when I set it up, it looked like with only a small change or two to our existing docker-compose.yml it "just worked", but the funny thing was that once it was running I couldn't quickly find the web service exposed anywhere to actually test... it deployed painlessly, though, and showed up running on several EC2 instances behind a load balancer in our AWS console! Oh, that's another point: unlike Flynn, Convox is AWS only, so it's more opinionated/less flexible, but for many people that's not really a downside if you only plan to run it on AWS anyway. It's also built by a bunch of former Heroku engineers, including David Dollar (https://github.com/ddollar)
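
For anyone evaluating, the Convox flow described above is roughly the following; treat the subcommands as approximate since the CLI has changed over time, and the app name is hypothetical:

    # install a Rack (the open source runtime) into your AWS account
    convox install

    # create an app and deploy from a directory containing a docker-compose.yml
    convox apps create myapp
    convox deploy --app myapp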

+1 for convox - takes the headaches out of AWS application orchestration best practices.

We've been using Convox in production too, after some time on Heroku and later AWS (Beanstalk and CodeDeploy + other services), and I must say it is working like a charm for us. For us it's the right balance of control, ownership, and ease of deployment and management. Plus, Convox has provided great service when we needed any extra support. I can't comment on Flynn though.

As someone who's been using Flynn in production for a month or two, here are some things to be aware of. (Devops is not one of my strong points, so keep that in mind.)

You want to use an external blob store; by default Flynn uses its own Postgres to store blobs. However, it also doesn't do any garbage collection, so we ran out of disk space on our cluster, causing Postgres to go into read-only mode. This prevented us from pushing updates or changing any app config settings (ENV, scaling, etc.). Thankfully we were hosting our own DB off-cluster or that too would have been read-only.

The default memory limit is 1GB, and for our app, when it runs out, it just crashes, leaving no obvious error messages to be found.

Obviously RTFM as well, (especially this part https://flynn.io/docs/production) we had some other issues that could have easily been prevented.

Huge shout-out to the team that hangs out in their IRC channel; they were a massive help in solving the problems we came across.

Using it when it works is lovely; we switched off of Heroku recently and it's been a very similar experience.


> You want to use an external blob store; by default Flynn uses its own Postgres to store blobs. However, it also doesn't do any garbage collection

The lack of garbage collection by default will be fixed in next week's stable release, we're really sorry that it caused you pain!

> The default memory limit is 1GB, and for our app, when it runs out, it just crashes, leaving no obvious error messages to be found.

Indeed, we're working on documenting this and exposing out-of-memory events in the app logs so that this is obvious.


Good luck with the out-of-memory handling. From personal experience, that's easier said than done.

Here's the issue tracking this: https://github.com/flynn/flynn/issues/3375

> You want to use an external blob store; by default Flynn uses its own Postgres to store blobs. However, it also doesn't do any garbage collection, so we ran out of disk space on our cluster, causing Postgres to go into read-only mode. This prevented us from pushing updates or changing any app config settings (ENV, scaling, etc.). Thankfully we were hosting our own DB off-cluster or that too would have been read-only.

By garbage collection are you referring to the binary blobs for built slugs?


Correct. By default app slugs are stored as large objects in the built-in Postgres cluster, to allow deployment without external dependencies. For production we recommend moving to AWS S3, Google Cloud Storage, or Azure Storage.

Garbage collection of old slugs is supported though it isn't on by default (it will be in the next stable release).


"However, a primary goal of our solution was to automate the process of going from code in a repository to a deployed application in our cluster — much like Heroku. Kubernetes did not appear have any mechanism to support this type of workflow, nor a handful of other features we had hoped to find."

Just in case someone is feeling the same, Deis[1] provides exactly this. I haven't tried their software, but I've read a lot of positive things about it lately.

[1] https://deis.com/


I've been using Deis in production for ~4 months. It's been great! If I can fit something into a buildpack it's fast to iterate; for more complex concerns you've got Kubernetes backing it up.

I run Flynn as a local staging/CI service in our networking closet. It lets us rapidly try out new apps/services, in an environment that approaches production, on a couple of machines we had lying around.

There are still a couple of rough edges, specifically around upgrading and persistence across power failures (which are unlikely in a datacenter), but aside from those things, all in all excellent.

In particular, though, the flynn.io team are particularly excellent in terms of support.


> In particular, though, the flynn.io team are particularly excellent in terms of support.

Seconding this. I've had many (like 10 or 20) very positive experiences with them, where I went from "OMG WORLD IS ON FIRE" to "oh, sweet, I just delete this row in postgres?"


Thanks for using Flynn!

> There are still a couple of rough edges, specifically around upgrading and persistence across power failure

Indeed, we have been finding and fixing bugs in these two areas lately, and things have been improving. Next week's stable release will have a bunch of fixes for issues we found. We also recently added a bunch of new test coverage for backup/restore, which should ensure that future upgrades go smoothly.

As always, if you see any issues please let us know so that we can fix them!


We at Globo.com[1] were using Docker in production before it was considered ready for prod, with the tsuru PaaS, as features in tsuru (like self-healing) made Docker containers easy to handle, manage, and scale. Tsuru is more than 4 years old and is really stable and reliable; we serve huge traffic across a large number of apps, and it has lots of important features like multi-cluster support (pools), metrics, auto scaling, granular user permissions, etc. We are investing a lot in this project, as anyone can see in our repo[2], and it has completely changed the way devs, (dev/)ops, and product people work here. Tsuru was already easy to install (all in one VM[3,4]), and now we have an experimental way[5] to install it on any IaaS supported by docker-machine.

[1] Globo.com is the internet arm of the Globo Group, the biggest broadcast television network in Brazil and the second biggest in the world. [2] https://github.com/tsuru/tsuru [3] https://github.com/tsuru/tsuru-bootstrap [4] https://github.com/tsuru/now [5] https://docs.tsuru.io/stable/experimental/installer.html


Since there are some Flynnsters hereabouts: do you run buildpacks unmodified? How do you deal with fully disconnected environments?

I work on Cloud Foundry buildpacks, several of which are downstream from Heroku's. So I'm professionally interested in seeing another perspective on how to drive them.


Yeah, we ship unmodified Heroku buildpacks, plus one Cloud Foundry buildpack (https://github.com/cloudfoundry/staticfile-buildpack).

So far no one is using Flynn in a fully disconnected environment, and we're aware of this limitation. My take on this is that we need a replacement for buildpacks that allows for repeatable/functional builds (think Nix for app deployment).


I see, that definitely makes it easier. The Heroku code (quite reasonably) assumes 100% connectivity.

One gotcha: if you are snapshotting images after building, your images are not truly reproducible. Heroku occasionally swaps out the binaries. This surprised us once or twice, and is one reason we wound up building all our own runtime binaries[1].

When you get to disconnected environments, look us up. We built a whole set of tooling, compile-extensions[2], to make it pretty close to seamless. We intercept URLs and transform them either into our own URLs or into local copies of the files.

(edit: actually, you could just switch to the Cloud Foundry buildpacks, since they have an identical detect/compile/release cycle)

I agree that buildpacks are not the super long term solution. One area being explored by engineers at Pivotal and IBM is to give a buildpack-like experience to OCI layers[3]. I'm sure they'd be happy to work with you on this as well.

[1] https://github.com/cloudfoundry/binary-builder [2] https://github.com/cloudfoundry/compile-extensions [3] https://www.youtube.com/watch?v=DSTT0przx4g&list=PLhuMOCWn4P...


I don't compare Flynn with Heroku or Dokku. I compare it with Kubernetes.

When it comes to dev UX, I'd agree that Flynn's is better, though both are easy to use. But I'd point to Convox for great dev UX in something that deploys only to AWS, and that can use a dockerfile/binary artifact instead of git push (I've been a minor contributor to Dokku and Convox, using both in prod on a few different projects).

I mention k8s because "self-hosted" is mentioned in the title.

Personally, the main reason I've been doing personal projects in k8s over the past 6 months or so is that 1) it offers out-of-the-box container networking and dns-based discovery, and my side projects are all distributed software so it makes it easy to get nodes talking, and 2) minikube runs a cluster where I can actually scale my apps up and down and test out distributed software on my laptop similarly to how it will be deployed.

(I'm investigating using Weave's ECS AMI, so that I can also get container discovery and networking in Convox, because I love the Convox dev UX so very much. But I'm also thinking, more and more, about how I can automate a smoother dev UX for my k8s projects, because I also like being able to easily run a real cluster locally in minikube, and Convox can run locally but with only one instance of each app).
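
A minimal sketch of the local minikube workflow mentioned above, assuming minikube and a recent kubectl are installed; the deployment name and image are placeholders:

    # start a single-node cluster locally
    minikube start

    # run a workload and scale it up and down to exercise the distributed bits
    kubectl create deployment myservice --image=myorg/myservice:latest
    kubectl scale deployment myservice --replicas=5
    kubectl get pods -w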


I'm curious about any compelling reasons I should use something like Dokku or Flynn if I already prefer to manage my own infra with something like Ansible or Salt. Serious question, not a sarcastic rhetorical remark about this type of tool :)

I have a really extensive Ansible playbook for setting up my stack; nginx with extra modules, postgres, redis, and others like fail2ban. It all targets CentOS and also configures SELinux when appropriate. It all works well, until I want to do something that doesn't fit the mold that I've put myself in.

For example, I wanted to get off Google Analytics and try out Piwik, which uses PHP and requires MySQL (Postgres isn't supported). It would have been really difficult to write out a bunch of Ansible config for PHP and MySQL, both technologies I'm not familiar with and whose best practices I don't know. So I'm still using Google Analytics. A similar story happened when I wanted to host a Tor relay.

The allure of just pointing Dokku/Flynn at a pre-made Dockerfile for Piwik/Tor/ownCloud/etc. is pretty compelling. It lets me mix and match technologies without feeling like I'm polluting my server with a bunch of random services and tech. Containerization provides just enough separation to make me happy.

I'm not sure if Dokku or Flynn is the golden goose I'm after, but it seems like a step in the right direction.

I still use Ansible to do basic provisioning like users and SSH keys, setting up Fail2ban, and installing Dokku.
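
For reference, the manual install that an Ansible task for Dokku would typically wrap is just the upstream bootstrap script; the version tag below is only illustrative, so pick a current release:

    # download and run the Dokku bootstrap installer, pinned to a specific tag
    wget https://raw.githubusercontent.com/dokku/dokku/v0.7.2/bootstrap.sh
    sudo DOKKU_TAG=v0.7.2 bash bootstrap.sh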


Hearing your workflow/tooling is handy and I may borrow an idea or two while playing around, thanks for writing this up :)

Great question!

We designed Flynn to solve a bunch of the common problems around running apps in production.

This means that we're working full-time just on the automation and stability of your production environment. As you grow and scale, Flynn will scale with you and provide integrated features without any additional effort.

Here's an incomplete list of things that Flynn provides out of the box:

- Automatic high availability. Everything in Flynn is highly available, as long as you are running three or more servers. Additionally, it's designed to fail gracefully, so even if there's a network partition or some hosts fail, everything will keep working if at all possible.

- git push deployment, after installing Flynn you can run `flynn create && git push flynn master` and your app will be automatically built and deployed.

- Polyglot apps. Want to deploy a Phoenix app written in Elixir? You can do that just by specifying the appropriate buildpack. Everything else works exactly the same.

- Zero-downtime deployment with release management and easy rollbacks. If a new version or configuration variable causes the app to crash at boot, Flynn will automatically detect this and stop the deploy before any traffic hits it. Deploying new versions of the application does not cause user-visible downtime.

- Easy app configuration. `flynn env set FOO=bar` rolls out a new immutable release with the configuration change. You don't have to edit any files, and you can roll it back later with `flynn release rollback`.

- Built-in HA databases. `flynn resource add postgres` configures your app with a PostgreSQL database in a cluster that is highly available and already configured with replication and safe automatic failovers. You can do the same to get a MySQL or MongoDB database.

- Automatic load balancing and TLS. Flynn automatically load balances HTTP and TCP traffic, and supports HTTP/2 out of the box. You can add a TLS certificate with a single command, and all changes to the routing configuration happen immediately without restarts or downtime. Want to send a subpath to a different app? You can do that in a single command.

I could go on and on like this, but you probably get the idea. Of course, it's probably possible to build every feature of Flynn that you want using your configuration management tool of choice, but that would be a huge amount of time not spent developing the actual applications that you want to deploy with it.
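
Pulling the commands from that list together, a typical first deploy plus a config change looks roughly like this (the app name and variable are hypothetical):

    # from a checkout of your app: create it on the cluster and deploy
    flynn create myapp
    git push flynn master

    # attach a highly available Postgres and set some configuration
    flynn resource add postgres
    flynn env set FOO=bar

    # if the new release misbehaves, roll back
    flynn release rollback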


I work on Cloud Foundry, a competing system. Flynn is unarguably ahead in certain features (and vice versa, in my biased opinion).

I agree with the thrust of your comment here: rolling your own PaaS is hard. Just plain old hard. There's so much, so damn much that you wind up having to do.

Before I worked on CF I worked in Pivotal Labs. I got to see various custom home-grown PaaSes. Some were brilliant. Some were terrible.

Every single one of them was a millstone.

A lot of people don't realise it yet, but the Linux of our time has been written -- in the sense that nobody writes an operating system if they're not in the OS business.

I don't know if it's going to be Cloud Foundry, or Flynn, or OpenShift or some mix of these, but we have already passed the point at which it makes rational engineering sense for the 99% of engineers to build their own.

Disclosure: I work for Pivotal, the majority donor of engineering effort to Cloud Foundry.


Makes sense to me! Thanks for taking the time to write this reply - I'm going to have to play around with Flynn

Probably user experience. If you've nailed that with your own infra and made it easy to manage the lifecycle of services for your developers, then there isn't really any need to "reinvent the wheel". Infrastructure doesn't stop working just because there isn't any docker in it.

I looked into all the "self-hosted Herokus" a few months ago. One feature I found they all lacked is multitenancy, meaning there is no security model in place that would let you trust hosting multiple pieces of code from different clients without them hacking each other.

I'm not talking about the deployment specifically, but rather isolating the code once it is deployed.

Am I missing something there?


You aren't missing anything here; it's necessary, for reasons best explained -- by way of slight analogy -- by Sir Humphrey Appleby:

> Jim Hacker: People can wait in the lobby. Or in the state rooms.

> Sir Humphrey Appleby: Some people. But some people must wait where other people cannot see the people who are waiting. And people who arrive before other people must wait where they cannot see the people who arrive after them being admitted before them. And people who come in from outside must wait where they cannot see the people from inside coming in to tell you what the people from outside have come to see you about. And people who arrive when you are with people they are not supposed to know you have seen must wait somewhere until the people who are not supposed to have seen you have seen you.

This is one of the priority engineering efforts for Cloud Foundry at the moment. People want it.

Disclosure: I work for Pivotal, the majority donor of engineering to Cloud Foundry. I guess that makes us competitors to Flynn.


Flynn's upcoming User/ACL model should cover multi-tenancy AFAIK.

Yeah, our security roadmap will get us to multi-tenancy eventually.

Due to the security posture of the Linux kernel, we won't recommend running untrusted code side-by-side on the same hosts as more sensitive workloads, but we plan to harden everything to the maximum extent possible.


  { tldr : 'https://flynn.io' }

Looks like this is a great project.

We are using a powerful dedicated server to host multiple sites/apps, the way shared hosting providers do. Maintenance and upgrades are so painful that I am trying to move to something else.

I looked into Docker, but it seems things are still not stable. Another approach I am looking into is KVM virtualization.

Actually, all I want is something like Heroku (but self-hosted, or on German providers, because our clients are German and that's important) where we can host each app in its own container.

So far, this meets all the requirements.


That's precisely what we wanted, and we tried a lot of different options. Flynn was the answer, definitely worth a try.

Facing a similar situation a couple of years ago, the solution I pushed for was just a few (3) Dokku servers set up behind an nginx server acting as a reverse proxy... the reason for multiple servers in our case was redundancy first; this split allowed us to do the exact same deploy to three servers, run tests, update the nginx config, then take down the old version(s) of the apps.

Flynn and Deis definitely seem interesting, as does the tooling that CoreOS and Docker themselves are working on.


Take a look at dcos.io (Mesosphere's Data Center Operating System). DC/OS provides a lot of the missing features you're after:

- Persistent workloads

- Fine-grained ACLs for RBAC

They're also working on shipping a consolidated logging and metrics API in winter 2017, which will enable users to get workload- and host-level logs and metrics into almost any log and metrics aggregation solution (in your case, ELK would be easy to ship to).

Best of all, it runs on top of production-proven scheduling software, Apache Mesos, which has wide community adoption and support.


I would love to have something like AWS Lambda that can be self hosted on something like this or Flynn.

I know a lot of enterprises that are interested in using serverless arch (the framework serverless[0], not the literal definition of not using servers) but don't want to use AWS or other public cloud providers.

[0]: https://serverless.com


This would be great!

The open source Parse server (https://github.com/ParsePlatform/parse-server) already works on Flynn, and we can't wait until someone implements an open source system like Lambda.


Me too! We've been considering migrating off native Lambda to Serverless or Chalice[1], but I keep thinking Kubernetes is perfect for self-hosting this (or could be? Maybe?). I have one thing bookmarked in this direction, Funktion[2], and another that looks less... mature, OpenLambda[3], but I would love to hear thoughts/other options if anyone has tried any of these.

[1] https://github.com/awslabs/chalice [2] https://github.com/fabric8io/funktion [3] http://www.open-lambda.org/


Is there a way to dynamically auto-scale hosts? One of my apps has very bursty traffic and would require scaling from 2 to 10 host servers and back down over a period of several hours. I can't see this referenced in the docs. I can see that you can scale processes, but that's not increasing the host resources available, right?

Flynn itself doesn't communicate with infrastructure APIs currently, but you could hook this up yourself so that there is a base set of three servers that are always running, plus an autoscaling group that watches your metrics or a schedule and adds/removes servers.

I'd be happy to explain this more on IRC if you're interested (#flynn on Freenode).


Thanks for the reply. Is this something that you have planned in the future?

This aspect is a really big part of Heroku/AppEngine/Elastic Beanstalk and saves me a tonne of money each month due to the auto-scaling that we get (we're on AppEngine). Is there a reason projects like Flynn have not tackled it? Or is it just a question of time?


Yeah, it's definitely something we want to support in the future. Autoscaling requires building some components that are aware of and can communicate with the underlying infrastructure APIs (AWS, GCP, Azure, DigitalOcean, OpenStack, etc.), combined with app/host metrics. It's just a matter of putting in the implementation effort. We'll get there eventually.

Great stuff - thanks - good luck!

> Is there a reason projects like Flynn have not tackled it? Or is it just a question of time?

Like most useful production software, coming up with industrial-grade implementations is harder than it looks. The tricky part is picking the right basic metric.

CPU load is often a bit misleading.


maybe machine learning is useful here?

Probably overkill when the metrics are so easy to gather.

I think old-school SPC techniques will take us a lot of the way.

And even before that, asking people what they already watch is good too.


I have been using Dokku for a year for personal side projects and it has been awesome. Each release brings good features, fixes bugs and in general improves it.

Setting it up could never be easier (read https://glebbahmutov.com/blog/running-multiple-applications-...) and the large number of plugins for databases is a huge plus.
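
As an example of the database plugins, the official Postgres plugin workflow is roughly the following; the service and app names are made up, and plugin commands can differ slightly by version:

    # install the plugin once per host, then create and link a database
    sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
    dokku postgres:create mydb
    dokku postgres:link mydb myapp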


Why not just use Heroku and actually build your product?

Sometimes I have hobby projects that need more than the free tier of Heroku. I get a lot more bang for buck somewhere like Vultr or Digital Ocean compared to Heroku.

The standard tier at Heroku is 512MB RAM for $25. Over at Vultr, I can get 2GB RAM for $20. I also get 45GB of SSD, so I don't have to pay for something like Amazon S3 to store uploaded files.

If you're building an actual product, I agree with you, pay Heroku and focus on building your product. However, there's a lot of people looking to host smaller projects that wouldn't break even financially if they used Heroku.


Since ~mid 2015, the Hobby tier for Heroku has been $7 / month, and since last month, includes SSL.

...which is for hobbies, not businesses.

If Heroku works for you, that's fine, but for lots of people it doesn't. For example, Heroku only supports HTTP, not TCP. There are a number of technical limitations Heroku places on apps that other platforms like Flynn don't.

There are lots of reasons why users need a different, or especially an open source, PaaS.

Some users run into scaling problems when their products grow beyond a certain point. Others want to have more control over their infrastructure for compliance, governance, or other administrative reasons. Others have huge deployments and want to save money by using their own cloud accounts.

It varies from customer to customer, but for many Heroku isn't an option or isn't the best option.


Totally with you on this - all pretty valid points; but, I'd say Heroku does still work for a lot of people, especially if they can get past the cost (which, imho is worth it from a focus and time point of view).

I used Heroku to get my project off the ground, but costs are adding up now. What holds me back from migrating off Heroku is I don't have time to learn devops properly, kubernetes, maintain servers etc.

This looks like a nice middle ground for me. I should be able to migrate to Flynn on DigitalOcean pretty easily (easy deployment of the Flynn infrastructure, same buildpack system). I'm sure there'll be regular maintenance involved, but it looks like nowhere near as much as a roll-your-own Kubernetes cluster.


Down votes, but avoiding answering the question, classy!

You should check out Cloud Foundry -> https://cloudfoundry.org

It's open source, multi-tenant, and widely tested (used) by big companies.


How would Flynn compare to Cisco Mantl (https://blogs.cisco.com/cloud/mantl-version-1-2-released-kub...)? Mantl is open source, supports many cloud platforms via Terraform (including OpenStack and VirtualBox), schedules with Kubernetes & Mesos, etc., etc.: all the buzzwords included :-)

BTW Met the Flynn team at 32C3, nice guys indeed.


Anyone have experience with how this compares to Deis (Workflow)? Last week I spent an evening getting a small cluster set up with Kubernetes and Deis. It looks to work very similarly to Flynn.

When we started in 2013, the only open source scheduler available was Mesos and the ecosystem didn't have community efforts like Kubernetes, so we had to write our own components to build Flynn.

Flynn is designed to be an end-to-end solution for production deployment, and all of our components are created to work together. The whole system is self-bootstrapping and self-hosting, so installation is easy, and the same APIs are used to manage the whole platform as are used to manage apps deployed on it.

In addition to the twelve-factor stateless webapps that Deis Workflow supports, Flynn also includes highly available database appliances with safe, automatic failover (currently PostgreSQL, MySQL, and MongoDB with more coming in the future). We also have a bunch of security features coming over the next few months like Let's Encrypt support and flexible user authentication with 2FA and very granular access control.

If you don't need or want our database appliances and you are comfortable with Kubernetes and happy to install and operate it, then Deis Workflow is a good option. If you don't care about using Kubernetes specifically, Flynn is a good pick as it is easier to get up and running with.


> When we started in 2013, the only open source scheduler available was Mesos and the ecosystem didn't have community efforts like Kubernetes, so we had to write our own components to build Flynn.

I've taken to referring to this as "Not Invented Yet Syndrome". It's hard to base an architecture on a subsystem that doesn't exist.


Is there any HIPAA story for Flynn?

You can do HIPAA on AWS yourself or use (expensive) vendors. However, both of these routes lose you the management/deployment abilities of things like Heroku/Flynn.


It seems to me, the security model needs to be similar to using EC2 itself... you probably shouldn't run apps for multiple clients on the same host instance, but you can run multiple services for the same client, and provision version upgrades on a given client.

The Docker security sandbox has had a few issues, so you probably should not run sensitive data for multiple partners on the same EC2 instance. YMMV though.


There are healthcare companies that already use Flynn today, though not for HIPAA compliance specifically.

Compliance is a really interesting vertical. As we make progress on our security roadmap, Flynn will become a very compelling option for environments like HIPAA, PCI, etc. especially when combined with clouds like AWS that are also compliant.


Did anyone check out https://convox.com/ - which is a more direct alternative to Heroku?

I read some of the Convox docs a while ago, and the idea I got is that it is an alternative, open Heroku implementation, which is nice, but you're still locked into Amazon AWS (and its prices).

https://www.quora.com/What-are-some-open-source-Heroku-alter... has some more answers (I just added an answer). As of today, the list (in no particular order) looks like: Convox, Flynn, Deis, Dokku, Tsuru, Apache Stratos, cloudify-cos, OpenShift, etc.

I too have been on a journey to create a Heroku for myself. I finally settled on using a Rancher + Drone CI combo.

You can watch a video of my setup here: https://www.youtube.com/watch?v=zxPB2v-tWvQ&list=PLjQo0sojbb...


Anyone using Erlang/Elixir on Flynn? Will distributed message sending between deployed apps work?

Yeah, check out this blog post about how to deploy Phoenix: http://nsomar.com/how-to-use-flynn-to-deploy-a-phenix-app/

Distributed communication should work fine, feel free to ping us on IRC or GitHub if you run into any trouble or have any questions!


It isn't clear from the intro docs, but what OSes does Flynn run on?

The install guide mentions Ubuntu. Does it work on FreeBSD? Does it work on other Linux distributions from the Red Hat family (Fedora, CentOS, RHEL, etc.)?


Currently Flynn clusters are expected to run a Debian-based distro. Out of the box it works on Ubuntu 14.04, but you can get it running easily on Debian or Ubuntu 16.04, and it likely also works on RHEL-based distributions.

The CLI can be installed on any Linux distribution, FreeBSD, Windows and OSX.

Disclaimer: I work on Flynn.


Curious if the folks looking for an "open source Heroku" who found Flynn also looked at open source Cloud Foundry... and what turned them off from CF.

I've been a part of running Cloud Foundry in production, both on-prem and in AWS. From a user (developer) perspective, it's pretty magical: throw an app at it, and it runs. On the infrastructure side, the footprint is huge and has a ton of moving parts. If you have an SRE-type team to keep it alive, and your organization is cool with major multi-tenancy, I'd say it's good enough for now. If it's dev-run, or if you want tenant scoping (e.g. one CF instance per business unit, or some other isolated kind of microservices setup), I would absolutely not recommend it.

They are working on more lightweight versions. CF is pretty "enterprise," with the good and the bad that entails.


The pivotal shilling on hacker news is one reason.

Anyone have experience with DC/OS? It runs on Mesos, not Kubernetes, but in any case it seems very full featured.

https://dcos.io/


One more PaaS solution I found today: https://hasura.io/ - built on Kubernetes and Docker. Thought this had some relevance here.

no autoscaling.

neat


TLDR: We tried getting something similar to Heroku up and running, but we ended up going with a paid service that starts at $4,299/month.

Hey, I'm a co-founder of Flynn.

Flynn is entirely open source and BSD-licensed (https://github.com/flynn/flynn). You can run Flynn on any infrastructure without paying us anything.

The price you reference is for our Managed Flynn product, where we act as your ops team, operating a Flynn cluster for you and providing hands-on support for your apps and databases in production.


Do you have a comparison chart or something lining Flynn up against Azure/Heroku/Google App Engine somewhere?

I know it's probably not as feature-complete as those other PaaS's, but it would be great to get it at a glance.


The space is too complex to do a chart, as a bunch of details are not directly comparable.

Many people who choose Flynn are not directly comparing us to hosted platforms, as they want more control over their infrastructure (where they run it, lock-in avoidance, etc).

As far as individual features go, we are closest to Heroku (and maintain buildpack/runtime compatibility). However there are differences including our support for other databases (MySQL and MongoDB in addition to Postgres and Redis) and many other smaller things like HTTP/2 support.


That's for Managed Flynn. We're using the open source version, which is free:

https://github.com/flynn/flynn


And nothing for the support offered by the flynn team?

I am also curious to find out what the running cost is for maintaining Flynn. Is there one dedicated engineer monitoring the infra?


Yep, the Flynn team helped us out via IRC and were fantastic. Especially as we were still working on getting pre-1.0 running, it was great to work with an engineer directly through the bumps. Obviously they do offer paid support as well, which is great for recurring issues and operational assistance.

We do have a dedicated engineer managing / monitoring our infrastructure. It's not a set-it-and-forget-it kind of thing, so if you don't have somebody to manage your infrastructure, the hosted version might be a better choice.

The big cost savings win was for all of our /other/ engineers who now don't need to know anything about ops to get their apps running. Previously with Chef, everybody was responsible for writing recipes and setting up environments, but now we have a standard set of buildpacks that work with our apps, and when we need to transfer an app to a client (as we're a dev agency), we can set it up on Heroku and it should "just work".


Our experience at Flynn is that there are a few common scenarios:

- Small and side projects run well without supervision.

- Existing ops teams can manage Flynn without much additional effort.

- Medium sized teams that don't want an internal ops team pay us for Managed Flynn where we take over all ops-related responsibilities.


For a small to mid-sized company with an overworked dev manager, hiring out Flynn for the cost of a junior systems administrator sounds pretty reasonable.


