| |-- s6
| |   |-- cron
| |   |   |-- finish
| |   |   |-- run
| |   |-- syslog
| |   |   |-- finish
| |   |   |-- run
| |   |-- .s6-svscan
| |   |   |-- finish
| |-- bin
| |   |-- (s6 binaries)
Like I said, when I run docker build, all these files are copied into the image as a single layer.
Using S6 to start services
The init-like program is s6-svscan – when launched, it will scan a directory for “service directories”, and launch s6-supervise on each of those. In my example above, I’m using /etc/s6 as my “root” s6 directory, so cron and syslog are “service directories.” That .s6-svscan directory is not a service directory – that’s a directory used by s6-svscan itself.
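To make that layout concrete, here’s a small sketch that builds a cron service directory from scratch. I’m using a scratch directory instead of /etc/s6 so it’s safe to run anywhere, and the no-op finish is just my choice for the example:

```shell
#!/bin/sh
# Build a service directory the way s6-svscan expects:
# <root>/<service>/run and (optionally) <root>/<service>/finish.
S6ROOT="${S6ROOT:-/tmp/s6-demo}"

mkdir -p "$S6ROOT/cron"

# run: start the daemon in the foreground so s6-supervise can hold on to it
printf '%s\n' '#!/bin/sh' 'exec cron -f' > "$S6ROOT/cron/run"

# finish: nothing to clean up here, so just a no-op
printf '%s\n' '#!/bin/sh' 'exit 0' > "$S6ROOT/cron/finish"

chmod 0755 "$S6ROOT/cron/run" "$S6ROOT/cron/finish"
```

Point s6-svscan at that root directory and it will launch one s6-supervise per service directory it finds.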
Each service directory has two files – run and finish. The s6-supervise program will call your run program, and when the run program exits, it will call your finish program, then start over (by calling your run program again). The run program can be anything – a shell script, or if a program requires no arguments/setup, I can just symbolically link to it – and the same goes for the finish program. If I don’t have any particular clean-up to do when my run program exits, I’ll just make finish a symlink to a no-op program.
When it comes to actually running a program, S6 is similar to Supervisor, Upstart, or Systemd – S6 will “hold on” to a program, instead of, say, writing out PID files like SysV Init does. So I have to make sure each of my run scripts launches programs in a foreground/non-daemonizing mode.
This is usually pretty easy to do – here’s my run script for cron:
#!/bin/sh
exec cron -f
And here’s my run script for syslog:
#!/bin/sh
exec rsyslogd -f /etc/rsyslog.conf -n
And here’s the CMD directive from my Dockerfile:
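It looks something like this – treat the COPY paths as a sketch, since the exact lines depend on where your build context keeps the files:

```dockerfile
# Copy the s6 binaries and the service directories into the image
COPY bin/ /bin/
COPY s6/ /etc/s6/

# PID 1: scan /etc/s6 and supervise everything in it
CMD [ "/bin/s6-svscan", "/etc/s6" ]
```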
That’s all! I’m now cruising along with S6, and running multiple processes inside a container.
Earlier, I mentioned that .s6-svscan directory, and it’s actually pretty important. When Docker stops a container, it sends a TERM signal to “process id 1”, which in my case is s6-svscan. When s6-svscan gets a TERM signal, it will send a TERM to all the running services, then try to execute .s6-svscan/finish.
The important thing to note: it will not try to run the finish script in each of your service directories.
Since the container is about to be stopped (and probably destroyed), this isn’t a problem. I still like to run my ‘finish’ scripts, though, just in case I write one where I do something of importance. Here’s my .s6-svscan/finish script:
#!/bin/sh
for file in /etc/s6/*/finish; do
  "$file"
done
UPDATE 2015-03-01: Laurent reached out to me and pointed out I was incorrect – when s6-svscan gets a TERM signal, it will:
- Send a TERM signal to each instance of s6-supervise (each of your monitored processes has a corresponding s6-supervise process)
- Each s6-supervise will send a TERM signal to the monitored process, then execute your service’s finish script
- After that, s6-svscan will run your .s6-svscan/finish script
When s6-supervise receives that TERM signal, it runs finish with stdin/stdout pointed to /dev/null – meaning you won’t see any text output from those finish scripts. But they are in fact running, meaning that script above, where I manually call each finish script, is not necessary.
Laurent is going to try and come up with a solution for that, since that behavior is confusing.
Playing nice in the Docker ecosystem
In my previous article, I mentioned that I like to pick some process and call that the “key” process – if that dies, then my container should exit. I do this because most Docker containers do exactly that – they run a single process, and if that process calls it quits, the container calls it quits, too.
For example, let’s say I’m running a NodeJS program (for kicks, I’ll go with Ghost), cron, and syslog in a container. I don’t particularly care if cron or syslog die – I’ll just have S6 restart the process. But if Ghost dies, I want the container to exit, and let my host machine handle alerting me and restarting it. So my finish script for Ghost would be:
s6-svscanctl -t /etc/s6
This will instruct s6-svscan to bring everything down and exit.
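As a complete file (shebang is my addition), that finish script is just:

```shell
#!/bin/sh
# Tell s6-svscan to stop all services and exit –
# that ends the container's PID 1, and the container with it.
exec s6-svscanctl -t /etc/s6
```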
Ideas for future projects
There are a few things I want to implement in the future. I think S6 is capable of this – I just haven’t figured out how!
S6 has an interesting way to handle logs – if I create a directory named log inside a service directory and place a run script in it, the output of my program is piped into that run script. There’s an s6-log program that’s meant to be used as that piped-into program, which handles log rotation, can pipe logs into other processes, and so on.
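For example, a log/run script for the cron service could look like this – the log directory and rotation settings here are my own choices for illustration, not something from my images:

```shell
#!/bin/sh
# s6-log reads the service's output on stdin, prepends a TAI64N
# timestamp to each line (t), and rotates the current log file at
# ~1 MB (s1000000), keeping up to 20 archived files (n20).
exec s6-log t n20 s1000000 /var/log/cron
```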
I see a lot of images that just dump all output to stdout and let Docker handle it. I think there’s potential to come up with something better with these tools – I’m not sure what “better” is yet, but it’s something I’m going to be thinking about.
I think S6 is a really interesting, efficient alternative to Supervisor, and I especially like that I can include it on any image, even the busybox image. I really hope you enjoyed reading this – do you have any neat ideas? Have you been working on something similar? Use the comments to let me know. Thanks so much!
If you’re interested in building on top of what I’ve created, I have a collection of images here.
Everything except the “base” image I still consider pretty volatile right now. I keep all images in their own branches (and within a folder within that branch), so the latest version of the “base” image would be at /base in the “base-14.04” branch. You can find the base image here.
The base image is actually a bit more complicated than what I’ve written about, but it still follows the same basic structure/layout. However, I just run a few more services, and they have more complicated startup scripts.
You can find my Arch Linux image with s6 installed here.
Here are links to my Ubuntu and Arch images on the Docker registry.
37 comments on “Docker and S6 – My New Favorite Process Supervisor”
I think s6 is what I was looking for! Thank you so much!
You’ve mentioned that:
“When s6-svscan gets a TERM signal, it will send a TERM to all the running services, then try to execute .s6-svscan/finish.”
In my case the processes don’t even have time to handle the TERM themselves, because s6-svscan returns immediately after sending TERM to all running processes, and the container just stops.
My workaround is to add “sleep ” to the end of .s6-svscan/finish, to give running processes time to gracefully shut themselves down. But I don’t think sleeping for a fixed time is a good approach… And I’ve tried ‘wait’; it does not work, the container still stops instantly.
Do you have a suggestion on that? Thanks.
How does s6 handle environment variables passed to the container?
[services.d] starting services
sshd re-exec requires execution with an absolute path
sshd exited 255
[cont-finish.d] executing container finish scripts…
The main problem is that I do not know how to write the run script to launch sshd the way s6 requires it (without forking, and with exec).
This does not work:
chmod 0755 /var/run/sshd
exec /usr/sbin/sshd -D
– s6-supervise will send a TERM signal to the monitored process, then execute your service’s finish script
– After that, s6-svscan will run your .s6-svscan/finish script.
Since I am not able to see the output of the process when it receives the TERM signal, I decided to check whether the service’s finish script was getting executed by adding a `sleep 10` in it. So the test was: if the container takes some time to exit when it receives the TERM signal, then I could be certain that the finish script was being executed. But unfortunately I did not find this to be the case. It terminated immediately, like some have mentioned in the comments.
Regarding the TERM being propagated to the processes: I decided to test it out using mariadb, since on startup mariadb clearly states whether it was shut down properly the last time. And sure enough the logs said `InnoDB: Database was not shut down normally!`
For a container, I need to launch a script before all daemons/services.
In a LEMP container, for example, I need to create a table, create a file, and create a directory before launching nginx, php, and mysql.
How do you do this with s6?
Sorry for my English.