Clustering CoreOS with Vagrant

Original source (coreos.com)
Clipped on: 2014-09-23


April 24, 2014 · By Brandon Philips

When you're first exposed to a distributed platform like CoreOS, running a development environment complex enough to match production can seem like an impossible task. This article will show you how convenient it is to run a small CoreOS cluster on your local machine with Vagrant, mirroring the way your production machines are set up.

CoreOS's implementation of cloud-init, coreos-cloudinit, makes it super easy to bootstrap a cluster of CoreOS machines by providing a mechanism for initial machine configuration. The same file can be used to bootstrap CoreOS on Vagrant, Amazon EC2, Google Compute Engine (GCE), and more, making it easy to share the same configuration locally and in production.

This article will focus on Vagrant for a fast, simple deploy on your local machine.

Getting Set Up

Before we get started, make sure you have recent versions of VirtualBox, Vagrant, and git installed on your machine.

Now that your local machine is set up for Vagrant, let's grab the CoreOS Vagrant configuration and customize the cloud-config "user-data" file. On OpenStack, EC2 or GCE this would be the same user-data you can provide when starting a virtual machine.

$ git clone https://github.com/coreos/coreos-vagrant
$ cd coreos-vagrant
$ cp user-data.sample user-data

Configure User-Data

Now, generate a new etcd discovery URL. etcd uses this URL to bootstrap your cluster and wire the new machines together. You can read the details of how this works at https://discovery.etcd.io.

$ curl https://discovery.etcd.io/new

After running this command you will get back a new private discovery URL that looks something like https://discovery.etcd.io/<token>. Put this URL in the relevant section of the user-data file using your favorite text editor. When you are done, the top of the user-data file should look like this:

#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
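If you'd rather script this step than open an editor, the placeholder can be swapped out with sed. This is just a sketch: it works on a scratch copy of the file so nothing real is overwritten, and EXAMPLETOKEN is a made-up value standing in for the URL returned by the curl command above.

```shell
#!/bin/sh
set -e
# Recreate the top of the sample user-data in a scratch file for the demo.
cat > /tmp/user-data-demo <<'EOF'
#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
EOF

# EXAMPLETOKEN is fabricated for illustration; in real use, substitute
# the URL returned by `curl https://discovery.etcd.io/new`.
DISCOVERY_URL="https://discovery.etcd.io/EXAMPLETOKEN"
sed -i "s|https://discovery.etcd.io/<token>|${DISCOVERY_URL}|" /tmp/user-data-demo

# Show the result of the substitution.
grep 'discovery:' /tmp/user-data-demo
```

In real use you would point sed at the user-data file in your coreos-vagrant checkout instead of the scratch copy.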

Start the Cluster

By default, the CoreOS Vagrantfile will start a single machine. To start a cluster, we need to copy and modify the config.rb.sample file:

$ cp config.rb.sample config.rb

Uncomment the line containing num_instances and change it to 3 or more. Now we're ready to start the VMs:
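For reference, after uncommenting, the relevant part of config.rb should look roughly like this (the variable name matches the config.rb.sample shipped in the coreos-vagrant repo at the time of writing, but may differ in other versions):

```ruby
# config.rb -- read by the Vagrantfile at `vagrant up` time.
# Number of CoreOS VMs to boot; uncommented and raised from the default of 1.
$num_instances = 3
```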

$ vagrant up
Bringing machine 'core-01' up with 'virtualbox' provider...
Bringing machine 'core-02' up with 'virtualbox' provider...
Bringing machine 'core-03' up with 'virtualbox' provider...
==> core-01: Box 'coreos-alpha' could not be found. Attempting to find and install...
    core-01: Box Provider: virtualbox
    core-01: Box Version: >= 0
==> core-01: Adding box 'coreos-alpha' (v0) for provider: virtualbox
    core-01: Downloading: http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_vagrant.box
    core-01: Progress: 46% (Rate: 6105k/s, Estimated time remaining: 0:00:16)

Vagrant will now download the latest CoreOS image and configure the VMs. You can ignore the warning about guest additions, everything will work just fine.

After the vagrant command returns, it's time to SSH into one of your instances and try out a few commands. The only trick is that we want to add the Vagrant key to our ssh-agent using ssh-add, which lets us forward our SSH session to other machines in the cluster. You can do this with any key, such as one added via cloud-config, but our Vagrant machines will already have the corresponding public key on disk.

$ ssh-add ~/.vagrant.d/insecure_private_key
Identity added: /Users/CoreOS/.vagrant.d/insecure_private_key (/Users/CoreOS/.vagrant.d/insecure_private_key)
$ vagrant ssh core-01 -- -A

Testing Out the Cluster

Now let's make sure that all of our machines came up and are set up with fleet.

$ fleetctl list-machines
MACHINE      IP             METADATA
517d1c7d...  172.17.8.101   -
cb35b356...  172.17.8.103   -
17040743...  172.17.8.102   -

Now try setting a key in etcd and watch it be reflected on another machine in the cluster. We can run commands on other machines with fleetctl ssh, addressing them by the ID shown in list-machines. Be sure to replace the ID below with an actual value from your own list-machines output:

$ etcdctl set first-etcd-key "Hello World"
Hello World
$ fleetctl ssh cb35b356 etcdctl get first-etcd-key
Hello World

There is a ton of functionality inside fleet that won't be covered in this post, but as a final task for our new cluster, let's schedule a job. Create a file called hello-fleet.service:

$ cat > hello-fleet.service
[Service]
ExecStart=/usr/bin/bash -c "while true; do echo 'Hello Fleet'; sleep 1; done"
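A bare [Service] section is all fleet needs to schedule the unit, but since fleet units are regular systemd units, you can add the usual sections too. A slightly fuller sketch (the Description here is our own addition, not part of the original file):

```ini
[Unit]
Description=Prints Hello Fleet every second

[Service]
ExecStart=/usr/bin/bash -c "while true; do echo 'Hello Fleet'; sleep 1; done"
```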

Now tell fleet to start this service:

$ fleetctl start hello-fleet.service
Job hello-fleet.service scheduled to 517d1c7d.../172.17.8.101
$ fleetctl list-units
UNIT                 LOAD    ACTIVE  SUB      DESC  MACHINE
hello-fleet.service  loaded  active  running  -     517d1c7d.../172.17.8.101

Start Using Your Cluster

You now have a powerful little cluster on your laptop, complete with job scheduling, a distributed data store and a self-updating operating system. From here it is a choose-your-adventure of exploring fleet and etcd using the guides that we have put together.

And stay tuned for more blog posts that will outline how to use CoreOS and user-data to bootstrap clusters on other platforms.