The Docker API can be used to control the Docker daemon via REST calls. This comes in handy when controlling Docker on a remote machine, or when writing scripts to automate the creation, startup, shutdown and destruction of containers. Using it differs in some ways from the terminal interface or docker-compose though, so this is how I got started.

Overview

Quickstart - how to find out what to do if you know your way around the terminal

This is what helped me most in figuring out the differences between using Docker via the terminal and accessing the API directly. Stop the Docker daemon (but make sure nothing important is running first):

sudo systemctl stop docker
Then start it again in debug mode:
sudo dockerd --debug
Now open a new terminal and use Docker in it. The debug mode shows all the endpoints being called and the JSON payloads being sent: they might be more verbose than what you will send to the API, but they show you everything you need. As an example, here is the output I got from running docker run alpine:latest.
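Once you have spotted an endpoint in the debug log, you can replay it with the Requests library. As a small sketch (assuming the API is already exposed on localhost:2376 as described further below), this is the REST equivalent of docker ps -a:

```python
import requests

url = "http://localhost:2376/v1.39/"

# /containers/json is the endpoint behind `docker ps`;
# the query parameter all=true also lists stopped containers (like `docker ps -a`).
req = requests.Request("GET", url + "containers/json",
                       params={"all": "true"}).prepare()
print(req.url)  # http://localhost:2376/v1.39/containers/json?all=true

# Actually sending it requires the daemon to be reachable on that port:
# resp = requests.Session().send(req)
# print(resp.json())
```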

Quickstart 2 - My script

This is just the script without any explanation aside from the comments. A more detailed approach can be found below.

After you have exposed the Docker API (see further below for a how-to), these are the basic REST calls I used, put into a Python 3 script using the Requests library. It pulls an image from a password-protected registry, starts a container with a mounted volume and a port exposed to the host machine, then deletes the container and its volumes:

import requests
import base64
import json
import sys
import time

# Docker API base url
url = "http://localhost:2376/v1.39/"
if len(sys.argv) != 5:
    print("Usage: ./create-container.py image-name image-tag container-name volume-file")
    exit(0)

# Authentication data for our registry, needs to be encoded in base64
auth_data = base64.b64encode(
    json.dumps({"username": "$user", "password": "$password",
                "serveraddress": "https://docker.anarcon.org"}).encode())
# Pull image from registry
resp_pull_img = requests.post(url + 'images/create?fromImage=%s&tag=%s' % (sys.argv[1], sys.argv[2]),
                                headers={'X-Registry-Auth': auth_data})
if not resp_pull_img.ok:
    print("Error getting %s:%s: %s" % (sys.argv[1], sys.argv[2], resp_pull_img.text))
    exit(1)
print("Pulled %s:%s" % (sys.argv[1], sys.argv[2]))

# Create a container
create_container_payload = {
    "Image": sys.argv[1] + ':' + sys.argv[2],
    # Port needs to be exposed to be bound
    "ExposedPorts": {
        "8080": {}
    },
    "HostConfig": {
        "Binds": [
            "jira-home:/var/atlassian/jira",
            "jira-logs:/opt/atlassian/jira/logs"
        ],
        # Now the port can be bound
        "PortBindings": {
            "8080": [
                {
                    "HostIp": "",
                    "HostPort": "2990"
                }
            ]
        }
    }
}

resp_create_cont = requests.post(url + 'containers/create?name=' + sys.argv[3],
                                    json=create_container_payload)
if not resp_create_cont.ok:
    print("Error creating container: " + resp_create_cont.text)
    exit(1)
print(json.dumps(resp_create_cont.json(), indent=4))
cont_id = resp_create_cont.json()['Id']

# Extract tar.gz archive to jira home directory to recreate previous state
with open(sys.argv[4], 'rb') as f:
    headers = {"Content-Type": "application/gzip"}
    resp_extract_archive = requests.put(url + 'containers/%s/archive?path=/var/atlassian/jira' 
                                        % (cont_id), data=f, headers=headers)
    if not resp_extract_archive.ok:
        print("Error extracting archive!")
        exit(1)
    print("Extracted content to volume")

# Start the container!
resp_start_cont = requests.post(url + "containers/" + cont_id + "/start")
if not resp_start_cont.ok:
    print("Error starting container: " + resp_start_cont.text)
    exit(1)
print("Successfully started container.")

# Force delete the created container
requests.delete(url + 'containers/' + cont_id + '?force=true')

# Now delete both volumes
requests.delete(url + 'volumes/jira-home')
requests.delete(url + 'volumes/jira-logs')

Not so quick start - the what, how and why

So what is the point of this script?
In this form there isn’t really one: it starts up a Docker container running Jira, then immediately destroys it and deletes everything associated with it.
In its real use case this script is split in two: one part sets up the container and starts it, the other tears it down. In between, a plugin is uploaded into Jira and integration tests are run. All of this is controlled by the Exec Maven Plugin.

The fact that this is part of the Maven lifecycle comes with the caveat that starting and stopping the container should ideally work both on developer machines and on the build agent, which is a Docker container itself. To avoid a Docker-in-Docker setup, I use the Docker API to access the host's Docker installation from inside the container. This solution is even more flexible than binding the Docker socket into the container, as it can be repurposed for starting containers on remote machines.

And why use a registry? Because I built my own Jira image containing Core, Service Desk and Software. It is based on the images by cptactionhank.

So what’s needed to enable the Docker API, and how do you use it?

Preparation

Enabling the docker API

After these changes everybody with access to port 2376 can control Docker on your machine. Ubuntu blocks most incoming connections by default, so the API is only reachable from the machine itself. This can be changed, but you should know what you’re doing! If you want to access the Docker API from your network or via the internet, take a minute to set up a secure connection first, then open the port accordingly.

Note: This part assumes an Ubuntu environment. Other operating systems should behave in a similar fashion.

By default Docker only exposes its API via a Unix socket, so I need to open a TCP port and then restart the daemon to apply the changes. To open the port, edit the docker.service file with root permissions:

sudo vim /lib/systemd/system/docker.service 
Append the port you want to open to the ExecStart argument. Example using port 2376:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376
Now restart the docker daemon to apply the change:
sudo systemctl daemon-reload
sudo systemctl restart docker.service

Execution

The following code works with the Docker API in version 1.39. The variable url has the value "http://localhost:2376/v1.39/"; pinning the version in the URL keeps the script working with future Docker releases, even if they use a newer API version by default.
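Building every endpoint from one version-pinned base URL keeps all calls consistent. A minimal sketch (the helper name and defaults are my own, matching the setup above):

```python
def api_url(endpoint, host="localhost", port=2376, version="v1.39"):
    # Build a version-pinned Docker API URL for the given endpoint.
    return "http://%s:%d/%s/%s" % (host, port, version, endpoint.lstrip("/"))

print(api_url("containers/json"))  # http://localhost:2376/v1.39/containers/json
```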

Pulling the image

When creating a container via the API, the image it is based on has to be available locally; unlike the terminal client, the API does not pull it automatically. To make sure container creation doesn’t fail, it’s always smart to pull the image first. If a local image with the same tags already exists, Docker won’t download it again.
To pull an image, use the /images/create endpoint with the fromImage and tag parameters. If the image isn’t on Docker Hub, you can also send authentication data in the X-Registry-Auth header. It needs to be a JSON object containing username, password and serveraddress, encoded in base64.
In my script this is the section pulling the image, using the first two input arguments as image name and tag:

auth_data = base64.b64encode(
    json.dumps({"username": "$user", "password": "$password",
                "serveraddress": "https://docker.anarcon.org"}).encode())
resp_pull_img = requests.post(url + 'images/create?fromImage=%s&tag=%s' % (sys.argv[1], sys.argv[2]),
                                headers={'X-Registry-Auth': auth_data})
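The header value is easy to get wrong by hand; building it with json.dumps avoids quoting mistakes, and decoding it back verifies the round trip. A sketch with placeholder credentials:

```python
import base64
import json

# Placeholder credentials; substitute your own registry account.
auth_config = {
    "username": "$user",
    "password": "$password",
    "serveraddress": "https://docker.anarcon.org",
}
# The API expects the JSON object itself, encoded as base64.
auth_data = base64.b64encode(json.dumps(auth_config).encode())

# Round trip: decoding the header value yields the original JSON again.
assert json.loads(base64.b64decode(auth_data)) == auth_config
```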

Creating a container

The interesting part about creating a container is the payload, because certain properties have to be defined twice, or in a different place than a first glance at the documentation suggests. Ports must first be exposed in the general section, then bound to a specific port on the host machine in the HostConfig section. (Named) volumes are created as Binds in the HostConfig. The payload for creating a container might look something like this:

{
    "Image": sys.argv[1] + ':' + sys.argv[2],
    "ExposedPorts": {
        "8080": {}
    },
    "HostConfig": {
        "Binds": [
            "jira-home:/var/atlassian/jira",
            "jira-logs:/opt/atlassian/jira/logs"
        ],
        "PortBindings": {
            "8080": [
                {
                    "HostIp": "",
                    "HostPort": "2990"
                }
            ]
        }
    }
}

Then send the whole thing to the containers/create endpoint with a name query parameter:

resp_create_cont = requests.post(url + 'containers/create?name=$containerName',
                                 json=create_container_payload)
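Since each port has to appear in two places, a small builder function makes the double declaration harder to forget. A sketch (the helper name is my own, not part of the API):

```python
def container_payload(image, ports=None, binds=None):
    # Build a payload for the containers/create endpoint.
    # ports: mapping of container port -> host port, e.g. {"8080": "2990"}
    # binds: list of "volume:path" strings
    payload = {"Image": image, "HostConfig": {}}
    if ports:
        # Each port has to appear in ExposedPorts *and* in PortBindings.
        payload["ExposedPorts"] = {p: {} for p in ports}
        payload["HostConfig"]["PortBindings"] = {
            p: [{"HostIp": "", "HostPort": h}] for p, h in ports.items()
        }
    if binds:
        payload["HostConfig"]["Binds"] = binds
    return payload

payload = container_payload("jira:latest", ports={"8080": "2990"},
                            binds=["jira-home:/var/atlassian/jira"])
```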

Getting data into the container

I have a tar.gz archive of the Jira home directory which defines the exact state my new container should be in. It contains licenses, installed plugins, and already configured projects and custom fields, among other things.
I will extract this archive into the container before starting it. Usually I would just mount a volume from the local filesystem into the container, but since I am running inside a container myself, that would lead to a whole lot of problems.
Extracting data to a container is actually pretty straightforward, as long as you remember to put the path in the path query parameter and set the correct Content-Type header:

with open("path/to/backupfile.tar.gz", 'rb') as f:
    headers = {"Content-Type": "application/gzip"}
    resp_extract_archive = requests.put(url + 'containers/' + cont_id + '/archive?path=/var/atlassian/jira',
                                        data=f, headers=headers)
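If you don’t have such an archive yet, Python’s tarfile module can create one. A minimal sketch (the function name and paths are my own placeholders):

```python
import tarfile

def pack_directory(src_dir, archive_path):
    # Pack the *contents* of src_dir into a gzip-compressed tar archive.
    # arcname="." puts them at the archive root, so they land directly
    # under the target path when the API extracts the archive.
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=".")
```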

Starting the container

Now I start the previously created container with a simple POST request:

resp_start_cont = requests.post(url + "containers/" + cont_id + "/start")

That’s it, the created container is running!

Deleting the container and volumes

Deleting the container uses a DELETE request; by adding the query parameter force=true I also enable deleting a still running container. Then I delete the volumes I created earlier:

requests.delete(url + 'containers/' + cont_id + '?force=true')

requests.delete(url + 'volumes/jira-home')
requests.delete(url + 'volumes/jira-logs')

Special: Checking container status

For my use case, Jira needs to be running before I can start any integration tests. So I check whether the healthcheck endpoint is available and returns a 200 response.

timeout = 180
t = 0
jira_started = False
while t < timeout and not jira_started:
    time.sleep(1)
    t += 1
    try:
        started_answer = requests.get("http://localhost:2990/rest/troubleshooting/1.0/check/",
                                      headers={'Authorization': "Basic $base64EncodedAuth"})
        jira_started = started_answer.ok
        if not jira_started and t % 10 == 0:
            print("Waited for %s seconds, currently getting status %s." % 
                        (t, started_answer.status_code))
    except requests.exceptions.ConnectionError:
        jira_started = False
        if t % 10 == 0:
            print("Waited for %s seconds, jira did not start yet..." % t)
if not jira_started:
    print("Jira not started in time, something probably went wrong!")
    exit(1)
else:
    print("Jira started successfully.")
    exit(0)
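The polling loop above can be generalized into a helper that takes the probe as a function, which also makes it easy to reuse for other services. A sketch (the helper name is my own):

```python
import time

def wait_until(probe, timeout=180, interval=1):
    # Call probe() every `interval` seconds until it returns a truthy
    # value or `timeout` seconds have passed; True on success, False on timeout.
    waited = 0
    while waited < timeout:
        if probe():
            return True
        time.sleep(interval)
        waited += interval
    return False
```

In the script above, the probe would be a small function wrapping the requests.get call against the healthcheck endpoint and returning started_answer.ok (with the ConnectionError handled inside it).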

Of course this is only useful if you’re specifically starting a Jira instance that has already been configured (license, admin account, etc.) through the data I just pushed into the container.