Building Packages with Docker and Hosting Them in a Private APT Repository on AWS S3

Craig Buchanan / January 16, 2017

Recently, I’ve needed to automate the installation of a specific package on an Ubuntu machine. Normally, this would be as easy as apt-get install <pkgname>, but the package officially hosted in the Ubuntu APT repository was not built with the required configure options. The most maintainable solution, then, was to download the source, compile it with the right configure options on the target operating system distribution, and upload the resulting package to a privately hosted APT repository from which it could later be installed with apt-get install.

Compiling the source code on the target operating system distribution turned out to be an annoying problem, since the target operating system was completely different from my development environment. One potential solution would be to spin up a virtual machine running the target distribution, run the build script on that VM, and then copy the newly built package back to my development machine, where I could upload it to the private APT repository. (I did not want to upload the package directly from the VM, because that would mean copying the APT repository’s secret credentials onto another machine.) Although this solution would work, spinning up a new VM, running a script on it, and copying files between machines every time the build script, build process, or source code changed would be annoying and time-consuming. Annoying and time-consuming tasks are the enemy of a fully automated system, so I wanted a way to run the entire process from my development machine with a single command. The answer was containerization. Specifically, Docker.

The more convenient solution is basically the same as the annoying solution except the VM is replaced with a Docker container. A single script on the development machine creates a new Docker container of the target operating system distribution, copies build scripts into the container, runs the build scripts inside the container, and copies the resulting packages back to the development machine where they can be uploaded to the private APT repository.

As an example, I will walk through building Squid 3.5 with SSL enabled for Ubuntu 16.04 and uploading the resulting packages to a private APT repository hosted on AWS S3.
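
(If you have the stock Xenial squid package installed, you can confirm for yourself that it lacks SSL support by inspecting the configure options it reports. A quick, optional check:)

# squid -v prints the ./configure options the package was built with; the
# stock xenial build does not list --with-openssl
squid -v | tr ' ' '\n' | grep -- '--with-openssl' \
  || echo "stock squid was built without --with-openssl"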

build_and_push_squid.sh

#!/bin/bash
set -euo pipefail  # fail fast on errors, unset variables, and pipe failures

# This script is used to build Squid inside a container and then push the
# resulting deb files to a private apt repo.

UBUNTU_PACKAGE_S3_BUCKET=examplebucket-ubuntu-packages
SQUID_ARCH=amd64
SQUID_PACKAGES=(
  squid-common_3.5.12-1ubuntu7.2_all.deb
  squid_3.5.12-1ubuntu7.2_amd64.deb
  squidclient_3.5.12-1ubuntu7.2_amd64.deb
)
CONTAINER_ARTIFACT_PATH=/build-scripts/build/squid3
LOCAL_ARTIFACT_PATH=build
IMAGE_NAME=secure_nat_builder
IMAGE_VERSION=0.0.1
APT_CODENAME=xenial
APT_COMPONENT=myrepo
APT_KEY=23955501

# Build Squid while building the container image (the Dockerfile's RUN
# steps do the actual compilation)
docker build -t "$IMAGE_NAME:$IMAGE_VERSION" .

# Copy Squid package artifacts (deb) to local host from container
container_id=$(docker create "$IMAGE_NAME:$IMAGE_VERSION")
mkdir -p "$LOCAL_ARTIFACT_PATH"
for pkg in "${SQUID_PACKAGES[@]}"; do
  docker cp "$container_id:$CONTAINER_ARTIFACT_PATH/$pkg" "$LOCAL_ARTIFACT_PATH"
done
docker rm -v "$container_id"

# Push Squid packages to ubuntu repo on S3
#
# Requires deb-s3:
#   $ gem install deb-s3
#
# Set the AWS_PROFILE env var when invoking this script to choose which AWS
# profile is used to access the S3 bucket.
for pkg in "${SQUID_PACKAGES[@]}"; do
  deb-s3 upload \
    --bucket "$UBUNTU_PACKAGE_S3_BUCKET" \
    --arch "$SQUID_ARCH" \
    --codename "$APT_CODENAME" \
    --component "$APT_COMPONENT" \
    --sign "$APT_KEY" \
    "$LOCAL_ARTIFACT_PATH/$pkg"
done
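
With deb-s3 installed and AWS credentials configured on the development machine, the whole build-and-publish pipeline is then a single command (profile name hypothetical):

AWS_PROFILE=packaging ./build_and_push_squid.sh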

Dockerfile

FROM ubuntu:xenial

COPY build-scripts /build-scripts

WORKDIR /build-scripts
RUN ./01_install_buildtools.sh
RUN ./02_build_squid.sh
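
The two build scripts referenced by the Dockerfile do the heavy lifting. The real scripts are in the repo linked at the end of this post; the following is only a rough sketch of their shape, with the exact debian/rules edit elided:

01_install_buildtools.sh (sketch)

#!/bin/bash
set -e
# The stock ubuntu:xenial image ships with the deb-src lines commented out;
# enable them so apt-get source and apt-get build-dep work
sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
apt-get update
# Packaging tools, plus the OpenSSL headers Squid needs for SSL support
apt-get install -y devscripts build-essential fakeroot libssl-dev
apt-get build-dep -y squid3

02_build_squid.sh (sketch)

#!/bin/bash
set -e
mkdir -p build/squid3
cd build/squid3
# Fetch the Ubuntu source package for Squid 3.5
apt-get source squid3
cd squid3-*
# ...edit debian/rules here (sed or a quilt patch) so the configure flags
# include --enable-ssl --with-openssl...
# Rebuild the binary packages unsigned; debuild drops the debs one
# directory up, i.e. in /build-scripts/build/squid3/
debuild -b -uc -us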

The first action in build_and_push_squid.sh is to build a Docker image. While the image builds, the Dockerfile's RUN instructions execute the build scripts and place the resulting packages into the $CONTAINER_ARTIFACT_PATH directory. The script then creates a container from that image (without ever starting it) and copies each package from $CONTAINER_ARTIFACT_PATH in the container to the $LOCAL_ARTIFACT_PATH directory on the development machine. Once all of the packages are on the development machine, the script pushes them to the APT repository hosted on AWS S3 using deb-s3 and the AWS credentials configured on the development machine.
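
Once the packages are in the repository, any Ubuntu 16.04 machine can install them with apt-get. Here is a minimal sketch of the client-side setup, assuming the bucket is publicly readable at its default S3 endpoint and that the deb-s3 signing key has been published to a keyserver:

# Trust the key the repository was signed with (key ID from the script above)
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 23955501

# Point APT at the S3-hosted repo; deb-s3 lays out a standard APT tree at
# the root of the bucket (on Xenial, https sources also require the
# apt-transport-https package)
echo "deb https://examplebucket-ubuntu-packages.s3.amazonaws.com xenial myrepo" \
  > /etc/apt/sources.list.d/examplebucket.list

apt-get update
apt-get install squid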

Get the rest of the code here on GitHub.