Launching Alpine Linux on Firecracker like a boss

One step back, two steps forward; create a base image to investigate Firecracker further
thumbnail

The quest to launch an ETCD cluster on Firecracker starts here.

In this post, I describe how I built my initial Alpine 3.13 VMM with OpenSSH and a dedicated sudoer user. In AWS, when one launches an Ubuntu instance, one accesses it via ssh ubuntu@<address>; for a CentOS VM it’s ssh centos@<address>. At the end of this write up, I’ll have ssh alpine@<address>. This VMM will have access to the outside world so I can install additional software and even ping the BBC! For the networking, I’ll use the Docker docker0 bridge; inspired again by Julia Evans, whose Day 41: Trying to understand what a bridge is[1] was very helpful. I will look at my own networking setup in future write ups.

The result is a refinement of the process from my previous Firecracker articles.

§Dockerfile

The root file system is built from an Alpine 3.13 Docker image.

FROM alpine:3.13
RUN apk update \
	&& apk add openrc openssh sudo util-linux \
	&& ssh-keygen -A \
	&& mkdir -p /home/alpine/.ssh \
	&& addgroup -S alpine && adduser -S alpine -G alpine -h /home/alpine -s /bin/sh \
	&& echo "alpine:$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n1)" | chpasswd \
	&& echo '%alpine ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/alpine \
	&& ln -s agetty /etc/init.d/agetty.ttyS0 \
	&& echo ttyS0 > /etc/securetty \
	&& rc-update add agetty.ttyS0 default \
	&& rc-update add devfs boot \
	&& rc-update add procfs boot \
	&& rc-update add sysfs boot \
	&& rc-update add local default
COPY ./key.pub /home/alpine/.ssh/authorized_keys
RUN chown -R alpine:alpine /home/alpine \
	&& chmod 0740 /home/alpine \
	&& chmod 0700 /home/alpine/.ssh \
	&& chmod 0400 /home/alpine/.ssh/authorized_keys \
	&& mkdir -p /run/openrc \
	&& touch /run/openrc/softlevel \
	&& rc-update add sshd

Plenty here, but rather straightforward; let’s break it down:

  1. update the source packages and install required packages:
  • openrc because an init system is required
  • openssh and other packages so there is a minimalistic system that can be accessed and used after launch
apk update \
	&& apk add openrc openssh sudo util-linux \
  2. generate host keys:
    && ssh-keygen -A \
  3. create the home directory structure for the alpine user:
	&& mkdir -p /home/alpine/.ssh \
  4. create the alpine group and the user, assign the home directory, login shell and a random password; without a password the user account stays disabled and it’s not possible to SSH as that user:
	&& addgroup -S alpine && adduser -S alpine -G alpine -h /home/alpine -s /bin/sh \
	&& echo "alpine:$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n1)" | chpasswd \
  5. make the user a password-less sudoer:
	&& echo '%alpine ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/alpine \
  6. set up a serial console getty on ttyS0, mount special file systems on boot and enable local services:
	&& ln -s agetty /etc/init.d/agetty.ttyS0 \
	&& echo ttyS0 > /etc/securetty \
	&& rc-update add agetty.ttyS0 default \
	&& rc-update add devfs boot \
	&& rc-update add procfs boot \
	&& rc-update add sysfs boot \
	&& rc-update add local default
  7. copy the generated public key into the user’s authorized keys; there’s a single key, so it goes directly into authorized_keys:
COPY ./key.pub /home/alpine/.ssh/authorized_keys
  8. finally, apply settings required to access the system via SSH:
  • OpenSSH is picky about home and .ssh directory permissions so I make sure these are correct: 0740 for home, 0700 for $HOME/.ssh and 0400 for the keys file
  • enable OpenSSH and make sure it starts when the system starts
RUN chown -R alpine:alpine /home/alpine \
	&& chmod 0740 /home/alpine \
	&& chmod 0700 /home/alpine/.ssh \
	&& chmod 0400 /home/alpine/.ssh/authorized_keys \
	&& mkdir -p /run/openrc \
	&& touch /run/openrc/softlevel \
	&& rc-update add sshd

I have a /firecracker directory structure which I described in Taking Firecracker for a spin[2]. The Dockerfile is saved in /firecracker/docker/alpine-3.13/Dockerfile.

§File system

Now I put together the program to start the container and extract the file system. The program is saved as /firecracker/docker/create-alpine-3.13.sh and goes like this:

#!/bin/bash
set -eu
build_dir="/tmp/alpine-build"
dockerfile="/firecracker/docker/alpine-3.13/Dockerfile"
filesystem_target="/firecracker/filesystems/alpine-base-root.ext4"
key_file="alpine"
image_tag="local/alpine-base:latest"
pre_build_dir=$(pwd)

echo "Generating a keypair..."
set +e
ssh-keygen -t rsa -b 4096 -C "alpine@firecracker" -f "${HOME}/.ssh/${key_file}"
set -e

First, I’m setting up the build context and generating a key pair. ssh-keygen is smart enough to check whether the key pair already exists, and answering no prevents it from being overwritten on every run.
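An alternative, fully non-interactive guard is a sketch worth having: test for the key file first and skip generation entirely. The temp directory and the empty passphrase (-N "") are my assumptions for the sketch; point key_dir at ${HOME}/.ssh and drop -N "" to keep the interactive behaviour described above.

```shell
#!/bin/bash
set -eu
# Sketch: generate the keypair only when it does not already exist,
# so repeated runs never hit ssh-keygen's overwrite prompt.
# key_dir is a temp directory here; use "${HOME}/.ssh" for real runs.
key_dir=$(mktemp -d)
key_file="alpine"
if [ ! -f "${key_dir}/${key_file}" ]; then
  ssh-keygen -q -t rsa -b 4096 -C "alpine@firecracker" -N "" -f "${key_dir}/${key_file}"
fi
ls "${key_dir}"
```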

In the Dockerfile, I was using a build-local key.pub for the image (the COPY step). Here’s how I make sure it exists:

echo "Creating build directory..."
mkdir -p "${build_dir}" && cd "${build_dir}"

echo "Copying public key to the build directory..."
cp "${HOME}/.ssh/${key_file}.pub" "${build_dir}/key.pub"

Next, bring the Dockerfile to the build directory and build the Docker image, tagging it with a known name. Because the script runs with set -e, the build is wrapped in set +e / set -e so a failure can be reported with its actual exit status before exiting:

echo "Building Docker image..."
cp "${dockerfile}" "${build_dir}/Dockerfile"
set +e
docker build -t "${image_tag}" .
retVal=$?
set -e
cd "${pre_build_dir}"
rm -r "${build_dir}"

if [ $retVal -ne 0 ]; then
        echo " ==> build failed with status ${retVal}"
        exit ${retVal}
fi

The next step is to prepare the root file system:

echo "Creating file system..."
mkdir -p "${build_dir}/fsmnt"
dd if=/dev/zero of="${build_dir}/rootfs.ext4" bs=1M count=500
mkfs.ext4 "${build_dir}/rootfs.ext4"
echo "Mounting file system..."
sudo mount "${build_dir}/rootfs.ext4" "${build_dir}/fsmnt"

and start the container:

echo "Starting container from new image ${image_tag}..."
CONTAINER_ID=$(docker run --rm -v ${build_dir}/fsmnt:/export-rootfs -td ${image_tag} /bin/sh)

followed by copying everything out of the container to the file system file. I do it the same way as with the Vault VMM root file system in my previous articles.

I could combine the first two commands, but I decided to keep them separate to distinguish what belongs to the base file system and what’s mine; in this case that’s the /home directory alone:

echo "Copying Docker file system..."
docker exec ${CONTAINER_ID} /bin/sh -c 'for d in home; do tar c "/$d" | tar x -C /export-rootfs; done; exit 0'
docker exec ${CONTAINER_ID} /bin/sh -c 'for d in bin dev etc lib root sbin usr; do tar c "/$d" | tar x -C /export-rootfs; done; exit 0'
docker exec ${CONTAINER_ID} /bin/sh -c 'for dir in proc run sys var; do mkdir /export-rootfs/${dir}; done; exit 0'
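
The tar c | tar x pipe is worth a closer look: it streams a directory out of one root and unpacks it, permissions intact, under another. A self-contained sketch of the same trick (the temp directories and the dummy key content are made up for the demo):

```shell
#!/bin/bash
set -eu
# Replicate /home from a fake source root into a fake export root,
# the same way the docker exec copy above does, but without a container.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "${src}/home/alpine/.ssh"
echo "ssh-rsa AAAA... alpine@firecracker" > "${src}/home/alpine/.ssh/authorized_keys"
# tar from inside the source root, untar into the destination root
( cd "${src}" && tar c home ) | tar x -C "${dst}"
ls "${dst}/home/alpine/.ssh"
```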

When everything is copied, unmount the file system, stop the container and clean up:

echo "Unmounting file system..."
sudo umount "${build_dir}/fsmnt"

echo "Removing docker container..."
docker stop $CONTAINER_ID

echo "Moving file system..."
mv "${build_dir}/rootfs.ext4" "${filesystem_target}"

echo "Cleaning up build directory..."
rm -r "${build_dir}"

echo "Removing Docker image..."
docker rmi ${image_tag}

echo " \\o/ File system written to ${filesystem_target}."

To run it, simply execute /firecracker/docker/create-alpine-3.13.sh. On my machine, assuming the alpine:3.13 Docker image is already pulled, the process takes about 20 seconds.
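Before booting from the image, it doesn’t hurt to sanity-check it. Here’s a sketch of the same dd + mkfs.ext4 sequence on a small scratch file, verified with file and a read-only e2fsck; the scratch path is made up, so point the last two commands at /firecracker/filesystems/alpine-base-root.ext4 to check the real artifact:

```shell
#!/bin/bash
set -eu
# Build a small scratch ext4 image the same way the script does...
img=$(mktemp /tmp/rootfs-XXXXXX.ext4)
dd if=/dev/zero of="${img}" bs=1M count=8 status=none
mkfs.ext4 -q -F "${img}"
# ...then verify: file should recognise it, e2fsck -fn checks it read-only.
file "${img}"
e2fsck -fn "${img}"
```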

§Networking

The resulting VMM would be useless without access to the outside world. My previous write ups didn’t discuss any of that; none of those VMMs were able to reach the internet.

Here, I’m using the method from Julia’s article - use the docker0 bridge. This is really straightforward. I have the following /firecracker/docker/tap-alpine-3.13.sh program:

#!/bin/bash
set -eu
sudo apt-get install bridge-utils -y
# create and configure a tap device
# to launch firecracker VMM on the docker0 bridge
TAP_DEV=alpine-test
CONTAINER_IP=172.17.0.42
GATEWAY_IP=172.17.0.1
DOCKER_MASK_LONG=255.255.255.0
sudo ip tuntap add dev "$TAP_DEV" mode tap
sudo brctl addif docker0 $TAP_DEV
sudo ip link set dev "$TAP_DEV" up
# like Julia Evans, I also need to figure out the meaning of this:
sudo sysctl -w net.ipv4.conf.${TAP_DEV}.proxy_arp=1 > /dev/null
sudo sysctl -w net.ipv6.conf.${TAP_DEV}.disable_ipv6=1 > /dev/null

The gateway IP comes from the docker0 bridge (note the bridge itself is a /16; the narrower 255.255.255.0 mask still covers the addresses used here):

$ ip addr show docker0
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:3c:de:fe:d5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The IP address of the VMM is an arbitrary selection.

Run it with /firecracker/docker/tap-alpine-3.13.sh; the outcome will be similar to:

$ ip link show alpine-test
13: alpine-test: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master docker0 state DOWN mode DEFAULT group default qlen 1000
    link/ether b6:53:d1:78:ee:2d brd ff:ff:ff:ff:ff:ff
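
The tap script has no inverse, so here is a hypothetical teardown (not part of the original setup) that undoes the three setup steps in reverse, wrapped in a function so it can be sourced and called once the VMM is gone:

```shell
#!/bin/bash
# Hypothetical cleanup for the alpine-test tap device; mirrors the
# setup steps in reverse. Call remove_tap after stopping the VMM.
TAP_DEV=alpine-test
remove_tap() {
  sudo ip link set dev "$TAP_DEV" down
  sudo brctl delif docker0 "$TAP_DEV"
  sudo ip tuntap del dev "$TAP_DEV" mode tap
}
```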

Time to configure the VMM.

§VMM configuration file

A couple of things to take note of:

  • ip=172.17.0.42::172.17.0.1:255.255.255.0::eth0:off is of the format ip=${VMM_IP}::${GATEWAY_IP}:${DOCKER_MASK_LONG}::${VMM_INTERFACE_ID}:off
  • network-interfaces[0].host_dev_name matches the value of $TAP_DEV
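
To keep the boot args and the tap script from drifting apart, the ip= string can be assembled from the same variables. A sketch (the variable names mirror the tap script; BOOT_IP is my own label):

```shell
#!/bin/bash
set -eu
# Build the kernel ip= boot argument from the tap-script values so the
# VMM config and the network setup always agree.
VMM_IP=172.17.0.42
GATEWAY_IP=172.17.0.1
DOCKER_MASK_LONG=255.255.255.0
VMM_INTERFACE_ID=eth0
BOOT_IP="ip=${VMM_IP}::${GATEWAY_IP}:${DOCKER_MASK_LONG}::${VMM_INTERFACE_ID}:off"
echo "${BOOT_IP}"
# → ip=172.17.0.42::172.17.0.1:255.255.255.0::eth0:off
```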
cat <<EOF > /firecracker/configs/alpine-config.json
{
  "boot-source": {
    "kernel_image_path": "/firecracker/kernels/vmlinux-v5.8",
    "boot_args": "ro console=ttyS0 noapic reboot=k panic=1 pci=off nomodules random.trust_cpu=on ip=172.17.0.42::172.17.0.1:255.255.255.0::eth0:off"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "/firecracker/filesystems/alpine-base-root.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "network-interfaces": [
      {
          "iface_id": "eth0",
          "guest_mac": "02:FC:00:00:00:05",
          "host_dev_name": "alpine-test"
      }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128,
    "ht_enabled": false
  }
}
EOF
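
The heredoc makes it easy to introduce a stray comma, so a quick JSON syntax check before launching is cheap insurance. A self-contained sketch using python3 against a temp copy (point it at /firecracker/configs/alpine-config.json for the real file):

```shell
#!/bin/bash
set -eu
# Validate JSON syntax with python3's json.tool; a non-zero exit
# means the file would be rejected before the VMM even starts.
cfg=$(mktemp)
cat <<'EOF' > "${cfg}"
{"machine-config": {"vcpu_count": 1, "mem_size_mib": 128, "ht_enabled": false}}
EOF
python3 -m json.tool "${cfg}" > /dev/null && echo "config OK"
```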

§Run the VMM

To start the VMM, simply execute:

sudo firecracker --no-api --config-file /firecracker/configs/alpine-config.json

About two seconds later:

 * Mounting misc binary format filesystem ... [ ok ]
 * Mounting /sys ... [ ok ]
 * Mounting security filesystem ... [ ok ]
 * Mounting debug filesystem ... [ ok ]
 * Mounting SELinux filesystem ... [ ok ]
 * Mounting persistent storage (pstore) filesystem ... [ ok ]
 * Starting local ... [ ok ]

Welcome to Alpine Linux 3.13
Kernel 5.8.0 on an x86_64 (ttyS0)

172 login:

§SSH into the VMM

In another terminal:

ssh -i ~/.ssh/alpine alpine@172.17.0.42
The authenticity of host '172.17.0.42 (172.17.0.42)' can't be established.
ECDSA key fingerprint is SHA256:gYxEJdQIXM3242/yV/RV9qVQBaGSdLoUtpFSmBKEyHE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.42' (ECDSA) to the list of known hosts.
Enter passphrase for key '/home/radek/.ssh/alpine':
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.
172:~$ sudo sh
172:/home/alpine# apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.1-115-gf65775dfbc [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.1-117-g6a5e33f63c [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13880 distinct packages available
172:/home/alpine# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=58 time=18.995 ms
64 bytes from 1.1.1.1: seq=1 ttl=58 time=15.660 ms
64 bytes from 1.1.1.1: seq=2 ttl=58 time=16.246 ms
64 bytes from 1.1.1.1: seq=3 ttl=58 time=17.889 ms
^C
--- 1.1.1.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 15.660/17.197/18.995 ms
172:/home/alpine# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:fc:00:00:00:05 brd ff:ff:ff:ff:ff:ff
172:/home/alpine# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:fc:00:00:00:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.42/24 brd 172.17.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::fc:ff:fe00:5/64 scope link
       valid_lft forever preferred_lft forever

Nice. Everything is working as expected. The UX is not fully complete; to ping stuff I do have to sudo. Whatever, if I can ping the BBC, I’m good:

172:/home/alpine# ping bbc.co.uk
PING bbc.co.uk (151.101.64.81): 56 data bytes
64 bytes from 151.101.64.81: seq=0 ttl=58 time=23.371 ms
64 bytes from 151.101.64.81: seq=1 ttl=58 time=20.238 ms
64 bytes from 151.101.64.81: seq=2 ttl=58 time=24.788 ms
64 bytes from 151.101.64.81: seq=3 ttl=58 time=24.047 ms
^C
--- bbc.co.uk ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 20.238/23.111/24.788 ms

§Next steps

Next time I am going to look at setting up the network with IPAM so the IP addresses are assigned from a given range.

That’s it for today:

172:/home/alpine# reboot
172:/home/alpine# Connection to 172.17.0.42 closed by remote host.
Connection to 172.17.0.42 closed.