Towards the end of the Firecracker VMM with additional disks article[1], I concluded that I didn’t know how to live resize an attached drive. It turns out it is possible, and it’s very easy to do using the Firecracker VMM API.
To launch the VMM with the API, I have to drop the --no-api argument (obviously) and use --api-sock with the path to the socket file. In a production system, I’d use a directory other than /tmp.
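With that, the launch command from the previous article, minus --no-api, looks roughly like this (a sketch; the exact socket and config file names here are my assumptions):

# launch the VMM with the API socket enabled
sudo firecracker --api-sock /tmp/firecracker.socket --config-file vmm-config.json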
§The VMM API
The Firecracker server exposes a Swagger-documented API on a Unix socket available under the --api-sock path. There is one instance of the API per socket file - in essence, one API for every VMM instance. A VMM will not start if the socket file already exists.
Because the firecracker command is executed with elevated privileges and the socket file is owned by the elevated user, curl has to be executed with elevated privileges as well.
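For example, hitting the API root over the socket looks like this (a sketch, assuming the /tmp/firecracker.socket path from above):

# query the instance info endpoint over the Unix socket
sudo curl --unix-socket /tmp/firecracker.socket -i http://localhost/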
The Firecracker API consumes and returns JSON. At the time of writing, the Swagger file can be viewed on GitHub[2], and some introductory examples are available here[3].
The API offers two classes of operations: pre-boot and post-boot. The pre-boot ones are the interesting ones, and they are clearly documented as such. A quick glance shows that the boot source, network interfaces, drives and the balloon device can be configured at the pre-boot stage. A VMM can also be launched from a snapshot.
There is clearly potential here for an AWS-like management plane along the lines of EBS, snapshots, or ENI.
§Live resize the drive
Back to the topic. Right now, there is no API call to list the drives attached to the VMM. Neither / nor /machine-config returns that info.
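For the record, these are the two calls I checked (same socket path assumption as before):

sudo curl --unix-socket /tmp/firecracker.socket -s http://localhost/
sudo curl --unix-socket /tmp/firecracker.socket -s http://localhost/machine-config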
One has to either have access to the VMM config file or use the MMDS (microVM Metadata Service) to store that info at boot time. I’ll have a look at the MMDS some other time.
Back to the drive. I do know the ID of the drive I was previously trying to resize. It’s vol2, stored at /firecracker/filesystems/alpine-vol2.ext4.
The API endpoint I’m investigating is /drives/{drive_id}. The PUT operation is the pre-boot one, and it does complain when executed on a running instance.
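My attempt looked more or less like this (a sketch; the request body mirrors the drive definition from the previous article, and the socket path is the same assumption as before):

sudo curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT http://localhost/drives/vol2 \
    -H 'Content-Type: application/json' \
    -d '{"drive_id": "vol2", "path_on_host": "/firecracker/filesystems/alpine-vol2.ext4", "is_root_device": false, "is_read_only": false}'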
HTTP/1.1 400
Server: Firecracker API
Connection: keep-alive
Content-Type: application/json
Content-Length: 88
{"fault_message":"The requested operation is not supported after starting the microVM."}
§Just PATCH it?
The PATCH operation allows me to tell the VMM that something about the underlying volume has changed. First, a glance at the fdisk -l output inside the running VMM:
Disk /dev/vda: 500 MiB, 524288000 bytes, 1024000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/vdb: 500 MiB, 524288000 bytes, 1024000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The device I’m interested in is /dev/vdb, currently 500 MiB.
So is live resize as simple as…, surely not…, is it … ?
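The host-side step is growing the backing file by 50 MiB. The exact command isn’t shown here, but appending zeros with dd would produce output like the one below (file path taken from earlier; the dd flags are my choice):

sudo dd if=/dev/zero of=/firecracker/filesystems/alpine-vol2.ext4 \
    bs=1M count=50 conv=notrunc oflag=append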
50+0 records in
50+0 records out
52428800 bytes (52 MB, 50 MiB) copied, 0.0442937 s, 1.2 GB/s
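…followed by a PATCH telling the VMM that the drive’s backing file has changed, something along these lines (a sketch, same socket path assumption):

sudo curl --unix-socket /tmp/firecracker.socket -i \
    -X PATCH http://localhost/drives/vol2 \
    -H 'Content-Type: application/json' \
    -d '{"drive_id": "vol2", "path_on_host": "/firecracker/filesystems/alpine-vol2.ext4"}'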
HTTP/1.1 204
Server: Firecracker API
Connection: keep-alive
The guest kernel logs the following on the Firecracker console:
[ 148.225812] virtio_blk virtio1: [vdb] new size: 1126400 512-byte logical blocks (577 MB/550 MiB)
[ 148.227542] vdb: detected capacity change from 524288000 to 576716800
A quick look at fdisk -l again:
Disk /dev/vdb: 550 MiB, 576716800 bytes, 1126400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Ah, yes! I can now use resize2fs to extend the existing ext4 filesystem to fit the new size. Awesome!
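Inside the guest, that’s a one-liner; with no size argument, resize2fs grows the filesystem to fill the device:

# run inside the guest, as root
resize2fs /dev/vdb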
Oh, and I can now shut the VMM down without SSH:
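Presumably via the SendCtrlAltDel action, something like this (a sketch, same socket path assumption):

sudo curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT http://localhost/actions \
    -H 'Content-Type: application/json' \
    -d '{"action_type": "SendCtrlAltDel"}'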
HTTP/1.1 204
Server: Firecracker API
Connection: keep-alive
- Firecracker server Swagger file at the time of writing the article, version v0.22.4