
At KubeCon 2023 in Amsterdam I attended a talk about Buildpacks. It was not the first time I had heard about Buildpacks, but the talk in the maintainer track was very entertaining, and features like advanced caching and reproducibility sounded very convincing. So in this post I'll try to replace an existing build pipeline based on plain docker with Buildpacks. I'll also go into a bit of detail on how to use Buildpacks and show the differences in the resulting images.

Why use Buildpacks instead of plain docker build?

You can find all features on this overview page, but in my opinion the most relevant ones are:

  1. Bill-of-Materials

I talked about the risks of adding dependencies in an earlier post. Adding a Software Bill of Materials (SBOM) is the first step towards caring about your dependencies, as you will at least provide a list of them with the released artifact of your application.

  2. Advanced Caching

This will speed up your builds, especially if you just change a single line of code. In my short experience with Buildpacks this works very well: after an initial build fills the caches, subsequent builds are very fast.

  3. Minimal app image

We will come back to this later, but in short: the images are relatively small, though larger than a multi-stage build that uses a scratch image as its final base.

  4. Reproducibility

One downside of reproducibility is that the creation timestamp of the image will be ~40 years in the past. But you can easily adjust that by supplying --creation-time now as an additional argument to pack, as shown below. Otherwise, this is a much-needed feature (Go, for example, already produces reproducible builds in almost all cases). [1]
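For illustration, a minimal invocation with that flag could look like this (my-app is a placeholder image name):

$ pack build my-app --creation-time now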

One major thing that is missing as of now is support for signing images. There is an RFC, but it seems rather inactive. [2]

Buildpacks Concepts

All of this is based on this upstream documentation.

Buildpacks uses builders to produce OCI images. A builder can itself consist of multiple buildpacks, and a single buildpack normally targets a single language, e.g. Go or Python. That combination of buildpacks, plus a lifecycle layer from the Buildpacks project itself, is packaged as a builder image. At build time this builder image is combined with your app code and then builds the actual OCI app image. [3]

If a buildpack can't detect any relevant files (e.g. a go.mod) and therefore can't build, it will simply fail. The interesting part is that you can have multiple buildpacks in a builder, for example a Go buildpack to build your backend and a Node buildpack to build your frontend, as sketched below.
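As a rough sketch of how that could look (the image name is a placeholder, and paketo-buildpacks/nodejs is my assumption for the Node.js buildpack ID; check the Paketo docs for the current IDs), pack accepts the --buildpack flag multiple times:

$ pack build my-fullstack-app --buildpack paketo-buildpacks/go --buildpack paketo-buildpacks/nodejs --builder paketobuildpacks/builder:base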

Instead of choosing a base image, as with Docker, you now need to decide which builder to use. There are multiple providers of builders, even public cloud providers like Google, but a good source seems to be the Paketo Builders.
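If you are unsure where to start, pack itself can suggest a few trusted builders (the exact output depends on your pack version):

$ pack builder suggest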

Comparison

Let's say we want to containerize a simple Go web service and instead of using a Dockerfile and docker/podman to build it, we want to use Buildpacks. As always you can find all related files in this GitHub repository.

The Dockerfile looks like this:

# First stage: build the binary with the full Go toolchain
FROM golang:1.20-alpine AS builder

COPY . /app
WORKDIR /app

# Fetch the dependencies declared in go.mod
RUN go get ./...

# Build a single binary named "test"
RUN go build -o test ./...

# Second stage: copy only the binary into an otherwise empty image
FROM scratch

COPY --from=builder /app/test /test

ENTRYPOINT ["/test"]

As stated earlier, for Buildpacks we don't need a Dockerfile, but a builder we want to use. In my case I decided to go with the paketo-buildpacks/go buildpack and the paketobuildpacks/builder:tiny builder. [4] You can reproduce this in the repository mentioned earlier by running make size.
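Outside of the Makefile, the local build would look roughly like this (the image name is a placeholder; the flags match the GitLab CI job shown later):

$ pack build buildpacks-test-pack-tiny --path . --buildpack paketo-buildpacks/go --builder paketobuildpacks/builder:tiny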

The size of the resulting images depends on the builder used (tiny or base):

Name                       Size
buildpacks-test-pack-go    107MB
buildpacks-test-pack-tiny  30.5MB
buildpacks-test-docker     6.62MB

So the plain docker image comes in at 6.62MB, while the Buildpacks images are 30.5MB with the tiny builder and 107MB with the base builder.

Let's take a dive and figure out what is in these images. dive provides an easy way to see how much waste there is in an image when run in CI=true mode. To be clear, this is in no way a perfect comparison, but it is a good indicator of which parts of the images might not be needed at all.

Image                      Efficiency  Wasted Bytes            User Wasted Percent
buildpacks-test-pack-go    92.1218 %   15691768 bytes (16 MB)  36.1717 %
buildpacks-test-pack-tiny  99.9975 %   1497 bytes (1.5 kB)     0.0116 %
buildpacks-test-docker     100 %       0 bytes (0 B)           NaN %

As expected, the image built by docker has 0 bytes wasted, as it is based on a scratch image and we just copied the Go binary into it. But the buildpacks-tiny image comes very close, with an efficiency of 99.9975 % and only 1.5 kB wasted. Also as expected, the buildpacks-go image, which is based on paketobuildpacks/builder:base (so not tiny), is the largest image, with 16 MB wasted. The largest file in that image seems to be a Perl runtime at /usr/bin/perl with 4.2 MB.

I'm not entirely sure how dive calculates these metrics. Since we know our binary is ~6.6 MB in size, that leaves more than 20 MB in the buildpacks-test-pack-tiny image that is maybe not needed. Parts of it could be the SBOM or vendored modules, but as said earlier, I'm not sure about this.
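One quick, if rough, way to check where those megabytes come from is to look at the per-layer sizes, for example with docker history:

$ docker history buildpacks-test-pack-tiny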

$ CI=true dive buildpacks-test-pack-go

Using default CI config
Image Source: docker://buildpacks-test-pack-go
Fetching image... (this can take a while for large images)
Analyzing image...
  efficiency: 92.1218 %
  wastedBytes: 15691768 bytes (16 MB)
  userWastedPercent: 36.1717 %
Inefficient Files:
Count  Wasted Space  File Path
    2        4.2 MB  /usr/bin/perl
    2        1.3 MB  /var/cache/debconf/templates.dat
    2        1.1 MB  /usr/lib/x86_64-linux-gnu/perl-base/auto/re/re.so
$ CI=true dive buildpacks-test-pack-tiny

Using default CI config
Image Source: docker://buildpacks-test-pack-tiny
Fetching image... (this can take a while for large images)
Analyzing image...
  efficiency: 99.9975 %
  wastedBytes: 1497 bytes (1.5 kB)
  userWastedPercent: 0.0116 %
Inefficient Files:
Count  Wasted Space  File Path
    2        1.0 kB  /etc/os-release
    2         339 B  /etc/passwd
    2         140 B  /etc/group
$ CI=true dive buildpacks-test-docker

Using default CI config
Image Source: docker://buildpacks-test-docker
Fetching image... (this can take a while for large images)
Analyzing image...
  efficiency: 100.0000 %
  wastedBytes: 0 bytes (0 B)
  userWastedPercent: NaN %
Inefficient Files:
Count  Wasted Space  File Path
None

SBOM

After talking about the SBOM multiple times now, I of course want to see how it looks for my images. To download the SBOM you can use pack sbom download <image>. This will export the SBOM layer of the image into a folder named layers in the current directory.
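For the tiny image from the comparison above, that would be:

$ pack sbom download buildpacks-test-pack-tiny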

The exported folder for the tiny image then looks like this:

$ tree layers

layers
└── sbom
    └── launch
        ├── paketo-buildpacks_ca-certificates
        │   └── helper
        │       └── sbom.syft.json
        ├── paketo-buildpacks_go-build
        │   └── targets
        │       ├── sbom.cdx.json
        │       ├── sbom.spdx.json
        │       └── sbom.syft.json
        └── sbom.legacy.json

7 directories, 5 files

There are multiple SBOMs in three formats: CycloneDX, Syft and SPDX. All of them look a bit different, but contain more or less the same content, like the Go compiler version used, build flags and dependencies (that is why I've added zap as a logging dependency). [5]
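As a quick sanity check, you can list the recorded dependencies from the Syft-format SBOM with jq (the field names follow the Syft JSON schema; adjust the query if your version differs):

$ jq -r '.artifacts[] | "\(.name) \(.version)"' layers/sbom/launch/paketo-buildpacks_go-build/targets/sbom.syft.json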

Using Buildpacks on GitLab CI

I was thinking about using it in a container, like with a docker-in-docker build setup, but in the end I reused my existing shell runner.

There are ways to use it directly within a container, but these are rather cumbersome compared to using plain pack. [6]

Therefore, I decided to just reuse an existing shell runner and install pack on it. This has the big disadvantage of still needing a privileged runner with root permissions (being in the docker group is effectively equivalent to root), but in theory you could use Podman (or something else) instead of docker on your runner and thereby build images without needing root permissions.
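I haven't tried this myself, but based on the upstream Podman documentation, pointing pack at a rootless Podman socket could look roughly like this (socket paths vary by distribution; my-app is a placeholder):

$ systemctl --user enable --now podman.socket
$ export DOCKER_HOST="unix://$(podman info -f '{{.Host.RemoteSocket.Path}}')"
$ pack build my-app --builder paketobuildpacks/builder:tiny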

With that I used the following GitLab CI configuration:

variables:
    BUILDPACK_VERSION: 0.29.0

backend-build:
    tags:
        - docker-build
    stage: build
    before_script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    script:
        - cd backend
        # Download the pack CLI into the working directory
        - (curl -sSL "https://github.com/buildpacks/pack/releases/download/v${BUILDPACK_VERSION}/pack-v${BUILDPACK_VERSION}-linux.tgz" | tar -C . --no-same-owner -xzv pack)
        # Build with the Go buildpack, the tiny builder and a current creation timestamp
        - ./pack build gitlab.cloudf.de:4567/andre/shorty/backend:latest --path . --buildpack paketo-buildpacks/go --builder paketobuildpacks/builder:tiny --creation-time now
        # pack stores the image in the local docker daemon, so a plain push suffices
        - docker push gitlab.cloudf.de:4567/andre/shorty/backend:latest

The build times are also very good once the first build has passed and layers can be reused. For a production project, I've seen build times of under a minute for code changes, even with very limited network upload speeds.

Conclusion

If you know how to write Dockerfiles, Buildpacks might feel a bit weird at the beginning, as there is no need to define all the dependencies of your application in a file. All of this complexity moved into the builders, which can be beneficial, as you can provide your own builder images, for example in a business environment.

The downside is that you have limited control over what ends up in your image, as the builder has full control over it.

In general I like the speed and the capabilities of Buildpacks mentioned earlier. The only thing missing is signing support, but otherwise I'll try Buildpacks for some private projects in the future and may think about using it at work (but with custom builders).

Sources


  1. https://github.com/golang/go/issues/57120

  2. https://github.com/buildpacks/rfcs/pull/195

  3. https://buildpacks.io/docs/concepts/#what-is-a-builder

  4. https://paketo.io/docs/howto/go/

  5. https://buildpacks.io/docs/features/bill-of-materials/

  6. https://blog.codecentric.de/en/2021/10/gitlab-ci-paketo-buildpacks/