April 10, 2023 · 11 min read

Optimizing Rust Builds for Faster GitHub Actions Pipelines

Here we present a framework for accelerating enterprise Rust builds to improve feedback loops, collaboration, and deployment frequency while reducing CI/CD costs.

Why are Rust build times slow?

Rust is a high-performance, developer-friendly programming language

Rust has garnered attention and praise in the programming world for its combination of safety, speed, concurrency and programmability, and it has proved to be an excellent choice for building enterprise applications. Compared to languages like C/C++, which are not memory safe and can demand years of experience before a developer becomes productive, Rust is far easier to onboard to. It has also repeatedly been voted the most loved programming language in the development community in recent years, thanks to innovative and efficient design choices by its authors, such as zero-cost abstractions and ownership, which focus on performance without sacrificing programmability.

The compilation time bottleneck

Due to the nature of the language design, Rust build/compilation times are quite slow, which can hinder developer productivity by introducing slow feedback loops. The classic xkcd "Compiling" comic sums up the problem and is very relevant here: the longer a developer waits for their code to compile, the less time they spend working on the product. This affects overall release timelines and can cause a butterfly effect throughout the product release process.

To help alleviate this problem, this blog post presents various strategies to optimize Rust build times, which will then be incorporated into GitHub Actions. This will arm Rust developers to iterate faster and, in turn, help them work on projects more efficiently and effectively.

Unraveling the butterfly effect of slow build speeds in Rust

Slow builds can significantly impact development speed and productivity in several ways. In enterprise projects, where time is of the essence, this can have a large impact on the engineering habits of the team, its release cadence and future product planning. The following impacts can be seen when an application build is too slow to let developers iterate quickly:

  • Longer feedback loops: Slow builds mean developers have to wait for the build process to complete before they can test their changes or receive automated test results. If you are using Uffizzi, preview environments for pull requests also take longer to build and deploy. The waiting period induced by slow build times (20 mins on average) leads to context switching to other tasks or idle time, which breaks the development flow: when returning after the build and tests finish, the developer has to recontextualize with the codebase before iterating again, time that would not be lost if builds were faster.
  • Hindered collaboration: Longer feedback loops can create a disconnect between engineering team members. Because individual developers take longer to ship their work, the whole team moves slower than it should, less knowledge is shared between peers during a sprint, and both collaboration and the product's growth suffer.
  • Impaired deployment frequency: Overall deployment frequency drops because fewer bug fixes and features ship on time. This directly affects customer satisfaction when customers are waiting on a bug fix or a particular feature. Business agility is impacted as well: the business cannot move fast enough to reach its goals and get the ideal product to customers, which in turn hurts its ability to respond to market changes. Slow release cycles mean new features don't receive feedback fast enough, and R&D can only succeed when it knows what has and has not worked for the product in the past.
  • Reduced code quality: When builds take a long time, developers are strapped for time ahead of release day and may be less inclined to write good code, causing low-quality merges. This can lower overall code quality and increase the likelihood of introducing errors or regressions.
  • Increased CI/CD costs: Build time directly drives CI/CD costs; shaving even a few minutes off a build can save a lot of money. This is especially significant for enterprises with large-scale projects or multiple applications, where the cost impact multiplies.

How these challenges can be solved by optimizing the build pipeline

  • Faster feedback loops: If the build pipeline is optimized so that builds finish before the codebase leaves the developer's head, more changes can be made in far less time.
  • Enhanced collaboration: Developers who can move fast also have time to help their peers progress, improving overall team morale and interest in developing the product.
  • Improved deployment frequency: As the team is able to release faster, the release cadence can increase, and expectations of the engineering team can be better aligned with business goals.
  • Elevated code quality: With build time no longer eating into iterations, developers can experiment more with the code. The extra time freed from waiting on builds can be used creatively to solve the problems at hand in better ways, promoting a culture of excellence in the engineering team.
  • Reduced CI/CD costs: CI/CD costs can drop drastically after optimizing the build process, and the build system can be continuously monitored for places where further optimizations are possible.

As seen above, optimizing your Rust application's builds has clear benefits across the board. The following section walks through strategies for doing so.

Strategies for optimizing Rust application builds

Being decisive about which strategies to use and how to use them

Each of the following strategies for optimizing Rust builds comes with its own pros and cons. It is up to the user to decide what works best for their build use case.

The user needs to think about whether the build they are optimizing is a developer, release, test or some other build. Figuring out the right combination of build optimizations for each helps the user develop and release smoothly.

Release builds of Rust applications tend to be much slower than developer builds because of the optimizations the compiler performs during a release build to produce the most optimized binary possible. Consequently, the user has to be decisive about their individual build optimization choices. The following are strategies for building an optimized build pipeline for Rust applications; they can be used in tandem with each other:
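Before picking strategies, it helps to measure where the time actually goes. Cargo ships a built-in timings report (stable since Cargo 1.60) that shows how long each crate takes to compile and how well the build parallelizes:

```shell
# Writes an HTML report to target/cargo-timings/cargo-timing.html
# showing per-crate compile times and the build's parallelism.
cargo build --release --timings
```

Crates that dominate the timeline are the first candidates for the caching and profile tweaks discussed in the following sections.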

Effective cache utilization

Caching is the most straightforward and also the most crucial strategy for speeding up build times. By caching the target directory and the cargo registry, you can significantly reduce the time spent compiling dependencies.

  • Cache the target directory: This directory contains build artifacts, and caching it will save time on subsequent builds.
  • Cache the cargo registry: This ensures that dependencies are not re-downloaded or recompiled unnecessarily.

For the above caching configuration, the popular https://github.com/Swatinem/rust-cache GitHub Action can be used to ease the process of setting up and using the cache for Rust application builds.

- name: Cache dependencies
  uses: Swatinem/rust-cache@v2.2.1

Swatinem Github actions configuration
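If you would rather manage the cache by hand than rely on rust-cache's defaults, a minimal sketch using the generic actions/cache action could look like the following (keying the cache on Cargo.lock is an assumption; adjust the paths for your workspace layout):

```yaml
- name: Cache cargo registry and build artifacts
  uses: actions/cache@v3
  with:
    # Cargo registry, git dependencies and the target directory
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    # Invalidated whenever the dependency tree changes
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
```

rust-cache does essentially this for you, with extra pruning of stale artifacts.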

After the basic dependency caching above is covered, sccache, a smarter compiler cache, can be layered on top. It acts as a compiler wrapper and avoids compilation whenever possible. This ensures that not just the dependencies but also the compile-time artifacts that do not need to be recompiled on every build are cached.

- name: Configure sccache
  run: |
      echo "RUSTC_WRAPPER=sccache" >> $GITHUB_ENV
      echo "SCCACHE_GHA_ENABLED=true" >> $GITHUB_ENV

- name: Run sccache-cache
  uses: mozilla-actions/sccache-action@v0.0.2

sccache Github actions configuration

The above GitHub Actions steps set up the sccache environment variables: RUSTC_WRAPPER dictates which compiler wrapper is used, and SCCACHE_GHA_ENABLED tells sccache to use the GitHub Actions Cache.

To learn more about sccache, check out https://github.com/mozilla/sccache/
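To confirm the cache is actually being hit, sccache can print its statistics. A step like the following at the end of the job (optional, but a common sanity check) reports cache hits and misses:

```yaml
- name: Show sccache statistics
  run: sccache --show-stats
```

A low hit rate after a warm run usually points at unstable cache keys or non-reproducible compiler flags.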

Parallel compilation

Rust supports parallel compilation out of the box, which allows you to harness the power of multi-core processors to speed up the build process. To control it, set the codegen-units option in your config.toml.

The codegen-units, or code generation units, value is the number of parts the code is divided into so that each part can be compiled in parallel, which can increase compilation speed drastically. The downside is that the code cannot be optimized as well as it would be if it were not broken up and compiled piece by piece.

[profile.dev]
codegen-units = 256

Rust config.toml with a high codegen-units config (256 is the dev profile default)

Increasing the number of codegen-units can cause you to miss some potential optimizations; to optimize for runtime performance instead, set the value to 1. The codebase is then treated as a single piece of code and compiled with no parallelization.

[profile.release]
codegen-units = 1

Rust config.toml with codegen-units pointing to no parallelization

Profile overrides

The build system in Rust has predefined sets of configuration options. These sets are called profiles.

By default, Rust uses different build profiles for different purposes.

For example, the dev profile is used when building a project during development. This profile prioritizes faster build times and enables debug information and debug assertions, compromising runtime performance. To build with the dev profile, run cargo build in the command line. No flag is required to specify that this is a dev build, as it is the default.

The release profile is intended for the final version of the application being released out into the world, so it naturally prioritizes the speed of the generated binary at the cost of slower compilation times. To build with the release profile, simply run cargo build --release in the root directory of your project.
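The two profiles can even be told apart from inside the program: debug assertions are enabled by default under the dev profile and disabled under release. The sketch below uses this to report which kind of build is running (profile_name is a hypothetical helper for illustration, not part of any API):

```rust
// Reports which default Cargo profile the binary was built with.
// `debug_assertions` is on for `cargo build` (dev) and off for
// `cargo build --release`.
fn profile_name() -> &'static str {
    if cfg!(debug_assertions) {
        "dev"
    } else {
        "release"
    }
}

fn main() {
    println!("compiled with the {} profile", profile_name());
}
```

This is handy for logging which flavor of binary is deployed to an environment.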

These default profiles can be overridden based on the user's needs by adding configuration to config.toml. For example, to reduce the optimization level for the release profile:

[profile.release]
opt-level = 2
codegen-units = 16

config.toml

The above configuration reduces the opt-level, or optimization level, from 3 (the release default) to 2.

The opt-level is a compiler setting that controls how aggressively the generated code is optimized, denoted by a number. The settings are as follows:

  • opt-level = 0: No optimization. This setting prioritizes fast compilation times, making it suitable for development and debugging and compromising on runtime performance.
  • opt-level = 1: Basic optimization. Provides a balance between compilation speed and runtime performance, good for incremental builds during development.
  • opt-level = 2: Higher level of optimization. Improves the runtime performance of the generated binary at the cost of slower compilation times; suitable for release builds that can accept slightly less than maximal optimization.
  • opt-level = 3: Highest level of optimization. Focuses on maximizing the performance of the generated binary. Results in significantly slower compilation times with debugging made difficult due to aggressive optimization.

Apart from setting the opt-level, the codegen-units setting is increased to 16, allowing for more parallelization during compilation.
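Profile settings can also be overridden per package, which enables a useful middle ground for dev builds: keep your own crate at a low opt-level for fast rebuilds while compiling all dependencies, which rarely change and stay cached, with optimizations. A sketch:

```toml
# Fast rebuilds for the workspace's own code
[profile.dev]
opt-level = 0

# ...but optimize every dependency; each compiles once and is then cached
[profile.dev.package."*"]
opt-level = 2
```

This keeps the inner development loop fast without paying the runtime cost of fully unoptimized dependencies.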

Applying configuration for faster release builds alongside effective cache utilization in Github Actions

The setup

Consider a project that needs a release build optimized for creating ephemeral previews. This build has to complete faster than the usual Rust release build and does not have to be fully optimized, allowing the application binary to be created faster and then used for testing in the ephemeral environment.

Rust build configuration  

With the above in mind, the optimization level does not have to be the highest, so we can set the opt-level to 2 instead of the default 3. Considering that we would still like the build to be faster, let's apply some parallel compilation by setting codegen-units to 8. This would be a good configuration for ephemeral environment builds, and it makes sense to put it in a custom profile.

To create a custom profile, add the following to Cargo.toml, which creates a new build profile called ephemeral-build with the configuration we need.

[profile.ephemeral-build]
inherits = "release"
opt-level = 2
codegen-units = 8

Cargo.toml (custom profiles must declare which built-in profile they inherit from)

To use the ephemeral-build profile, pass it to cargo with the --profile flag (custom profiles are supported on Cargo 1.57 and later). Note that the output of a custom profile goes to target/ephemeral-build/ (or target/<target-triple>/ephemeral-build/ when cross-compiling) rather than target/release/.

cargo build --profile ephemeral-build

Command to run Rust build with the custom profile

Dockerfile configuration

The most portable way to ship an application is through a container image. The following Dockerfile just takes the built application binary: since the caching optimizations were done in Github Actions, the image build no longer has to worry about compilation or caching. All that needs to happen is that the binary is copied into the image, which is pretty straightforward.

FROM alpine:latest

RUN apk update --quiet \
&& apk add -q --no-cache libgcc tini curl

COPY target/x86_64-unknown-linux-musl/ephemeral-build/app /bin/app
RUN ln -s /bin/app /app

ENTRYPOINT ["app"]

Dockerfile

In the above configuration, after the package index is updated and the runtime dependencies are installed, only the binary is copied in and then symlinked for easier access.

Github Actions configuration  

All the above configurations can be used together to create a build pipeline for an application image. The pipeline is optimized using the strategies mentioned earlier in this blog post and results in a container image containing the application binary. The following is what the Github Actions pipeline looks like:

name: Rust application ephemeral environment build

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Install Rust
      uses: actions-rs/toolchain@v1
      with:
          toolchain: stable
          override: true
          target: x86_64-unknown-linux-musl

    - name: Configure sccache env vars
      run: |
          echo "RUSTC_WRAPPER=sccache" >> $GITHUB_ENV
          echo "SCCACHE_GHA_ENABLED=true" >> $GITHUB_ENV

    - name: Run sccache-cache
      uses: mozilla-actions/sccache-action@v0.0.2

    - name: Run build
      uses: actions-rs/cargo@v1
      with:
          command: build
          args: --target x86_64-unknown-linux-musl --profile ephemeral-build

Github Actions workflow configuration

  • As the pipeline is initiated, the first thing that happens is that the repository is checked out.
  • Rust is installed in the next step. The x86_64-unknown-linux-musl target is used for the install and build because the final container image is based on alpine:latest, and for the application to run in an Alpine container it must be built for the MUSL target.
  • The necessary environment variables are set:
  • RUSTC_WRAPPER, so that sccache is used as the Rust compiler wrapper.
  • SCCACHE_GHA_ENABLED, so that sccache uses the Github Actions Cache.
  • Finally, the build is run with the --profile ephemeral-build flag so that the custom profile is used instead of the default release profile.

The above Github Actions setup optimizes a Rust application build specifically for use in ephemeral environments. The final application build is optimized enough to be easy to test with and built fast enough so as to not take too much time between iterations. This is perfect for ephemeral environment builds.

Next Steps: Create an ephemeral environment on every pull request for your Rust application

The output of the pipeline above is an image optimized for an ephemeral environment setup. This pipeline can be extended to create ephemeral environments for every pull request using Uffizzi. This blog post explains how one can trigger Uffizzi Ephemeral Environments from Github Actions while utilizing your existing image build. If you run into issues setting this pipeline up, reach out to Uffizzi; we are always here to help you out!
