Here we present a framework for accelerating enterprise Rust builds to improve feedback loops, collaboration, and development velocity while reducing CI/CD costs.
Rust has garnered attention and praise in the programming world for its combination of safety, speed, concurrency, and programmability, and it has proved to be an excellent choice for building enterprise applications. Compared to languages like C/C++, which lack Rust's memory safety guarantees and can require years of experience before a developer is productive, Rust is far easier to onboard onto. It has also been the most loved programming language in the development community in recent years, thanks to the innovative and efficient design choices made by its authors, such as zero-cost abstractions and ownership, which focus on performance without sacrificing programmability.
Due to the nature of the language design, however, Rust build/compilation times are quite slow and can hinder developer productivity by introducing slow feedback loops. The classic xkcd "Compiling" comic sums up the problem: the more time a developer spends waiting for code to compile, the less time they spend working on the product. This affects overall release timelines and can ripple through the entire product release process.
To help alleviate this problem, this blog post presents various strategies for optimizing Rust build times, which are then incorporated into GitHub Actions. This arms Rust developers to iterate faster and, in turn, work on their projects more efficiently and effectively.
Slow builds can significantly impact development speed and productivity in several ways. In enterprise projects, where time is of the essence, this can have a large impact on a team's engineering habits, release cadence, and future product planning. If an application build is too slow and doesn't allow developers to iterate quickly, the following impacts can be seen:

- Slow feedback loops: developers wait on the compiler instead of testing and refining their changes.
- Context switching: long waits push developers to other tasks, and regaining focus afterwards is costly.
- Slower release cadence: every build sits on the critical path of the release process.
- Higher CI/CD costs: longer builds consume more runner time on every push and pull request.
As seen above, only good can come of optimizing your Rust application's builds. The following sections go through strategies for doing exactly that.
The following strategies for optimizing Rust builds come with their own pros and cons; it is up to you to decide what works best for your build use case.
You need to consider whether the build you are optimizing is a development, release, test, or some other build. Figuring out the right combination of optimizations for each helps you develop and release smoothly.
Release builds of Rust applications tend to be much slower than development builds, because during a release build the compiler performs heavy optimizations to produce a highly optimized application binary. Consequently, you have to be deliberate about each build's optimization choices. The following strategies, which can be used in tandem with each other, help build an optimized build pipeline for Rust applications:
Caching is the most straightforward and also the most crucial strategy for speeding up build times. By caching the target directory and the Cargo registry, you can significantly reduce the time spent compiling dependencies.
For this caching configuration, the popular https://github.com/Swatinem/rust-cache GitHub Action can be used to ease the process of setting up and using a cache for Rust application builds.
Swatinem GitHub Actions configuration
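A minimal workflow using this action might look like the following sketch; the workflow name, trigger, and step layout are illustrative:

```yaml
name: build
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Caches ~/.cargo (registry, git dependencies) and ./target,
      # keyed on the lockfile and toolchain by default
      - uses: Swatinem/rust-cache@v2
      - run: cargo build
```

The action computes sensible cache keys on its own, so no extra configuration is needed for a basic setup.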
Once the basic dependency caching above is covered, a smarter cache, sccache, can be used as a compiler caching tool. It acts as a compiler wrapper and avoids compilation whenever possible, ensuring that not just the dependencies are cached but also the compile-time artifacts that do not need to be recompiled on every build.
sccache GitHub Actions configuration
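A sketch of such a setup, using the mozilla-actions/sccache-action to install sccache on the runner (the surrounding job layout is illustrative):

```yaml
      - name: Install sccache
        uses: mozilla-actions/sccache-action@v0.0.4
      - name: Build with sccache
        env:
          SCCACHE_GHA_ENABLED: "true"   # back sccache with the GitHub Actions cache
          RUSTC_WRAPPER: "sccache"      # wrap every rustc invocation with sccache
        run: cargo build
```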
The above GitHub Actions steps set up sccache's environment variables: RUSTC_WRAPPER dictates which compiler wrapper is used, and SCCACHE_GHA_ENABLED tells sccache to use the GitHub Actions cache.
To learn more about sccache, check out https://github.com/mozilla/sccache/
Rust supports parallel compilation out of the box, which allows you to harness the power of multi-core processors to speed up the build process. To control how much compilation is parallelized, set the codegen-units option in your config.toml.
codegen-units, or code generation units, is the number of parts the code is divided into so that each part can be compiled in parallel, which can increase compilation speed drastically. The downside is that the code will not be optimized as well as it could have been had it not been broken up and compiled piece by piece.
Rust config.toml with high codegen-units config
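For example, a Cargo config that uses a high codegen-units value for faster, less optimized compiles (256 is an illustrative high value; it is also Cargo's default for dev builds):

```toml
# .cargo/config.toml
[profile.dev]
codegen-units = 256  # many parallel codegen units: faster compiles, less optimization
```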
Increasing the number of codegen-units can cause you to miss some potential optimizations; conversely, you can optimize for runtime performance by setting the value to 1. The codebase is then treated as a single piece of code, and compilation is not parallelized.
Rust config.toml with codegen-units pointing to no parallelization
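Such a no-parallelization configuration looks like this:

```toml
# .cargo/config.toml
[profile.release]
codegen-units = 1  # single codegen unit: slower compiles, best optimization
```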
Rust's build system has predefined sets of configuration options called profiles, and by default it uses different profiles for different purposes.
For example, the dev profile is used when building a project during development. This profile prioritizes faster build times and enables debug information, compromising runtime performance. To build with the dev profile, run cargo build in the command line; no flag is required, as this is the default build option.
The release profile is intended for the final version of the application being released out into the world, so naturally it prioritizes the speed of the generated binary at the cost of slower compilation times. To build with the release profile, run cargo build --release in the root directory of your project.
These default profiles can be overridden based on your needs by adding configuration to config.toml. For example, to reduce the optimization level for the release profile:
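A sketch of such an override, matching the values discussed below:

```toml
# config.toml
[profile.release]
opt-level = 2       # reduce from the release default of 3
codegen-units = 16  # allow more parallel code generation
```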
The above configuration reduces opt-level, the optimization level, from 3 (the default for release builds) to 2.
opt-level is a compiler setting that controls how much optimization is applied, denoted by a number or letter. The possible settings and their meanings are:

- 0: no optimizations (the default for dev builds)
- 1: basic optimizations
- 2: some optimizations
- 3: all optimizations (the default for release builds)
- "s": optimize for binary size
- "z": optimize for binary size, also turning off loop vectorization
Apart from setting the opt-level, the configuration sets codegen-units to 16, allowing for more parallelization during compilation.
Consider a project that needs a release build optimized for creating ephemeral previews. This build has to complete faster than a usual Rust release build and does not have to be fully optimized, allowing the Rust application binary to be produced faster so it can be tested in the ephemeral environment.
With the above in mind, the optimization level does not have to be the highest, so opt-level can be set to 2 instead of the default 3. And since we would still like the build to be reasonably fast, let's apply some parallel compilation by setting codegen-units to 4. This would be a good configuration for ephemeral environment builds, but rather than overriding the release profile it makes sense to create a custom profile.
To create a custom profile, add the following to Cargo.toml, which creates a new build profile called ephemeral-build with the configuration we need:
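A sketch of that profile; note that a custom profile must declare which built-in profile it inherits from:

```toml
# Cargo.toml
[profile.ephemeral-build]
inherits = "release"  # start from the release profile's settings
opt-level = 2         # slightly less optimization for a faster build
codegen-units = 4     # some parallel code generation
```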
To use the ephemeral-build profile, pass its name to Cargo with the --profile flag when running the build.
Command to run Rust build with custom profile
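Assuming the profile above is named ephemeral-build, the build command is:

```bash
cargo build --profile ephemeral-build
```

The resulting binary lands in target/ephemeral-build/ rather than target/release/.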
The best and most portable way to ship an application is as a container image. The following Dockerfile simply takes the built application binary. Since the caching optimizations were done in GitHub Actions, the image build no longer has to worry about compilation or caching; all that needs to happen is that the binary is copied into the image, and the image is ready to go. Pretty straightforward.
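A sketch of such a Dockerfile, assuming the binary is named app and is available in the build context from a previous CI step (the base image and paths are illustrative):

```dockerfile
FROM debian:bookworm-slim

# Necessary package updates for the runtime image
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy only the prebuilt binary; no compilation happens during the image build
COPY target/ephemeral-build/app /app/app

# Symlink the binary onto the PATH for easier access
RUN ln -s /app/app /usr/local/bin/app

CMD ["app"]
```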
In the above configuration, after the necessary package updates, only the binary is copied in and then symlinked for easier access.
All of the above configurations can be combined to create a build pipeline for an application image. The pipeline is optimized using the strategies covered earlier in this blog post and produces a container image containing the application binary. The following is what the GitHub Actions pipeline looks like:
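A sketch of the combined pipeline; the workflow name, registry, image tag, and binary path are placeholders, and registry authentication is omitted for brevity:

```yaml
name: ephemeral-build

on:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Cache the cargo registry and target directory
      - uses: Swatinem/rust-cache@v2

      # Install sccache for compile-artifact caching
      - uses: mozilla-actions/sccache-action@v0.0.4

      # Build with the custom ephemeral-build profile
      - name: Build
        env:
          SCCACHE_GHA_ENABLED: "true"
          RUSTC_WRAPPER: "sccache"
        run: cargo build --profile ephemeral-build

      # Package the prebuilt binary into a container image and push it
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: registry.example.com/my-app:${{ github.sha }}
```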
The above GitHub Actions setup optimizes a Rust application build specifically for use in ephemeral environments: the final build is optimized enough to be easy to test against, yet built fast enough to keep the time between iterations short.
The output of the pipeline above is an image optimized for an ephemeral environment setup. The pipeline can be extended to create ephemeral environments for every pull request using Uffizzi. This blog post explains how to trigger Uffizzi Ephemeral Environments from GitHub Actions while utilizing your existing image build. If you run into issues setting this pipeline up, reach out to Uffizzi; we are always here to help you out!