Introduction

cachepot is a fork of sccache.

The purpose of this fork is to introduce advanced security concepts to avoid certain attack scenarios as well as preventing bitrot.

It will focus on the distributed compile cache mode, but will also support the current client only mode.

Scope

This document explains the status quo and defines security goals; once major milestones are achieved, it will serve as the reference documentation.

cachepot distributed compilation quickstart

This is a quick start guide to getting distributed compilation working with cachepot. This guide primarily covers Linux clients. macOS and Windows clients are supported but have seen significantly less testing.

Get cachepot binaries

Either download pre-built cachepot binaries (not currently available), or build cachepot locally with the dist-client and dist-worker features enabled:

cargo build --release --features="dist-client dist-worker"


The target/release/cachepot binary will be used on the client, and the target/release/cachepot-dist binary will be used on the scheduler and build worker.

If you're only planning to use the client, the client feature is enabled by default, so a plain cargo install cachepot should do the trick.

Configure a scheduler

If you're adding a worker to a cluster that has already been set up, skip ahead to configuring a build worker.

The scheduler is a daemon that manages compile requests from clients and parcels them out to build workers. You only need one of these per cachepot setup. Currently only Linux is supported for running the scheduler.

Create a scheduler.conf file to configure client/worker authentication. A minimal example looks like:

# The socket address the scheduler will listen on. It's strongly recommended
# to listen on localhost and put a HTTPS worker in front of it.

[client_auth]
type = "token"
token = "my client token"

[worker_auth]
type = "jwt_hs256"
secret_key = "my secret key"


Mozilla build workers will typically require clients to be authenticated with the Mozilla identity system.

To configure the scheduler for this, the client_auth section should be as follows, so that client tokens are validated with the Mozilla service:

[client_auth]
type = "mozilla"
required_groups = ["group_name"]


Where group_name is a Mozilla LDAP group. Users will be required to belong to this group to successfully authenticate with the scheduler.

Start the scheduler by running:

cachepot-dist scheduler --config scheduler.conf


Like the local coordinator, the scheduler process will daemonize itself unless CACHEPOT_NO_DAEMON=1 is set. If the scheduler fails to start, you may need to set RUST_LOG=trace when starting it to get useful diagnostics (or, for less noisy logs, RUST_LOG=cachepot=trace,cachepot-dist=trace).

Configure a build worker

A build worker communicates with the scheduler and executes compiles requested by clients. Only Linux is supported for running a build worker, but a worker can execute cross-compile requests from macOS/Windows clients.

The build worker requires bubblewrap (version 0.3.0 or later) to sandbox execution. Verify your bubblewrap version before attempting to run the worker. On Ubuntu 18.10+ you can install it with apt install bubblewrap. If you build it from source, you will first need to install your distro's equivalent of the libcap-dev package.
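
The minimum-version check can be done mechanically; a small sketch (the installed value below is an example; in practice read it from bwrap --version):

```shell
# Compare a bubblewrap version string against the 0.3.0 minimum.
version_ge() {
    # True if $1 >= $2 when compared as version numbers.
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
installed="0.4.1"   # example value; use "$(bwrap --version | awk '{print $2}')"
if version_ge "$installed" "0.3.0"; then
    echo "bubblewrap $installed is new enough"
else
    echo "bubblewrap $installed is too old, need >= 0.3.0" >&2
fi
```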

Create a worker.conf file to configure authentication, storage locations, network addresses and the path to bubblewrap. A minimal example looks like:

# This is where client toolchains will be stored.
cache_dir = "/tmp/toolchains"
# The maximum size of the toolchain cache, in bytes.
# If unspecified the default is 10GB.
# toolchain_cache_size = 10737418240
# A public IP address and port that clients will use to connect to this builder.
# The URL used to connect to the scheduler (should use https, given an ideal
# setup of a HTTPS worker in front of the scheduler)
scheduler_url = "https://192.168.1.1"

[builder]
type = "overlay"
# The directory under which a sandboxed filesystem will be created for builds.
build_dir = "/tmp/build"
# The path to the bubblewrap version 0.3.0+ bwrap binary.
bwrap_path = "/usr/bin/bwrap"

[scheduler_auth]
type = "jwt_token"
# This will be generated by the generate-jwt-hs256-worker-token command or
# provided by an administrator of the cachepot cluster.
token = "my worker's token"
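
The worker token can be derived from the scheduler's secret key. A hedged sketch, assuming cachepot-dist kept the auth subcommands of its sccache lineage (with "server" renamed to "worker"); the address is a hypothetical public address of this build worker:

```shell
# Run on the scheduler side; subcommand and flag names follow the sccache
# lineage and may differ slightly in practice.
# 192.168.1.1:10501 is a hypothetical worker public address.
cachepot-dist auth generate-jwt-hs256-worker-token \
    --secret-key "my secret key" \
    --worker 192.168.1.1:10501
```

The resulting token goes into the scheduler_auth section above.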


Due to bubblewrap requirements, the build worker currently must be run as root. Start it by running:

sudo cachepot-dist worker --config worker.conf


As with the scheduler, if the build worker fails to start, you may need to set RUST_LOG=trace to get useful diagnostics (or, for less noisy logs, RUST_LOG=cachepot=trace,cachepot-dist=trace).

Configure a client

A client uses cachepot to wrap compile commands, communicates with the scheduler to find available build workers, and communicates with build workers to execute the compiles and receive the results.

Clients that are not targeting linux64 either need the icecc-create-env script or must be provided with a toolchain archive. icecc-create-env is part of icecream and is used for packaging toolchains. You can install icecream to get the script (apt install icecc on Ubuntu), or download it from the git repository and place it in your PATH: curl https://raw.githubusercontent.com/icecc/icecream/master/client/icecc-create-env.in > icecc-create-env && chmod +x icecc-create-env. See using custom toolchains.

Create a client config file in ~/.config/cachepot/config (on Linux), ~/Library/Application Support/Parity.cachepot/config (on macOS), or %APPDATA%\Parity\cachepot\config\config (on Windows). A minimal example looks like:

[dist]
# The URL used to connect to the scheduler (should use https, given an ideal
# setup of a HTTPS worker in front of the scheduler)
scheduler_url = "https://192.168.1.1"
# Used for mapping local toolchains to remote cross-compile toolchains. Empty in
# this example where the client and build worker are both Linux.
toolchains = []
# Size of the local toolchain cache, in bytes (5GB here, 10GB if unspecified).
toolchain_cache_size = 5368709120

[dist.auth]
type = "token"
# This should match the client_auth section of the scheduler config.
token = "my client token"
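
When client and build worker differ in platform, the toolchains list above maps local compilers to packaged archives. A hypothetical sketch, assuming cachepot keeps the path_override toolchain format of its sccache lineage (all paths are illustrative):

```
[[dist.toolchains]]
type = "path_override"
# Local compiler to replace when compiling remotely.
compiler_executable = "/usr/bin/clang"
# Archive containing the packaged toolchain (e.g. built with icecc-create-env).
archive = "/home/me/toolchains/clang.tar.gz"
# Path of the compiler inside the archive.
archive_compiler_executable = "/usr/bin/clang"
```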


Clients using Mozilla build workers should configure their dist.auth section as follows:

[dist.auth]
type = "mozilla"


And retrieve a token from the Mozilla identity service by running cachepot --dist-auth and following the instructions. Completing this process will retrieve and cache a token valid for 7 days.

If cachepot was running before you changed the configuration, make sure to run cachepot --stop-coordinator and cachepot --start-coordinator.

You can check the status with cachepot --dist-status; it should print something like:

TODO


Threat model

By definition, PRs can contain arbitrary code. In the Rust ecosystem it is common for custom code, in the form of proc_macros, to run as part of the compilation process. As a consequence, measures must be taken to limit the potential fallout.

Assumptions

A single rustc invocation does not require any kind of internet access. This precludes any proc_macros that perform web- or socket-based queries from working with cachepot.

Goals

The goal of cachepot is to provide a secure compilation and artifact caching system in which build workers deliver artifacts both securely and fast, and which, where possible, increases the amount of cacheable computation without weakening the security precautions. A set of inputs is derived from a compiler invocation (i.e. rustc) and computed on the remote worker; the crucial part is to provide a robust mapping from those input sets to cached compile artifacts in an efficient manner.

Guarantees

For a given set of inputs, the user should get the appropriate cached artifact that was created by an equivalent command line invocation of the compiler, minus some path prefix changes.
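
As an illustration of this guarantee (not cachepot's actual key derivation), remapping the user-specific path prefix before hashing makes two equivalent invocations land on the same cache entry:

```shell
# Illustrative sketch: once the user-specific path prefix is remapped,
# equivalent invocations yield the same cache key.
normalize() {
    # Stand-in for a path-prefix remap (cf. rustc's --remap-path-prefix).
    echo "$1" | sed 's|^/home/[^/]*/project|<prefix>|'
}
key_a="$(normalize "/home/alice/project/src/lib.rs" | sha256sum | cut -d' ' -f1)"
key_b="$(normalize "/home/bob/project/src/lib.rs" | sha256sum | cut -d' ' -f1)"
[ "$key_a" = "$key_b" ] && echo "same cache key"
```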

Sandbox

The rustc invocation on the cachepot server must never have access to the host environment or storage.

Current

cachepot has built-in support for bubblewrap (via the bwrap binary) and Docker; bubblewrap is the preferred choice.

Hardening

Future considerations include adding KVM-based sandboxing for further hardening, e.g. Quark, Kata Containers, or Firecracker.

Cache poisoning

Compiler invocations must be independent of one another, such that no (potentially malicious) invocation can lead to incorrect artifacts being delivered. It must be impossible to modify existing artifacts.

TODO

Hardening

Ensure the hash is verified on the worker side, so that the client has no influence over the hash calculation.

TODO

Container poisoning

Proper measures should be introduced to prevent containers from being poisoned between runs.

Current Measure

Use an overlay filesystem with bubblewrap, or ephemeral containers with Docker. Containers, and their storage, are never re-used.

While we attempt to upstream as many changes as possible back to sccache, there is no guarantee they are appropriate for the upstream project, which is used for the Firefox builds and might have different requirements.

Priorities

1. Linux x86-64 first
2. Make paritytech/substrate and paritytech/polkadot work
3. Investigate performance bottlenecks

Linux x86-64 first

Today most machines running as servers are x86_64 Linux machines, while clients might run macOS or Windows. We are focusing on Linux first and try not to break the existing client-side support for macOS and Windows. The server side will stay x86_64 Linux only; cross-compilation is supported via (cross-)toolchains.

Performance Bottlenecks

The lookup keys are based on hashes that include timestamps and paths, so the re-usability of cached artifacts is very limited and the cache is ultimately not shared. There are of course various other performance topics that will be addressed, but they are not necessarily part of this priority item.
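
To see why, a toy sketch (not cachepot's actual hashing): once a timestamp is part of the hashed input, two otherwise identical invocations never share a key:

```shell
# Illustrative sketch: folding a timestamp into the hashed inputs makes
# every build a cache miss, because no two invocations hash to the same key.
input="rustc --edition 2018 src/lib.rs"
key1="$(printf '%s %s' "$input" "$(date +%s%N)" | sha256sum | cut -d' ' -f1)"
key2="$(printf '%s %s' "$input" "$(date +%s%N)" | sha256sum | cut -d' ' -f1)"
[ "$key1" != "$key2" ] && echo "cache miss: keys differ"
```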

The biggest topic, yet to be specified in detail, is the introduction of multi-layer caches with different trust levels. E.g., a CI cluster could warm trusted caches every night; these could then be combined with a per-user cache for local repeated compiles.

Available Configuration Options

file

[dist]
# where to find the scheduler
scheduler_url = "http://1.2.3.4:10600"
# a set of prepackaged toolchains
toolchains = []
# the maximum size of the toolchain cache in bytes
toolchain_cache_size = 5368709120
cache_dir = "/home/user/.cache/cachepot-dist-client"

[dist.auth]
type = "token"
token = "secrettoken"

# [cache.azure]
# currently does not appear to work

[cache.disk]
dir = "/tmp/.cache/cachepot"
size = 7516192768 # 7 GiBytes

[cache.gcs]
# optional url
url = "..."
cred_path = "/psst/secret/cred"
bucket = "bucket"

[cache.memcached]
url = "..."

[cache.redis]
url = "redis://user:passwd@1.2.3.4:6379/1"

[cache.s3]
bucket = "name"
endpoint = "s3-us-east-1.amazonaws.com"
use_ssl = true


env

Any setting from the file-based configuration is overruled by the corresponding environment variable.

misc

• CACHEPOT_ALLOW_CORE_DUMPS to enable core dumps by the server
• CACHEPOT_CONF configuration file path
• CACHEPOT_CACHED_CONF
• CACHEPOT_IDLE_TIMEOUT how long the local daemon process waits for more client requests before exiting
• CACHEPOT_STARTUP_NOTIFY specify a path to a socket which will be used for server completion notification
• CACHEPOT_MAX_FRAME_LENGTH how much data can be transferred between client and server
• CACHEPOT_NO_DAEMON set to 1 to disable putting the server to the background
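
For example, a one-off configuration can be pinned entirely from the environment (values are illustrative; that 0 disables the idle timeout is an assumption carried over from the sccache lineage):

```shell
# Illustrative overrides; each takes precedence over the config file.
export CACHEPOT_NO_DAEMON=1        # keep the coordinator in the foreground
export CACHEPOT_IDLE_TIMEOUT=0     # assumption: 0 means never exit on idle
export CACHEPOT_DIR=/tmp/cachepot  # local on-disk artifact cache directory
export CACHEPOT_CACHE_SIZE=10G     # cap the local on-disk cache
```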

cache configs

disk

• CACHEPOT_DIR local on disk artifact cache directory
• CACHEPOT_CACHE_SIZE maximum size of the local on disk cache i.e. 10G

s3 compatible

• CACHEPOT_BUCKET s3 bucket to be used
• CACHEPOT_ENDPOINT s3 endpoint
• CACHEPOT_REGION s3 region
• CACHEPOT_S3_USE_SSL s3 endpoint requires TLS, set this to true

The endpoint used then becomes ${CACHEPOT_BUCKET}.s3-${CACHEPOT_REGION}.amazonaws.com. If CACHEPOT_REGION is undefined, it defaults to us-east-1.
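
The derivation can be sketched in shell (plain variable expansion; the real logic lives inside cachepot):

```shell
# Assemble the effective S3 endpoint from the env vars.
CACHEPOT_BUCKET="name"
CACHEPOT_REGION="${CACHEPOT_REGION:-us-east-1}"   # fall back to the default region
echo "${CACHEPOT_BUCKET}.s3-${CACHEPOT_REGION}.amazonaws.com"
```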

redis

• CACHEPOT_REDIS full redis url, including auth and access token/passwd

The full url appears then as redis://user:passwd@1.2.3.4:6379/1.

memcached

• CACHEPOT_MEMCACHED memcached url

gcs

• CACHEPOT_GCS_BUCKET
• CACHEPOT_GCS_CREDENTIALS_URL
• CACHEPOT_GCS_KEY_PATH
• CACHEPOT_GCS_RW_MODE

azure

• CACHEPOT_AZURE_CONNECTION_STRING

FAQ

Q: Why not bazel? A: Bazel makes a few very opinionated assumptions, such as hermetic builds being a given, which is a good property in general but non-trivial to achieve for now. Another issue is that bazel is very dominant: it assumes it is the entry tool, whereas we want to stick with cargo while maintaining the option to plug in sccache/cachepot.

Q: Why not buildbarn? A: It is the backend caching infra for bazel.

Q: Why not synchronicity? A: It's at a very early, experimental stage and uses components with low activity and low community involvement.