We’ve invested in building cache infrastructure designed so teams can reduce latency by bringing the cache closer to their compute. The infrastructure was purpose-built for one use case: speeding up builds by reusing artifacts from previous ones. That said, the primitives we’ve created are broadly useful, and it’s worth asking whether we should extend the interface so developers can plug into it from other contexts.
Below are some ideas, in no particular order, that we might want to explore.
Caching Through Decoration
The idea of reusing artifacts from previous workflows is not exclusive to build systems. Other use cases, such as dependency resolution, could benefit from the same optimization, ideally with a solution that works consistently across environments.
Drawing inspiration from Mise’s pattern of adding annotations to scripts (for example, to declare a CLI and get arguments parsed and validated), we can bring caching capabilities to scripted workflows. Take the following script to resolve dependencies:
#!/usr/bin/env tuist exec bash
#CACHE input "Package.resolved" hash=content
#CACHE output ".build/"
tuist install
Using a shebang and structured comments, developers can bring caching to their existing scripts. In the example above, we hash the input and restore or store the output artifacts through our cache infrastructure. This works for any scriptable runtime. For example, in Ruby:
#!/usr/bin/env tuist exec ruby
#CACHE input "Package.resolved" hash=content
#CACHE output ".build/"
system("tuist install")
GitHub Actions has a similar concept with its cache actions, but it is tightly coupled to that platform, making it difficult to reuse artifacts in other environments. Nx also has comparable capabilities, but it requires users to declare an automation graph using its DSL. Bringing caching closer to scripts through decoration is a smoother integration: it requires only a few comments in existing scripts and perhaps some light refactoring to make those scripts more atomic.
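To make the mechanism concrete, here is a rough sketch of the parsing step such a wrapper could perform before hashing inputs and restoring or uploading outputs. Everything here is illustrative: the annotation grammar is the one shown above, but the extraction logic and file paths are assumptions, not a shipped interface.

```shell
#!/usr/bin/env bash
# Illustrative only: extract the #CACHE annotations a wrapper like `tuist exec`
# would need before deciding whether to restore outputs or run the script.

# A sample annotated script, written inline to keep the sketch self-contained.
cat > /tmp/resolve.sh <<'EOF'
#CACHE input "Package.resolved" hash=content
#CACHE output ".build/"
tuist install
EOF

# Pull the declared input and output paths out of the structured comments.
INPUT=$(grep '^#CACHE input' /tmp/resolve.sh | cut -d'"' -f2)
OUTPUT=$(grep '^#CACHE output' /tmp/resolve.sh | cut -d'"' -f2)
echo "input=$INPUT output=$OUTPUT"
```

From there, the wrapper would hash the declared input, look the hash up in the cache, and either restore the output or run the script and upload it.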
Portability. The portability of this approach depends on how much business logic is baked into the script. A script that just runs tuist install is highly portable; one that encodes project-specific paths, environment assumptions, or conditional logic is not. Portability is a property of the script, not of the mechanism. That said, the decoration interface at least makes the caching intent explicit and environment-agnostic, which is an improvement over solutions that hard-code caching logic inside CI platform configuration.
Mise integration. Mise has tasks that activate the right tools and add features like argument parsing via usage. Our solution would complement this nicely: adding tuist as a Mise dependency and making a few script changes is all that’s needed to unlock caching superpowers for projects already using Mise. This is a strong basis for a marketing message, since many projects use Mise tasks today.
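As a sketch of what that integration could look like, here is a hypothetical mise.toml. The task name and script path are invented for illustration; the only thing assumed from Mise itself is its existing tools and tasks configuration:

```toml
# Hypothetical project configuration: Mise provisions tuist, and an existing
# task script carries the #CACHE annotations shown earlier.
[tools]
tuist = "latest"

[tasks.install]
description = "Resolve dependencies, with results cached via tuist"
run = "./scripts/install.sh"
```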
Caching Through a Shell-Based Runtime API
Decoration solves a lot of use cases, but some scripts need access to caching primitives at runtime, for example to implement control flow based on whether a cached result exists. For this, we can expose an interface through the CLI so any script can shell out to it and parse the exit code to drive logic.
# Key-value store operations
tuist cas keys list # list recent key-value mappings
tuist cas keys get <key> # look up what a key resolves to
tuist cas keys set <key> <value> # create/update a mapping
tuist cas keys delete <key> # remove a mapping
$ tuist cas keys get a3f9c1e
Key: a3f9c1e8b2d4f6a8
Value: d8b2f1a4c6e8b0d2
Created: 2026-03-09 13:45:00
# Artifact (blob) operations
tuist cas artifacts get <hash> # inspect metadata
tuist cas artifacts download <hash> <path> # download blob to a local path
tuist cas artifacts push <path> # upload content, returns hash
tuist cas artifacts delete <hash> # remove
tuist cas artifacts list # list stored artifacts
$ tuist cas artifacts get d8b2f1a
Hash: d8b2f1a4c6e8b0d2
Size: 12.4 MB
Type: xcframework
Name: TuistKit
Stored: 2026-03-09 13:45:00
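Because every operation communicates success or failure through its exit code, scripts can wrap the CLI in small reusable helpers. A hypothetical example, assuming the --json flag used later in this document and jq for parsing:

```shell
#!/usr/bin/env bash
# Hypothetical helper around the proposed CLI: prints the stored value and
# returns 0 on a cache hit, returns 1 on a miss. Assumes --json output and jq.
cas_lookup() {
  local result
  result=$(tuist cas keys get "$1" --json 2>/dev/null) || return 1
  printf '%s\n' "$result" | jq -r '.value'
}

# Callers branch on the exit code, for example:
#   if value=$(cas_lookup "$INPUT_HASH"); then restore; else build; fi
```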
Here is a concrete example: building DocC documentation for a large target can take over 10 minutes. With this API, we can turn that CPU-bound operation into a network round-trip when the inputs haven’t changed:
#!/bin/bash
# Hash every Swift source and DocC catalog file that feeds the docs build.
INPUT_HASH=$(cat $(find Sources/TuistKit -name "*.swift" | sort) \
  $(find Sources/TuistKit/TuistKit.docc -type f | sort) | shasum | cut -d' ' -f1)
if RESULT=$(tuist cas keys get "$INPUT_HASH" --json 2>/dev/null); then
ARTIFACT_HASH=$(echo "$RESULT" | jq -r '.value')
tuist cas artifacts download "$ARTIFACT_HASH" docs.doccarchive.tar.gz
tar xzf docs.doccarchive.tar.gz
rm docs.doccarchive.tar.gz
echo "Restored docs from cache."
else
xcodebuild docbuild \
-workspace Tuist.xcworkspace \
-scheme TuistKit \
-derivedDataPath .build/
tar czf docs.doccarchive.tar.gz .build/Build/Products/Debug/TuistKit.doccarchive
ARTIFACT_HASH=$(tuist cas artifacts push docs.doccarchive.tar.gz --json | jq -r '.hash')
tuist cas keys set "$INPUT_HASH" "$ARTIFACT_HASH"
rm docs.doccarchive.tar.gz
echo "Docs built and cached."
fi
A less obvious but important benefit of this API is that it lets teams decouple skip logic from their CI pipelines. Today, a lot of “skip this job if these files haven’t changed” logic is implemented using platform-specific features: GitHub Actions path filters, custom hash-and-compare steps, or bespoke shell scripts embedded in YAML. That logic is hard to test, hard to reuse across pipelines, and completely lost when a team switches CI providers. With a runtime cache API, the same skip logic can live in a plain script that runs identically on any machine. The optimization is no longer a CI concern; it becomes part of the workflow itself.
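As a sketch, a provider-neutral skip guard could look like the following. The key naming scheme is invented for illustration, and the hashed input is a placeholder string so the sketch stays self-contained; a real script would hash the relevant files, as in the DocC example above.

```shell
#!/usr/bin/env bash
# Hypothetical CI skip guard built on the proposed `tuist cas` key-value API.
# It behaves identically on any CI provider or on a developer's machine.

# Hash whatever determines if the job must run (placeholder input here).
INPUT_HASH=$(printf '%s' "lint-inputs-placeholder" | shasum | cut -d' ' -f1)
KEY="lint-passed-$INPUT_HASH"

# On a hit, a previous run already succeeded for these inputs: skip the work.
if tuist cas keys get "$KEY" >/dev/null 2>&1; then
  DECISION="skip"
else
  DECISION="run"
  # The real job would go here, recording success afterwards, e.g.:
  #   swiftlint lint --strict && tuist cas keys set "$KEY" "$(date +%s)"
fi
echo "$DECISION"
```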
Caching Through Native Bindings
This is a longer-term direction. Once the shell-based API has matured, we can consider providing native bindings for popular runtimes so teams can build tighter integrations without needing system processes or a separately installed CLI. The DocC example above would become something like this with Node.js bindings:
import { cas } from "@tuist/cas";
import { hash } from "@tuist/cas/hash";
import { execSync } from "node:child_process";

const inputHash = await hash.files([
  "Sources/TuistKit/**/*.swift",
  "Sources/TuistKit/TuistKit.docc/**/*",
]);

const entry = await cas.keys.get(inputHash);
if (entry) {
  await cas.artifacts.download(entry.value, ".build/TuistKit.doccarchive", {
    extract: true,
  });
  console.log("Restored docs from cache.");
} else {
  // Run the build synchronously so we only cache after it has finished.
  execSync(
    "xcodebuild docbuild -workspace Tuist.xcworkspace -scheme TuistKit -derivedDataPath .build/",
    { stdio: "inherit" },
  );
  const { hash: artifactHash } = await cas.artifacts.push(
    ".build/Build/Products/Debug/TuistKit.doccarchive",
  );
  await cas.keys.set(inputHash, artifactHash);
  console.log("Docs built and cached.");
}
Additional Considerations
Telemetry and Observability
If we expand caching beyond build systems, we need to invest in telemetry and UI to match. Teams will need to observe how the cache is being used, purge entries, and understand how usage distributes across different scripts and workflows. The interface changes are only part of the investment; the observability layer needs to keep pace.
A Narrow Waist for Build Infrastructure
Rather than positioning this purely as a Tuist feature, we could frame it as an infrastructure-agnostic interface between projects and their cache backends. This approach has strong precedent: OpenTelemetry, Prometheus, and Kubernetes all succeeded by defining standards that allowed users to choose their own providers.
Concretely, we could influence tools like Mise to treat caching as a first-class concept in their task runner, with Tuist as one of several pluggable backends. This would make Mise a meaningful go-to-market channel for us while benefiting the broader ecosystem. It does require giving up some branding control, but the reach and credibility that come with being a standards-aligned provider could more than compensate.