Hi!
Having completed the first cycle of 2026, it’s time to start thinking about 2026C2. Here’s the list of things that we plan to focus on.
Runners 
The more we invested in building our caching infrastructure and the product, the more obvious it became that we’d need to introduce the runner building block into our platform. First, to colocate our caching infrastructure with the compute. Second, to build features with a developer experience that’s only possible when we can bring the compute environment with us. This investment aligns well with our plans to shift more value capture towards infrastructure, so we can open-source most of our software and shape Tuist into the best and most open dev productivity infrastructure in the space.
If you are unhappy with your current CI solution, are about to renew your terms, or are just intrigued by this, send us an email at [email protected] and you’ll be one of the first to have access to this :).
Gradle parity 
If one year ago you’d told us we’d bring caching and build and test insights to Gradle in a couple of weeks, we wouldn’t have believed you. But guess what: our technological decisions and past investments, combined with the capabilities of coding agents, made that possible. We believe we’ll just need a couple of extra weeks to bring bundle insights and previews to Android too, and we’ll have full parity. We’ll invest in content marketing and start working with existing customers, giving their Android teams early access so we can gather feedback and surpass the value they might already be getting from services like Develocity. One thing became clear throughout this work: teams don’t like self-hosting, a model Develocity seems very strict about, and while they present a lot of data, the UI/UX is a bit of a Frankenstein. We think we can and must do better, and build interfaces through our APIs (REST and MCP) and the CLI to turn coding agents into the best productivity partner.
We’ll publish a blog post in the coming days announcing the features and showing them in action in a popular open-source app, so you get a better sense of what adoption looks like and the value you can get from it.
Bring the Swift registry closer to you 
We built a cache node infrastructure to reduce network latency, enabled it for module, Xcode, and Gradle caches, and in the past cycle, moved our Swift registry to it as well. In this cycle, we’ll test the registry move and roll it out to all our users, so their package resolution is faster than ever. Remember! The registry is available without a Tuist account, so if you are annoyed by your packages taking a long time to resolve and caching across CI environments doesn’t help because you interact with an object storage service, you might consider adopting it. It just takes a few commands.
Test attachments 
As you might know, we collect and present (through the dashboard and APIs) the results of your test runs. Most recently, we extended this to include crash reports, so you (and your agents) have access to all the information necessary to debug and fix tests. This cycle, we want to go one step further and integrate with the runtime attachment APIs to persist, store, and present attachments. Thanks to this, you can, among other things, debug failing snapshot tests or include attachments in your failing tests that help you understand the execution, without having to build infrastructure for that. We provide the infrastructure; you focus on your projects and tests. Deal?
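On the test side, this is the kind of attachment Apple’s XCTest already lets you produce today with `XCTAttachment`; a feature like the one described above would collect and surface these. A minimal sketch (the `renderProfileCard()` helper and the expected size are hypothetical, purely for illustration):

```swift
import XCTest

final class SnapshotTests: XCTestCase {
    func testRendersProfileCard() throws {
        // Hypothetical helper that renders the view under test into an image.
        let rendered = renderProfileCard()

        // Attach the rendered image so it travels with the test result.
        let attachment = XCTAttachment(image: rendered)
        attachment.name = "profile-card-actual"
        // Keep the attachment even if the test passes; the default
        // (.deleteOnSuccess) discards attachments of passing tests.
        attachment.lifetime = .keepAlways
        add(attachment)

        XCTAssertEqual(rendered.size, CGSize(width: 320, height: 120))
    }
}
```

When a snapshot assertion fails, having the actual rendered image stored alongside the result is usually all you need to diagnose it without re-running locally.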
Sharding support 
In our conversations with organizations that work at scale, one thing becomes obvious: they’ve all eventually built a sharding solution that runs before project generation or xcodebuild, filtering the graph down to what needs to compile and test. We want to take that responsibility away from our users and combine it with the data our API has access to, so we can shard dynamically. You’ll have access to sharding strategies that you can configure declaratively, and we’ll take care of the rest.
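To make the idea concrete, a declarative sharding configuration could express something like the sketch below. Every name here is hypothetical; none of these types are a shipped Tuist API, this is only a model of what a strategy could declare:

```swift
// Hypothetical sketch of a declarative sharding configuration.
// None of these types exist in Tuist today.
enum ShardingStrategy {
    /// Split test targets evenly across a fixed number of shards.
    case even(shards: Int)
    /// Balance shards using historical test-duration data from the server.
    case byTestTime(shards: Int)
}

struct ShardingConfiguration {
    let strategy: ShardingStrategy
    /// Targets that should run on every shard regardless of the split.
    let pinned: [String]
}

let configuration = ShardingConfiguration(
    strategy: .byTestTime(shards: 4),
    pinned: ["SmokeTests"]
)
```

The appeal of the time-based strategy is that the server already sees every run’s durations, so shards can be rebalanced continuously without anyone maintaining a manifest of target-to-shard assignments.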
Closing words
You might have noticed: with the expansion to Android and our step down into infrastructure with runners, our appetite for becoming the infrastructure for productive agentic development keeps growing, and I strongly believe we are well-equipped to make that happen. We’ve designed our caching system to be layered and completely decoupled from where the compute happens. We’ll soon bring the compute environment and integrate it with your current workflows (CI) and the workflows AI will soon enable (fix a bug or build a preview on the go), and we’ll top it up with the right runtime telemetry, turning your coding agents into doctors that progressively make your setup the most productive one.
As we expand into new build systems, we’ll take a community approach to building the company and work with existing toolchains to help them advance their capabilities and gain the caching and telemetry features necessary for modern agentic development.