Happy new year everyone!
2026 is just getting started, and we have an ambitious year ahead. If you haven’t read our end-of-year 2025 blog post, I recommend giving it a read. As you know, we work in 6-week cycles, and the first one has just started, so I’d like to share what we plan to complete by the end of it. The cycle runs from January 5th to February 15th.
Cache Infrastructure Evolution
Behind the scenes, @cschmatzler is driving several improvements to how our cache infrastructure works.
- Registry migration to cache nodes: Currently, registry packages are streamed through our main servers, which adds latency and occasionally causes timeouts for larger packages. By moving this to the same cache node infrastructure we use for module binaries, downloads become faster and more reliable, especially for teams with cache nodes deployed closer to their CI runners.
- Module cache node productization: We’ve been running self-hosted cache nodes for several customers, but the setup process has been somewhat bespoke. We’re investing in making this a proper, documented, self-serve product. For enterprise teams with strict data locality requirements or who want cache nodes in the same network as their runners, this is a significant unlock.
Why we’re doing this now: As we scale to larger customers (teams running thousands of builds per day), infrastructure performance becomes a differentiator. A cache that takes 30 seconds to download defeats the purpose of caching. We’re also seeing more enterprise procurement conversations where self-hosted options aren’t just nice-to-have, they’re requirements. Building this out properly positions us for the deals we’re working on and the ones coming down the pipeline.
At the end of the day, you can’t beat physics. The closer your cache is to where your builds run, the faster those builds complete. We’re thinking a lot about what that means for the future of our infrastructure, and this cycle’s work lays the groundwork for where we’re heading.
Android Cache
This is a big one. We’re officially extending our cache infrastructure to support Gradle builds, marking Tuist’s first step beyond the Apple ecosystem.
Why now? Over the past year, we’ve had countless conversations with engineering leaders who use Tuist for iOS and ask the same question: “Can you do this for our Android builds too?” Many companies have separate iOS and Android platform teams facing the exact same problems: slow builds, wasted CI minutes, frustrated developers. Yet they’re forced to use completely different tooling for each platform.
The Gradle ecosystem has Develocity (formerly Gradle Enterprise), but it comes with enterprise pricing that puts it out of reach for many teams. We believe there’s an opportunity to bring the same pragmatic, developer-friendly approach we’ve taken with iOS caching to the Android world. Our cache node infrastructure is protocol-agnostic at its core. We’ve designed it to handle artifact storage efficiently, and Gradle’s remote build cache protocol is well-documented.
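To make the protocol point concrete: Gradle’s remote build cache speaks plain HTTP (GET and PUT of content-addressed artifacts), so pointing a build at a compatible cache is a few lines in `settings.gradle.kts`. A hedged sketch — the endpoint URL and environment variable names below are hypothetical placeholders, not a real Tuist endpoint:

```kotlin
// settings.gradle.kts
// Hypothetical example: the URL and env var names are placeholders.
buildCache {
    remote<HttpBuildCache> {
        url = uri("https://cache.example.invalid/cache/")
        // Only push new entries from CI; local builds read-only.
        isPush = System.getenv("CI") != null
        credentials {
            username = System.getenv("CACHE_USER")
            password = System.getenv("CACHE_TOKEN")
        }
    }
}
```

Because the protocol is this simple and stable, any server that can store and serve blobs by key can act as a Gradle remote cache, which is exactly what makes a protocol-agnostic cache node a natural fit.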
For teams, this means unified build infrastructure across platforms. One dashboard to monitor build health, one set of cache nodes to manage, one vendor relationship. For mobile organizations, this simplifies operations significantly and gives platform teams a common language around build performance.
We’re also taking this opportunity to revisit our documentation structure using the Diataxis framework. As we expand to serve both iOS and Android developers, we need docs that are organized by what you’re trying to accomplish, not by which platform you’re on. This sets us up to scale documentation as we grow.
Flakiness Detection
If you’ve worked on a mobile codebase of any reasonable size, you know the pain of flaky tests. That one test that fails on CI but passes locally. The snapshot test that breaks depending on the simulator. The integration test that times out once every twenty runs. Teams develop learned helplessness: they stop trusting their test suites, they hit “re-run” reflexively, and eventually they stop writing certain kinds of tests altogether.
Why this matters for us: We already collect test results through our insights pipeline. We see every test run, every pass, every failure. We have the data; we just haven’t been surfacing the patterns. This cycle, @marekfort is building the server-side logic to automatically identify tests that exhibit flaky behavior: tests that both pass and fail for the same code, tests with unusually high variance in duration, tests that fail disproportionately on certain CI configurations.
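As a rough illustration of the first signal described above (not Tuist’s actual implementation), a minimal sketch might flag a test as flaky when the same commit produced both passing and failing runs:

```kotlin
// Minimal flakiness heuristic sketch. TestRun and detectFlaky are
// hypothetical names for illustration only.
data class TestRun(val testId: String, val commitSha: String, val passed: Boolean)

fun detectFlaky(runs: List<TestRun>): Set<String> =
    runs.groupBy { it.testId to it.commitSha }   // same test, same code
        .filterValues { group ->
            // Flaky: both a pass and a failure for identical code.
            group.any { it.passed } && group.any { !it.passed }
        }
        .keys
        .map { it.first }
        .toSet()
```

A production version would layer on the other signals mentioned above, such as duration variance and failure rates per CI configuration, and require a minimum number of runs before flagging anything.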
The UI is already in place from previous work; now we’re connecting it to real detection algorithms. Once live, you’ll be able to see which tests are dragging down your team’s productivity and confidence. You can quarantine them, fix them, or at least know to ignore their failures during code review.
From a competitive standpoint, other companies already offer flaky test detection, and it’s something teams actively evaluate when choosing tooling. For Tuist to be a credible player in the build insights space, this is table stakes. But more importantly, it’s the right thing to build. It directly addresses one of the most frustrating aspects of automation.
Feature Pages & Landing Page Redesign
Here’s a problem we keep running into: when we write a blog post about, say, selective testing, where do we link to? The docs are comprehensive, but they’re docs. They explain how to use a feature, not why a team should care about it. When an engineering manager is trying to convince their director to invest in build tooling, they don’t want to send them to a technical reference page.
@asmit is building dedicated feature pages for our marketing site: standalone pages for caching, insights, previews, and our upcoming Android support. These pages explain the value proposition, show the UI, include customer quotes and metrics, and make it easy for someone to understand what Tuist offers without diving into documentation.
Why this matters beyond marketing: These pages are SEO gold. When someone searches “iOS build caching” or “Xcode CI optimization,” we want Tuist to show up. But increasingly, discoverability isn’t just about Google. It’s about LLMs. When developers ask ChatGPT or Claude for build tooling recommendations, those models are trained on web content. Having clear, well-structured feature pages improves our chances of being surfaced in those conversations.
This also prepares us for the Android launch. We’ll need a place to announce and explain our Gradle support, and having a template for feature pages means we can ship that quickly and consistently.
The home page feature section is also getting a redesign. As we add more capabilities, we risk the page becoming an endless scroll. We’re exploring grouped layouts (organizing features into logical areas like “Build Acceleration,” “Quality Insights,” and “Developer Experience”) so visitors can quickly understand the breadth of what Tuist offers without being overwhelmed.
That’s the cycle. It’s ambitious, maybe too ambitious, but that’s how we like to operate. Some of this will slip, some will expand, and we’ll learn things along the way that change our priorities. That’s the nature of building a product.
As always, if you have feedback on any of these initiatives or want to be an early tester for flaky test detection, drop a message in the community Slack. We build this with you, not just for you.