Need/Problem
Teams and developers make decisions that can impact their projects’ metrics, for example changing CI providers or applying a build setting across projects. While they can see the impact through our dashboard, and will eventually be notified about changes in trends (e.g. in cache effectiveness), they have no mechanism to record and visualize which decisions were made and when, which makes it harder to establish correlations.
Motivation
I’ve noticed that error tracking platforms have such primitives, often called markers or deploy markers, which are commonly used to flag a deployment or release of a new version of the software and correlate error trends with it. AppSignal, Sentry, and Datadog all offer similar features. I think this idea would be useful in the context of project health, since there are many decisions that can impact the metrics we observe:
- A new version of the build system
- New hardware rolled out to developers
- A new build configuration
Detailed design
We can start simple and non-opinionated by introducing the concept of markers. A marker is an event bound to a project with a date and title associated with it.
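The primitive is small enough to capture in a handful of fields. A minimal sketch, assuming only the attributes mentioned above (the field names are illustrative, not a committed schema):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of the marker primitive: an event bound to a
# project, with a date and a human-readable title.
@dataclass(frozen=True)
class Marker:
    project: str  # handle of the project the marker belongs to
    date: date    # when the decision took effect
    title: str    # e.g. "Rolled out new hardware to developers"

marker = Marker(
    project="my-org/my-app",
    date=date(2024, 6, 1),
    title="Switched CI provider",
)
```

Keeping the model this small is what makes it non-opinionated: anything richer (types, descriptions, links) can be layered on later once we see how teams use it.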
I’d suggest that we create three CRUD interfaces for managing them:
- REST API: /api/markers
- CLI: tuist markers list/create/update/remove
- Dashboard:
  - Add a marker
  - Remove a marker
  - See the markers in the graphs
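To illustrate the dashboard side, here is a hedged sketch of how markers could be filtered down to the window a graph is rendering, so they can be drawn as annotations on the metric series (the helper name and dict shape are assumptions, not a defined API):

```python
from datetime import date

# Hypothetical helper: given the time window a graph is plotting,
# return the markers that should be drawn on it.
def markers_in_window(markers, start, end):
    return [m for m in markers if start <= m["date"] <= end]

markers = [
    {"date": date(2024, 5, 10), "title": "New build configuration"},
    {"date": date(2024, 7, 2), "title": "Xcode 16 rollout"},
]

# A graph plotting June through August would only show the July marker.
visible = markers_in_window(markers, date(2024, 6, 1), date(2024, 8, 1))
```

Because markers are just dated events, every graph can reuse the same filtering logic regardless of which metric it plots.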
Drawbacks
One drawback I can think of is that being weakly opinionated here could lead teams to create markers for every decision, not just the significant ones, on the assumption that anything could impact their metrics. As a result, graphs would end up cluttered.
We can perhaps start with a weakly opinionated model and then iterate with some opinions once we have a better understanding of the use cases. For example, in error tracking, marking a deploy is kind of an obvious one. Alternatively, we can gather feedback and come up with marker types that serve as guidance.
Alternatives
We could build a system that detects these events automatically (for example, the first time a new version of Xcode appears). However, I think automatic detection should be an extension of the implementation rather than the default. Once we have the primitive, we can add an “auto-marking” feature that tries to be smart about detection. AppSignal and others do something similar with deployments, since the commit is included as part of the information when reporting errors.
Adoption strategy
I expect most developers to use this through the dashboard initially, but with the CLI interface in place and more developers using coding agents, agents will likely start suggesting the creation of markers through the CLI. We’ll write docs and a blog post introducing the concept so developers start getting familiar with it.
How we teach this
As mentioned above, we can put blog posts, videos, and docs out there for people to get familiar with the idea. We can also connect it with the same concept in error tracking platforms so that developers can learn incrementally from an idea they are already familiar with.
Cross-platform availability
This feature can be available across all platforms that Tuist supports. While the examples above reference Xcode, Android teams that end up using Tuist can leverage markers in exactly the same way to track decisions like Gradle version updates or changes to their build configuration.