Tuist Middleware or Custom Commands

Goal:

We want to run custom linting rules on project structure using tuist graph, which will ensure that our dependencies are set up correctly. Certain dependencies must not be imported in particular modules, to avoid issues for downstream consumers, etc. There are many other applications as well (like structuring a white-label project for a specific app, or generating code).

Proposals:

  1. Middlewares - A Tuist manifest awaitable handler that Tuist executes before each generate command. The handler returns success or throws an error; if it errors, Tuist prints the error and terminates the process.

let project = Project(
    ....,
    middleware: { graph in
        /// We would probably also want to import SwiftSyntax or other internal libraries
        try await validateGraph(graph)
    }
)

  2. Custom Tuist Commands - Allow consumers to implement their own custom Tuist commands (something like plugins, but it wouldn't need to be a separate Swift package of its own).

  3. Pre/Post Hooks - At the call site, we could pass the command a custom script hook that the command calls before or after the (generate) command is executed.


Hey @shahzadmajeed :wave:

Thanks for bringing this up :slightly_smiling_face:

The properties of Project must be Codable, so passing in a function is a no-go.

As for 2. and 3., we have considered both options in the past, but I personally don’t see a ton of value in having hooks or extending the tuist CLI with custom commands over wrapping tuist and tuist generate in your own scripts that your team would be using.

From the three, pre- and post-generation hooks feel like the best option if we do decide to go ahead with something here, but again, I wonder how much of an improvement that is over having a custom script such as:

# pre-hook
echo "Here do pre-generate work"
tuist generate
# post-hook
echo "Here do post-generate work"

Thanks for quick response Marek.

For scripts/hooks, the question is how do we get access to the graph? Would it be manual, via the graph package? Isn't the project already generated by the time we could use XcodeGraph?

The graph is returned by tuist graph --json, and the Codable data structures to work with it are available in tuist/XcodeGraph (a Swift package with data structures to model Xcode workspaces and projects), so you can pipe the two in a Swift script and perform any operations that you want on the graph.

Mutations are not possible though. This form of dynamism is discouraged because it makes things unpredictable, and that comes with some cascading effects in the Tuist workflows.

It does mean you’d need to load the graph twice. I wonder if we should do the same as we did with tuist share and add a --json option. You could then run tuist generate --json and get the resulting graph as the only output, so you could easily pipe it into further automation scripts.
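To make the piping idea above concrete, here is a rough sketch of a Swift script that reads the JSON emitted by tuist graph --json from stdin and decodes a small slice of it. The MiniGraph/MiniProject types are hypothetical, trimmed-down mirrors for illustration — the real, complete Codable models live in the tuist/XcodeGraph package, and the actual JSON shape may differ:

// graph-check.swift — hedged sketch, not a drop-in tool.
// Usage: tuist graph --json | swift graph-check.swift
import Foundation

// Hypothetical minimal mirror of a few graph fields; the real types
// come from the XcodeGraph package.
struct MiniProject: Decodable {
    let name: String
}

struct MiniGraph: Decodable {
    let name: String
    // Keyed by project path in the real graph, assumed here as strings.
    let projects: [String: MiniProject]
}

let data = FileHandle.standardInput.readDataToEndOfFile()
let graph = try JSONDecoder().decode(MiniGraph.self, from: data)
print("Graph \(graph.name) contains \(graph.projects.count) project(s)")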


I have implemented a bit of dynamism in the project generation process by generating our own dependency graph. I also think this can be useful in some cases, as it makes it faster to fail if the graph has not been set up according to custom project parameters, or for creating custom aggregated schemes (that's one of the things we do with it). I don't think this is too different from, for example, running tuist graph --json and passing the result through an environment variable to tuist generate. The difference is that having it all be one step would, I think, be faster and more intuitive. If it were part of the generation process, then developers would have a guaranteed and standard way to access it.


Making validation convenient is a fair point. I'd not go beyond (Graph) -> Error? though. For example, I wouldn't allow graph mutations, like CocoaPods allowed with hooks.

Enabling this one is a bit involved, but if anyone is up for it, feel free.

  • I propose that a validator is a Swift file with easy access to the graph (e.g., Graph.fromEnv()). Validators can live in a conventional directory structure, and Tuist runs them in order.
  • XcodeGraph can be distributed as a dynamic framework, such that validators can import XcodeGraph.
  • We’ll have to extend the following workflows:
    • Generation will have to compile and run them, so we should make sure we do some caching.
    • Project editing should include a validator so that people can edit them easily.
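To illustrate the proposal above, a validator file could look something like the following sketch. Everything here is hypothetical: Graph.fromEnv() is the accessor suggested in the bullet above (it doesn't exist yet), and the dependency inspection is a crude string check because the real XcodeGraph target/dependency types are richer and vary across versions:

// Tuist/Validators/01_NoUIInCore.swift — hedged sketch of a validator.
// Assumes XcodeGraph is importable (e.g. as a dynamic framework) and
// that the proposed Graph.fromEnv() exists.
import Foundation
import XcodeGraph

let graph = try Graph.fromEnv()

// Example rule: no target named "Core" may depend on one named "UI".
for project in graph.projects.values {
    for target in project.targets {
        guard target.name == "Core" else { continue }
        // Illustrative only: a real check would pattern-match the
        // TargetDependency cases instead of stringifying them.
        let dependsOnUI = target.dependencies
            .map { String(describing: $0) }
            .contains { $0.contains("UI") }
        if dependsOnUI {
            FileHandle.standardError
                .write(Data("Core must not depend on UI\n".utf8))
            exit(1) // a non-zero exit would fail generation
        }
    }
}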

I think (Graph) -> Error? makes sense as a first step so that validations can be implemented in the same process we use today.

How do you determine the order? Does that mean Project.swift will have to provide a list of validators?

This would be ideal; we would like to start making certain dependencies illegal in certain targets, so having access to the graph like this for validation would be very convenient.

You can start with the directory containing them being conventional, for example Tuist/Validators, and then run them in alphabetical order.
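A minimal sketch of what that discovery-and-ordering step could look like inside Tuist, assuming the conventional Tuist/Validators directory: list the files, sort alphabetically (so users control order with prefixes like 01_, 02_), and run each one, failing on the first non-zero exit. Compilation and caching of the validators are elided here:

// Hedged sketch; not how Tuist is implemented today.
import Foundation

let dir = "Tuist/Validators"
let validators = try FileManager.default
    .contentsOfDirectory(atPath: dir)
    .filter { $0.hasSuffix(".swift") }
    .sorted() // alphabetical order = execution order

for validator in validators {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["swift", "\(dir)/\(validator)"]
    try process.run()
    process.waitUntilExit()
    guard process.terminationStatus == 0 else {
        fatalError("Validator \(validator) failed") // abort generation
    }
}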

Thoughts @marekfort ?

I think the Tuist user should determine the order, because sometimes validations are stacked on each other.

I understand the desire to come up with something more scoped, so people don’t do crazy things in their post-generation hooks – but I wonder if it would be more straightforward both from the implementation standpoint and from the user perspective if we added support for post-generation hooks as has been proposed a bunch of times in the past.

The post-generation hook would receive the whole graph as an argument, so you could do your validations. And if there’s a different need than validations, you could do that, too! Obviously, you shouldn’t try to do things like updating files as that would break some functionality like caching. But note that you’d be able to do that with Validators anyway – since validators would be a custom script, you could do anything there and we’d have no direct way to restrict you from that. So, the only reason why validators would be somewhat safer than post-generation hooks is the naming convention … which feels a bit weak to me.

If all validators are in a predefined directory, does that mean they run once the full workspace is generated? Or would they run on each project as it is generated? I ask because I think one of the benefits of Tuist integrating custom validators would be saving time when one of them fails. However, if it first has to generate so it can later validate, then it wouldn't be too different from running tuist generate and then running a validator implemented with the XcodeProj library.

As I explained earlier, we already do a sort of validation on our project by building our own dependency graph and failing if a target is going to be generated with invalid dependencies. This works, but is not very convenient. First of all, we are obviously doing redundant work by building another graph, although this is somewhat inevitable, given it is the only way to get additional data points that Tuist won't provide (i.e. Tuist doesn't know the location of the project file being generated at runtime?). The other thing is the failing: we simply call fatalError, since there apparently isn't any Tuist-provided way to fail. So I think a nice addition that might not be too hard to implement would be a Tuist failing system, similar to how its internal linter works, but designed for custom checks.
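To make the "Tuist failing system" idea more concrete, here is a hypothetical sketch of what such an API could look like, loosely modeled on linter-style issue reporting. None of these types exist in Tuist today; the names and shapes are assumptions for illustration:

// Hedged sketch of a hypothetical validation-failure API.
public struct ValidationIssue: Error {
    public enum Severity { case warning, error }
    public let reason: String
    public let severity: Severity
}

// Throws instead of calling fatalError, so Tuist could catch the
// issue and report it through its normal error output.
public func validate(_ condition: Bool, _ reason: String) throws {
    if !condition {
        throw ValidationIssue(reason: reason, severity: .error)
    }
}

// Usage inside a manifest or validator, instead of fatalError:
// try validate(!coreDependsOnUI, "Core must not depend on UI")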

Those validations (or transformations if we go with @marekfort’s suggestion), would run between the loading of the project in memory, and its conversion into Xcode projects and workspaces. You could achieve something like that by implementing a CLI yourself using XcodeGraph (not XcodeProj) that runs after generation, but that’ll slightly increase the generation time, and it won’t work for commands that pre-generate before the command’s action, like tuist cache, which pre-generates the project before warming the cache.

Manifest files don’t have the data points that you mention (e.g. where projects will be generated), but the internal XcodeGraph graph does. Hence why I think the validation shouldn’t happen in manifest files, which should remain as declaration layers.

Would you be interested in giving it a shot? I can give you some pointers to get a first prototype out there so that people can try, and then we iterate from there.