Stop using focus when testing

Context

When running the tuist test command, a workspace is generated with a focus on only the targets that will participate in the build process.

This approach has several drawbacks:

  • Inability to run tests in parallel for multiple platforms.
  • Incorrect behavior when trying to explicitly pass a list of test targets via the --test-targets argument.

1. Parallel testing

On CI, it is convenient to run independent pipeline steps in parallel.

One such scenario is deploying cross-platform applications (for example, iOS and tvOS).

Currently I use fastlane scan for testing, and it works.

I would like to switch to tuist test, which supports selective test runs. However, the command regenerates the project's xcodeproj files on every invocation, so two parallel invocations against the same checkout can race and conflict during the build.
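One way to sidestep the race described above is to give each platform's test job its own working directory, so each regeneration touches its own copy of the xcodeproj files. A minimal, runnable sketch of that CI layout follows; the real `tuist test` call is stubbed with `echo`, and the directory names are illustrative, not part of any tuist convention:

```shell
#!/bin/sh
# Sketch only: the real command would be `tuist test --platform <platform>`,
# stubbed here with `echo` so the structure itself is runnable.
# Each platform job gets an isolated working directory, so the xcodeproj
# regeneration performed by one job cannot race with the other.
set -e

run_platform_tests() {
  platform="$1"
  workdir="ci-work/$platform"       # isolated per-platform directory (illustrative name)
  mkdir -p "$workdir"
  # stands in for: (cd "$workdir" && tuist test --platform "$platform")
  echo "testing $platform in $workdir"
}

# platforms run as parallel background jobs
run_platform_tests iOS &
run_platform_tests tvOS &
wait
echo "all platform test jobs finished"
```

The same shape works whether the parallelism comes from background jobs on one machine or from separate CI runners, as long as each invocation regenerates into its own directory.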

2. Manually passing the list of targets

I have a script, built on the xcodeproj gem, that generates an XCTestPlan containing every unit-test target in the workspace.

I am thinking of abandoning the test-plan generation and passing the entire list via the --test-targets flag instead, but I ran into unexpected behavior (possibly a bug):

  1. On the first run, everything works correctly.
  2. On the second run, focusing kicks in and excludes the test targets whose tests ran during the previous invocation.

This results in an error:

The following targets were not found: A, B, C

Deleting the ~/.cache/tuist/SelectiveTests folder fixes the error, but it also throws away the cache :confused:

This behavior seems contradictory and, as things stand, would warrant an upfront error such as: --test-targets cannot be used together with --selective-testing.

Proposed solution

Do not modify the initial project graph generated by the tuist generate command. In theory, this should solve both problems.

The second problem, running all tests, could nevertheless be solved on the CLI side by adding a platform filter:
There is already a --platform argument that specifies the platform the scheme should be built for. Its behavior could be extended as follows:
if no scheme, test targets, or test plan are passed, the platform parameter is used as a filter over all test targets in the workspace.

The selective testing functionality can’t work without regenerating the project, so this is not something we can do, I believe.

Inability to run tests in parallel for multiple platforms.

What if we added that feature in Tuist? Something akin to tuist test --platforms iOS macOS --run-in-parallel. We’d generate the project once to have targets for both platforms and then execute them in parallel.

Incorrect behavior when trying to explicitly pass a list of test targets via the --test-targets argument

Can you flag this as an issue? This is a bug and is separate from the problem of running tests in parallel.

Having studied the source code, I finally understand why this is not possible.
Sadly :(

That looks quite cumbersome for a CLI, since there may be scenarios where only a few specific schemes should run rather than all tests.

I think it is worth considering the possibility of defining test plans at the manifest level.
For example, Config.swift could take a parameter describing a list of test configurations with unique names. Each test configuration specifies which test entities should be launched:

import ProjectDescription

let config = Config(
    testConfigurations: [
        // Define test configuration for iOS platform
        .testConfiguration(
            name: "iOS Unit Tests",
            testActions: [
                // Specific test targets
                .target(name: "TestTargetName", platform: .iOS),
                // Specific schemes
                .scheme(name: "Application Scheme Name", platform: .iOS),
            ]
        ),

        // Define test configuration for tvOS
        .testConfiguration(
            name: "tvOS Unit Tests",
            testActions: [
                // Specific test plans
                .testPlan(name: "Test Plan Name", platform: .tvOS),
                // All UnitTests targets of project
                .project(name: "ProjectName", excludeUITests: true, platform: .tvOS),
                // All UnitTests targets of workspace
                .workspace(excludeUITests: true, excludeTargets: [...], excludeProjects: [...], platform: .tvOS)
            ]
        ),

        // Combine multiple test configurations
        .testConfiguration(
            name: "All Unit Tests",
            testActions: [
                .testConfiguration(name: "iOS Unit Tests"),
                .testConfiguration(name: "tvOS Unit Tests")
            ]
        )
    ]
)

Next, you just need to call the command:

tuist test --configuration "All Unit Tests"

This command should

  1. extract and validate the list of all test actions
  2. generate a project graph:
  • For all .target(name:) create one common scheme (per platform)
  • For .scheme(name:) use the scheme itself
  • For each .testPlan(name:) create its own scheme
  • For each .project/.workspace create its own scheme

After generating the graph, run tests for all schemes:

  • Schemes within one platform should be executed sequentially
  • Schemes of different platforms should be executed in parallel
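The two ordering rules above can be sketched as a small, runnable shell script. The per-scheme test invocation is stubbed (logged to a file) and the scheme names are the hypothetical ones from the Config.swift example; this only illustrates the scheduling, not a real tuist command:

```shell
#!/bin/sh
# Sketch of the execution order described above. The log line stands in for
# the real per-scheme test invocation; scheme names are hypothetical.
log=scheme-runs.log
: > "$log"

run_scheme() {
  # stands in for a per-scheme test run such as: tuist test "$2" (hypothetical)
  echo "[$1] running scheme: $2" >> "$log"
}

run_platform_sequentially() {
  platform="$1"; shift
  for scheme in "$@"; do          # schemes of one platform run sequentially
    run_scheme "$platform" "$scheme"
  done
}

# schemes of different platforms run in parallel, one background job per platform
run_platform_sequentially iOS "iOS Unit Tests" &
run_platform_sequentially tvOS "tvOS Unit Tests" &
wait
echo "all schemes finished"
```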

Done: Tuist Test: targets not found with `test-targets` arguments · Issue #6942 · tuist/tuist

@marekfort Hi! What do you think about this proposal?

Considering grasping the ideas behind “configurations,” “targets,” and “schemes” is already non-trivial, I’d refrain from adding to that with “test configuration”.

@marekfort is the test destination used when hashing?

is the test destination used when hashing?

No, it is not, but I’d say it should be.

Considering grasping the ideas behind “configurations,” “targets,” and “schemes” is already non-trivial, I’d refrain from adding to that with “test configuration”.

I’m also leaning not to add another concept users would need to learn.

I understand the limitations of the current setup, so I’d propose taking incremental steps to make things better:

  • Allow specifying multiple destinations and/or platforms for tuist test
  • Enable running tests in parallel across multiple destinations

Even with those two implemented, you won’t be able to run a different set of unit tests for different platforms in parallel. I don’t have a good answer there right now – we might want to live with that limitation. I’d think such a scenario is mostly run from the CI – in which case, why not parallelize across machines instead of trying to parallelize on a single one?

What if we created a temporary, separate .xcworkspace for commands like tuist build and tuist test? Would that allow us to bypass the limitations of the current implementation?

If I understand correctly, selective test execution edits the list of targets in the scheme declared in the Workspace. This limits parallel execution across two platforms, since the scheme is shared but its contents must differ.

Or, as another approach: create a temporary, unique scheme in the xcworkspace for each run of the tuist build/test command and delete it once the command completes.

:warning: By the way: I also noticed that the current implementation of generating an “all-in” workspace scheme does not work for projects with multiple platforms; depending on the settings of some targets, it can lead to compilation errors such as duplicated tasks/outputs/etc.

I wonder if this can be a test plan itself. If I remember correctly, we refrained from creating another project/workspace because that’d be a completely different project for Xcode’s build system, and therefore, builds wouldn’t be incremental from previous artifacts in derived data.

Interesting! We haven’t given much thought to what selective testing means in this context. When we flag a test target as “succeeded” and bind that to a hash, we assume you are always using the same destination, a model that falls apart in a multi-platform testing scenario.

Can you invoke xcodebuild passing the destination to use for each tests target? Or is every xcodebuild invocation bound to a destination?
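For reference, xcodebuild takes its destination per invocation via the -destination option, and the man page allows the option to be repeated for test actions, in which case tests run on each listed destination. A command sketch, with workspace and scheme names hypothetical (this requires Xcode to actually run):

```
# Hypothetical names, shown for illustration only
xcodebuild test \
  -workspace App.xcworkspace \
  -scheme "App" \
  -destination 'platform=iOS Simulator,name=iPhone 15' \
  -destination 'platform=tvOS Simulator,name=Apple TV'
```

Note that the destinations within one such invocation are driven by a single build, so this by itself does not decouple the per-destination hashing discussed above.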