Marketing video: Introducing MCP

We’re starting to make videos to present new platform capabilities, and as part of that we thought: why not iterate on them in public, so others can participate and get a peek into our creative process? Here’s a first example: a video we’d like to create to announce our new MCP server.

Tuist MCP Server - 20s Announcement Video Script


Structure Overview

Timecode    Purpose
0s - 2s     Hook - grab attention immediately
2s - 6s     Announcement - MCP is live, here’s what it is
6s - 16s    Use cases - concrete examples in rapid succession
16s - 20s   Sign-off - tagline and logo

Script + Visuals


[0s - 2s] - Hook

Line:

“Your team ships every day. But nobody sees the full picture.”

Visual: A dark terminal window. Cursor blinking. No prompt yet. Just silence. The terminal feels like it’s waiting to be asked something.


[2s - 6s] - Announcement

Line:

“Tuist now ships an MCP server - connecting your project’s insights directly to your coding agent.”

Visual: The words “Tuist MCP” appear at the top of the terminal. Then a tool list streams in, like an agent loading its context:

> tuist_get_builds
> tuist_get_test_runs
> tuist_get_bundle_size
> tuist_get_trends
...

The agent is now armed. Short beat.
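(For readers curious what sits behind a tool list like this, here’s a minimal schematic sketch of MCP-style tool registration. This is not the real Tuist server or the official MCP SDK; the tool names come from the mockup above, but the registry shape and return values are illustrative only.)

```python
# Toy tool registry sketching how an MCP server exposes named tools
# that an agent discovers on connect. Illustrative only - the real
# Tuist MCP server and the MCP SDKs are structured differently.

TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return register

@tool("tuist_get_builds", "Recent build runs with durations")
def get_builds():
    # A real implementation would query the project's insights backend;
    # this placeholder just returns canned data.
    return [{"id": "b1", "duration_s": 142}]

@tool("tuist_get_test_runs", "Recent test runs with flakiness stats")
def get_test_runs():
    return [{"id": "t1", "flaky_rate": 0.34}]

def list_tools():
    """The names that stream into the agent's context on connect."""
    return sorted(TOOLS)

print(list_tools())  # → ['tuist_get_builds', 'tuist_get_test_runs']
```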


[6s - 9s] - Use Case 1: Build Regression

Line:

“Ask why your build regressed on Tuesday.”

Visual: Terminal. User types:

Why did my build get slower on Tuesday?

Agent responds, streaming in:

Build time increased by 38s on Tuesday.
Cause: PaymentsTarget was added as a dependency
of CoreTarget, creating a parallelization
contention point.

Would you like me to restructure
your architecture to fix this?

[9s - 12s] - Use Case 2: Flaky Tests

Line:

“Find your flakiest tests - and get a fix.”

Visual: Terminal. User types:

Which tests are flaky this week?

Agent responds:

Top flaky tests this week:
1. AuthTokenRefreshTests - fails 34% of runs
2. CheckoutFlowUITests  - fails 21% of runs
3. UserSessionTests     - fails 18% of runs

Working on a fix for AuthTokenRefreshTests...
[===========>      ] Analysing test history
[==================] Done

Likely cause: race condition on async teardown.
Opening a PR with suggested fix.

[12s - 15s] - Use Case 3: Bundle Bloat

Line:

“Spot what’s bloating your bundle before it ships.”

Visual: Terminal. User types:

What's grown the most in our bundle this sprint?

Agent responds:

Bundle size increased by 43% this sprint.
Found 12MB of duplicated images across
FeatureA and FeatureB targets.

Working on a fix...
[===========>      ] Scanning asset catalogs
[==================] Done

Moving shared assets to CommonAssets target.
Opening a PR with the changes.

[15s - 17s] - Use Case 4: Trends

Line:

“Or just ask how things are going - and get a real answer.”

Visual: Terminal. User types:

How is the project looking lately?

Agent responds:

- Build time up 12%
- Bundle size stable
- Number of modules increased by 3
- Flaky test rate down 8%

[17s - 20s] - Sign-off

Line:

“Tuist MCP. Less guessing, more shipping.”

Visual: Terminal fades. White Tuist logo centered on dark background. Tagline appears beneath: “Less guessing, more shipping.” Hold for one beat. Cut.


Notes for the Designer

  • Aesthetic: Everything lives inside a terminal. Dark background, monospace font, streaming text. No charts, no dashboards, no UI chrome.
  • Streaming effect: Agent responses should animate in line by line, like a real LLM streaming output. This is the key motion in the video.
  • Pacing: Each use case is a two-beat punch - user prompt appears, then agent response streams in fast. Cut before it feels slow.
  • Hook visual: The blinking cursor with no prompt yet is doing a lot of work. Give it a moment to breathe before anything appears.
  • Agent interface: Keep the terminal generic - no specific tool branding. It should read as “your agent, whichever one you use.”
  • Text on screen: The spoken lines can double as on-screen captions or supers, especially for silent autoplay on LinkedIn/X.
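(Since the streaming effect is the key motion in the video, here’s a minimal sketch for prototyping it in a plain terminal. The timing values are placeholders to tune by eye, not measured from any real agent.)

```python
import sys
import time

def stream_lines(lines, char_delay=0.01, line_pause=0.15):
    """Print text character by character, line by line, mimicking
    an LLM streaming its output into a terminal."""
    for line in lines:
        for ch in line:
            sys.stdout.write(ch)
            sys.stdout.flush()  # flush per character so motion is visible
            time.sleep(char_delay)
        sys.stdout.write("\n")
        time.sleep(line_pause)  # the "beat" between lines

# Dry run with the Use Case 1 response; set delays > 0 to see the effect.
stream_lines(
    [
        "Build time increased by 38s on Tuesday.",
        "Cause: PaymentsTarget was added as a dependency",
        "of CoreTarget.",
    ],
    char_delay=0,
    line_pause=0,
)
```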