Let your AI agent generate App Store screenshots as a native tool. Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client.
The Model Context Protocol is an open standard that lets AI agents use external tools natively. Instead of the agent reading API docs and making HTTP calls manually, it discovers your tools automatically and calls them like built-in functions.
With the AppScreenshotStudio MCP server installed, you can say “generate App Store screenshots for my app” and your agent handles everything — project creation, design generation, and rendering.
Run this in your terminal:

```shell
claude mcp add appscreenshotstudio -- npx -y @appscreenshotstudio/mcp
```
Then set your API key as an environment variable. Add this line to your shell profile (.bashrc, .zshrc, etc.):

```shell
export APPSCREENSHOTSTUDIO_API_KEY="sk_live_your_key_here"
```
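As a quick sanity check (a minimal sketch; the key value is a placeholder), you can confirm the variable is exported so that the npx-launched server process will inherit it:

```shell
# Placeholder key: replace with your real API key.
export APPSCREENSHOTSTUDIO_API_KEY="sk_live_your_key_here"

# Exported variables are inherited by child processes,
# including the server started via npx.
printenv APPSCREENSHOTSTUDIO_API_KEY
```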
Add this to your MCP configuration file (.cursor/mcp.json, settings.json, etc.):
```json
{
  "mcpServers": {
    "appscreenshotstudio": {
      "command": "npx",
      "args": ["-y", "@appscreenshotstudio/mcp"],
      "env": {
        "APPSCREENSHOTSTUDIO_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```

The MCP server gives Claude the tools. The skill teaches Claude how to use them well — it adds a structured workflow that researches your codebase for brand colors, key screens, and app context before generating, so you get better results on the first try.
After setting up the MCP server above, run:

```shell
npx @appscreenshotstudio/mcp install-skill
```
This copies a skill file to ~/.claude/skills/appscreenshotstudio/.
In Claude Code, type /appscreenshotstudio or just ask “generate App Store screenshots for my app”. Claude will research your codebase, then call generate-screenshots with rich context.

Without the skill: Claude has the tools but figures out the workflow on its own. With the skill: Claude follows a proven research-first workflow that produces better screenshots with fewer iterations.
The server exposes 8 tools that your agent discovers automatically.
Create a complete set of App Store screenshots. The agent researches your codebase first, then passes rich context for accurate, app-specific designs. Returns a project URL where you upload device screenshots and export final PNGs.
| Parameter | Type | Description |
|---|---|---|
| app_name* | string | Name of the app |
| app_description* | string | What the app does (1-3 sentences) |
| features | string[] | Key features in order of importance (max 10) |
| brand_colors | object | { primary, secondary?, accent? } — hex colors |
| mood | string | "energetic", "calm", "minimal", "bold", "professional", "playful" |
| device_id | string | Target device (default: iphone-6.9) |
| count | number | Number of cards, 3-10 (default: 5) |
| story_flow | string | "auto", "standard", "problem-solution", "benefit-first", "hero-intro", etc. |
| codebase_context | object | App context from codebase research: readme_summary, key_screens, color_tokens, target_audience, app_category, competitive_edge, ui_style, primary_user_flow, tech_stack. Persisted on the project for all future edits. |
Example prompt
“Generate App Store screenshots for my todo app called TaskFlow. It helps people organize tasks with smart due dates. Brand color is #7C3AED. Make 5 cards.”
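For reference, the agent might translate that prompt into tool arguments like the following. This is an illustrative sketch: the mood, features list, and codebase_context values are assumptions filled in per the parameter table above, not output from a real run.

```json
{
  "app_name": "TaskFlow",
  "app_description": "A todo app that helps people organize tasks with smart due dates.",
  "features": ["Smart due dates", "Task organization"],
  "brand_colors": { "primary": "#7C3AED" },
  "mood": "minimal",
  "device_id": "iphone-6.9",
  "count": 5,
  "codebase_context": {
    "app_category": "productivity",
    "key_screens": ["Task list", "Due date picker"]
  }
}
```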
Make changes to an existing project using natural language. Edit text (bold, italic, underline, highlight pills, gradient fills), colors, layouts, decorative shapes, laurel stats badges, floating UI snippets, device angles, and more. Optionally target specific cards.
| Parameter | Type | Description |
|---|---|---|
| project_id* | string | From a previous generate call |
| message* | string | What to change |
| card_indices | number[] | Target specific cards by index (0-based). If omitted, all cards are editable. |
| codebase_context | object | App context to enrich the edit. Same schema as generate-screenshots. |
Example prompt
“Add laurel wings around a 4.9 rating, scatter decorative sparkle shapes, and underline the key word in the headline.”
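An illustrative argument payload for such an edit (the project_id is a hypothetical placeholder; card_indices limits the change to cards 0 and 2):

```json
{
  "project_id": "proj_abc123",
  "message": "Add laurel wings around a 4.9 rating and underline the key word in the headline.",
  "card_indices": [0, 2]
}
```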
Upload local app screenshots into the device mockups of an existing project. Reads PNG, JPG, or WEBP files from your local filesystem, converts them to base64, and places them into the device frames — no browser needed.
| Parameter | Type | Description |
|---|---|---|
| project_id* | string | From a previous generate-screenshots call |
| screenshots* | array | Array of { file_path, card_index } — maps local files to card indices (0-based). Max 10. |
Example prompt
“Upload my Simulator screenshots into the TaskFlow project. Use ~/Desktop/screen1.png for card 0 and ~/Desktop/screen2.png for card 1.”
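An illustrative payload (the project_id and file paths are placeholders) mapping two local files to the first two cards:

```json
{
  "project_id": "proj_abc123",
  "screenshots": [
    { "file_path": "~/Desktop/screen1.png", "card_index": 0 },
    { "file_path": "~/Desktop/screen2.png", "card_index": 1 }
  ]
}
```

On macOS, such files can be captured from a booted iOS Simulator with `xcrun simctl io booted screenshot ~/Desktop/screen1.png`.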
Export a project to high-resolution PNGs. Returns download URLs for each card. Free — no credit cost.
| Parameter | Type | Description |
|---|---|---|
| project_id* | string | Project to render |
Example prompt
“Render the TaskFlow screenshots to PNGs.”
Retrieve a project's current state — cards, elements, backgrounds, device mockups, and metadata. Use it to inspect what's been generated before making edits.
| Parameter | Type | Description |
|---|---|---|
| project_id* | string | Project ID to retrieve |
Example prompt
“Show me the current state of the TaskFlow project.”
Generate an AI background for a specific card. Uses project metadata (brand colors, mood, theme) for contextual results. The background is applied directly to the card.
| Parameter | Type | Description |
|---|---|---|
| project_id* | string | Project containing the card |
| card_index* | number | Which card to apply the background to (0-based) |
| prompt* | string | Description of the background you want |
Example prompt
“Generate a dark gradient background with purple accents for card 0.”
Get a research checklist and strategy guide before generating screenshots. Returns file patterns to search in the codebase, story flow recommendations per app category, headline writing tips, and the codebase_context schema to fill in. Helps the agent gather the right information for the best results on the first try.
| Parameter | Type | Description |
|---|---|---|
| app_category | string | App category if known (fitness, finance, social, productivity, etc.) |
| platform | string | "ios", "android", or "both" (default: ios) |
Example prompt
“I need to create screenshots for a fitness app on iOS. What should I research first?”
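The corresponding arguments are minimal; an illustrative call for that prompt:

```json
{
  "app_category": "fitness",
  "platform": "ios"
}
```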
Show all supported device specs. No API call or credits needed.
Example prompt
“What devices does AppScreenshotStudio support?”
Research
The agent calls prepare-screenshot-brief and searches your codebase for app name, features, colors, screens, and target audience.
Generate
The agent calls generate-screenshots with rich codebase_context for accurate, app-specific designs. Context is persisted on the project.
Iterate
Ask for changes. The agent calls edit-screenshots to refine colors, headlines, layout, or story flow. Codebase context carries over automatically.
Upload
The agent calls upload-screenshots with your local Simulator or emulator screenshots to fill the device mockups — no browser needed.
Export
The agent calls render-screenshots to export final PNGs, or you can click “Download All” in the web app.
Pass any of these as device_id when generating screenshots.
| Device ID | Name | Size | Store |
|---|---|---|---|
| iphone-6.9 | iPhone 16 Pro Max | 1260 × 2736 | Required |
| ipad-13 | iPad Pro 13" | 2064 × 2752 | Required |
| android-phone | Android Phone | 1080 × 2340 | Required |
| android-tablet-10 | Android Tablet 7" | 1200 × 1920 | Required |
| apple-watch-ultra | Apple Watch Ultra 2 | 410 × 502 | Required |