Claude's 3 New Creative App MCP Connectors: What Works, What Fails, and What's Actually Useful
Blender, Adobe, and SketchUp MCP connectors are live. SketchUp built an apartment with no doors. Here's an honest breakdown of all three.
Anthropic released a set of MCP connectors for Claude this month, and the creative tools community noticed immediately. Blender, Adobe, and SketchUp — three connectors, three very different results. If you’ve seen the hot takes about Claude replacing 3D artists and video editors, here’s what actually happened when someone ran the tests.
The SketchUp connector generated a one-bedroom apartment. No door between the living room and the bedroom. No door from the bedroom to the bathroom. A kitchen island positioned in a way that makes reaching the sink a genuine puzzle. It’s the kind of floor plan you’d produce if you’d only ever read a description of an apartment but never lived in one — which, to be fair, is exactly what happened.
That’s the honest state of these connectors right now. Impressive enough to generate real artifacts. Not reliable enough to hand off to a client.
What MCP Actually Means (Before You Assume It’s Screen Control)
Before getting into each connector, one clarification matters: MCP stands for Model Context Protocol. It is not computer use. Claude is not moving your mouse around Blender’s interface or clicking through Adobe’s menus.
MCP is a backend protocol — computer talking to computer. Claude issues commands natively to the application through a structured interface. The distinction matters because it sets the ceiling on what’s possible. Claude can only do what the MCP server exposes. If the server doesn’t expose a function, Claude can’t reach it.
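That ceiling can be sketched in a few lines. The registry and tool names below are hypothetical, invented for illustration rather than taken from any actual Blender or Adobe MCP server, but the shape is the point: the model can only dispatch to functions the server chose to register.

```python
# Minimal sketch of the MCP exposure constraint. Every name here is
# hypothetical; a real server registers its own tool set.

TOOLS = {
    # Each entry is a function the server explicitly exposes.
    "create_cube": lambda size=1.0: f"cube(size={size})",
    "set_material": lambda name, color: f"material({name}, {color})",
}

def call_tool(name: str, **kwargs) -> str:
    """Dispatch a model-issued command to an exposed tool."""
    if name not in TOOLS:
        # Anything the server never registered is simply unreachable.
        raise KeyError(f"tool '{name}' is not exposed by this server")
    return TOOLS[name](**kwargs)
```

If the model asks for `sculpt_donut` and the server never exposed it, the call fails at the dispatch layer; no amount of prompting routes around that.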
This is also why the Adobe connector’s limitations make sense once you understand the architecture, and why the Blender connector can do more than you’d expect while still failing in specific, predictable ways. The MCP and agent runtime concepts are worth understanding before you build expectations around any of these integrations.
Blender: Genuinely Impressive, Genuinely Expensive
The Blender MCP connector is the most technically ambitious of the three. Setup requires Claude Desktop, the “control your computer” skill enabled, and the Blender MCP installed — available directly under Claude Desktop’s Connectors section via an install button. There’s also been an unofficial version circulating for over a year, so the underlying idea isn’t new.
The benchmark used for testing was the Blender Guru donut tutorial. If you’re not in the 3D world: it’s a four-hour beginner tutorial that’s become the canonical “can this tool actually do Blender” test. The prompt was simple — “Make me a donut in Blender” — and Claude produced something. Not nothing. Something.
What followed was two hours of back-and-forth iteration. Claude added a coffee cup. The coffee cup clipped into the donut. The sprinkles clipped through the plate. At one point, Claude decided the scene needed to be set in a desert. At another, it changed the coffee mug handle into something resembling a Bavarian pretzel, unprompted, for reasons that remain unclear.
The camera angle problem was persistent. Claude kept defaulting to extreme macro photography framing when asked to render, then overcorrecting when told to pull back. When texture files from the original Blender Guru tutorial were loaded in — PBR files, downloaded mid-session — the donut did look noticeably more realistic. The coffee cup got a decent texture pass. But the clipping issues remained.
The final failure mode is the most instructive one. After extended iteration, Claude went magenta. The entire render — magenta. This is what happens when you blow past the LLM context window. The model loses coherence, starts congratulating itself on work it hasn’t finished, and eventually crashes out. The session consumed approximately 60% of the session tokens on a 5x Max plan — which runs $200 per month. One donut tutorial. Sixty percent of a monthly plan’s session budget.
That’s not a knock on the technology so much as a calibration point. If you’re a Blender expert, the MCP connector is probably most useful as an assistant for complex, tedious tasks — the kind of work that involves navigating hundreds of nodes or layers to find a single problematic element. Hirokazu Yokohara has demonstrated this well: using MCP for geometry node graphs, where the value isn’t in generating the scene from scratch but in handling the parts of the workflow that are mechanically complex and time-consuming. The geometry node graphs in those examples are the kind of thing that gives anyone with ComfyUI experience a mild anxiety response.
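That “find the one bad node among hundreds” chore is easy to picture in code. The graph format below is invented for illustration, a flat dict standing in for what a real session would query through Blender’s Python API; the sketch just flags nodes whose output feeds nothing.

```python
# Hypothetical node-graph audit: locate nodes whose output is never
# connected. The data format is invented; a real Blender MCP session
# would pull this from the scene's node tree, not a literal dict.

nodes = {
    "noise_tex":  {"outputs_to": ["color_ramp"]},
    "color_ramp": {"outputs_to": ["bsdf"]},
    "old_mixer":  {"outputs_to": []},   # orphaned: nothing reads it
    "bsdf":       {"outputs_to": ["output"]},
    "output":     {"outputs_to": []},   # terminal node, expected empty
}

def orphaned_nodes(graph, terminals=("output",)):
    """Return nodes whose output feeds nothing (excluding terminals)."""
    return sorted(
        name for name, info in graph.items()
        if not info["outputs_to"] and name not in terminals
    )
```

Tedious at 5 nodes, miserable at 500, and exactly the kind of mechanical pass an assistant can run while the human keeps the creative decisions.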
For beginners hoping Claude will just learn Blender for them? The output is better than nothing, and arguably better than what a complete novice would produce manually. But it’s not the four-hour Blender Guru tutorial. It’s a donut with a desert background and a pretzel-handled coffee cup that clips through everything.
Adobe: The Connector That Isn’t What It Sounds Like
The Adobe connector is where the gap between marketing framing and actual capability is widest. When Anthropic announced Adobe integrations alongside Blender and SketchUp, the reasonable inference was that Claude could now work with Photoshop, Premiere, and Illustrator. It cannot.
The connector operates at Adobe Express level. That’s a meaningful distinction. Adobe Express is a simplified, browser-based tool aimed at social media content and quick edits. Photoshop is a professional compositing application. Premiere is a nonlinear video editor. Illustrator is a vector design tool. These are not the same product category, and the MCP connector does not bridge that gap.
What the connector can do: reframe images for different aspect ratios, apply basic adjustments, handle the kinds of tasks Adobe Express was built for. The test case was a 9:16 conversion of an image — a standard social media reframe. The process took 3 minutes and 14 seconds. The subject ended up off-center. Claude offered to fix the centering, which would have taken another 3 minutes and 14 seconds. A manual crop in Photoshop takes about 13 seconds.
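The arithmetic behind that reframe is trivial, which is what makes the three-minute round trip sting. Here is a centered-crop sketch; the function name and logic are illustrative, not the connector’s actual code.

```python
# Centered crop to a target aspect ratio (e.g. 9:16 for a vertical
# social reframe). Illustrative only, not the Adobe connector's code.

def center_crop_box(width, height, target_w=9, target_h=16):
    """Return (left, top, right, bottom) for the largest centered
    crop of a width x height image matching target_w:target_h."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Source is too wide: keep full height, trim the sides.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    # Source is too tall (or exact): keep full width, trim top/bottom.
    new_h = round(width / target_ratio)
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)
```

For a 1920x1080 frame this yields a 608-pixel-wide centered vertical slice — the 13-second Photoshop crop, expressed as one function call.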
White balance adjustments showed similar results. The output leaned magenta (a recurring theme in this review, apparently). Rather than wait another three to four minutes for a second pass, the faster path was just opening Photoshop and doing it directly.
The honest use case for the Adobe connector right now is narrow: if you’re already working in an automated pipeline and need Claude to handle Express-level tasks without human intervention, it works. If you’re expecting it to replace any part of a professional Adobe workflow, it doesn’t. The connector is real, the integration is functional, and the ceiling is lower than the announcement implied.
Platforms like MindStudio handle this kind of multi-tool orchestration differently — 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows — which matters when you’re trying to build pipelines that span tools with different capability ceilings, rather than assuming any single connector covers the full stack.
SketchUp: The Most Successful Failure
SketchUp is the connector that produced the most usable output and also the most structurally absurd one. The test: generate a small one-bedroom apartment. Claude produced a complete file. Walls, rooms, furniture, a kitchen island.
No door between the living room and the bedroom. No door from the bedroom to the bathroom. A kitchen island positioned such that accessing the sink requires navigating around it in a way that would fail any basic usability review.
The file imported cleanly. The spatial relationships between rooms were roughly correct. The proportions weren’t embarrassing. If you’re trying to communicate a rough layout concept — the kind of thing you’d sketch on a napkin before a client meeting — this output is in that territory. It’s not a construction document. It’s not even a schematic. But it’s a starting point that a human with SketchUp knowledge could iterate on.
Claude’s own suggestion after seeing the output: take a screenshot, send it back, and it’ll make the corrections. That feedback loop is the actual workflow here. Claude generates a first pass, you review it, you describe what’s wrong, it adjusts. The SketchUp connector is probably the most honest of the three about what agentic 3D generation currently looks like — it produces something real, with obvious errors, that a skilled human can fix faster than Claude can.
Of the three connectors, SketchUp is arguably the most successful, which says something about where the bar is set right now.
The Agentic Video Problem (A Related Data Point)
The connector review comes alongside a separate but related test: fully agentic video editing versus human-edited shorts. The comparison is instructive.
A human-edited 30-second short required five cuts, some sound effects for transitions, and an adjustment layer. That’s it. Simple edit, clear intentionality. The fully agentic version — same script, same reference characters, no direction given — came out choppy. The cuts didn’t breathe. The rhythm was absent.
Both Cance 2.0 and Kling’s Omni model do handle L and J cuts well within a single generation. The problem is what happens when you stitch multiple 15-second clips together using ffmpeg without editorial judgment. The individual clips can be good. The assembled piece lacks the rhythm that comes from a human deciding which moment to cut on and why.
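The stitch itself is mechanically simple, which is the problem: ffmpeg’s concat demuxer joins clips in list order with stream copy, applying no judgment about where each cut lands. A sketch of that job, with placeholder filenames:

```python
# Sketch of the agent's mechanical stitch: ffmpeg's concat demuxer
# joins clips in list order. Filenames and output path are placeholders.

def build_concat_job(clips, output="assembled.mp4"):
    """Return (concat_list_text, ffmpeg_command) for a straight join."""
    listing = "\n".join(f"file '{c}'" for c in clips) + "\n"
    cmd = [
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", "clips.txt",      # the listing would be written here
        "-c", "copy",           # stream copy: no re-encode, no finesse
        output,
    ]
    return listing, cmd
```

Every editorial decision — which frame to cut on, whether a beat needs air — happens before this step or not at all; `-c copy` just butts the clips together.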
This isn’t a permanent limitation — agentic editing workflows will improve. But the current state is that a skilled editor working with AI tools produces better output than an agent working alone. The agent needs the human to set the intent.
That dynamic shows up across all three connectors. Claude with Blender MCP is better when a Blender expert is driving it. Claude with SketchUp is better when an architect is reviewing the output. The tool amplifies skill; it doesn’t replace it.
Understanding how to structure that human-AI collaboration is increasingly the core skill. The Claude overview and agent-building fundamentals are worth having in your mental model before you start building workflows around any of these connectors.
Where These Connectors Are Actually Useful
The honest answer is: in the hands of someone who already knows the tool.
The Blender MCP connector is useful for navigating complex node graphs, finding problematic layers in large scenes, and handling the mechanical parts of a workflow that don’t require creative judgment. Hirokazu Yokohara’s geometry nodes example is the right mental model — not “Claude builds the scene,” but “Claude handles the part of the scene that would take me 45 minutes of clicking.”
The Adobe connector is useful if you’re building automated pipelines that need Express-level operations without human intervention. It’s not useful if you’re expecting Photoshop-level control.
The SketchUp connector is useful as a first-pass generator for rough spatial concepts. It will produce something you can react to. It will not produce something you can ship.
All three connectors share the same fundamental constraint: they’re as good as the MCP server’s exposed functions, and they degrade as the context window fills. The magenta render at the end of the Blender session isn’t a Blender problem or a Claude problem specifically — it’s what happens when any LLM runs out of context and starts confabulating. If you’re building workflows around these connectors, context management isn’t optional. It’s the whole game.
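One blunt form of that context management can be sketched directly. The token accounting below is a crude word count and the budget is an assumed number, purely illustrative; the point is the shape: pin the instructions, keep the recent turns, drop the middle before the model goes magenta.

```python
# Illustrative context-window trim: keep the first message(s) (e.g. a
# system prompt) plus as many recent messages as fit a pseudo-token
# budget. Word-count "tokens" and the budget value are assumptions.

def trim_history(messages, budget=1000, keep_first=1):
    """Keep the first keep_first messages and the most recent
    messages that fit within budget pseudo-tokens."""
    cost = lambda m: len(m.split())
    head = messages[:keep_first]
    spent = sum(cost(m) for m in head)
    tail = []
    for msg in reversed(messages[keep_first:]):
        if spent + cost(msg) > budget:
            break  # oldest remaining turns get dropped
        tail.append(msg)
        spent += cost(msg)
    return head + tail[::-1]
```

Real systems summarize rather than drop, but even this crude window beats the alternative the Blender session demonstrated: silent overflow followed by confabulation.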
For builders thinking about how to structure AI-assisted creative workflows more systematically, the agent memory architecture from the Claude Code source leak offers a useful framework for thinking about how context and memory interact in long-running agentic tasks — the same failure modes show up whether you’re editing video or rendering donuts.
The connectors are real. The limitations are real. The useful applications are narrower than the announcements suggested, and more interesting than the dismissals allow. A skilled professional using these tools as assistants will produce better work faster. An unskilled user hoping the tools will do the skilled work for them will get an apartment with no doors.
That’s where we are. It’s a reasonable place to be, given where we were a year ago. It’s also not where the hot takes landed.
One more thing worth flagging for builders thinking about the spec-to-artifact pipeline more broadly: tools like Remy take a different approach to generation — you write a spec in annotated markdown, and it compiles into a complete TypeScript full-stack application, backend, database, auth, and deployment included. The source of truth is the spec; the code is derived output. It’s a different layer of abstraction than MCP connectors, but the underlying question is the same: how precisely can you specify intent, and how faithfully does the tool execute it? The SketchUp apartment without doors is a spec problem as much as a model problem.