
Agent Skills as an Open Standard: How Claude, OpenAI, and Google All Adopted the Same Format

Agent skills started with Claude but became an open standard adopted by OpenAI, Google DeepMind, and others. Here's what that means for AI builders.

MindStudio Team

The Surprising Agreement Behind Modern AI Agents

Something worth paying attention to: Anthropic, OpenAI, and Google — three companies competing aggressively for the same developer mindshare — have all adopted the same format for describing what their AI agents can do.

The protocol at the center of this is the Model Context Protocol (MCP). Anthropic introduced it in November 2024 as an open standard for connecting AI models to external tools and data sources. Within months, OpenAI and Google had both announced support. That’s a fast adoption curve for any technical standard, and it has real consequences for anyone building with AI agents, workflows, and integrations today.

This article breaks down what agent skills are, how MCP became the standard, and what it means practically if you’re building in this space.


What Agent Skills Are and Why the Format Matters

An agent skill is a structured description of something an AI agent can do — search the web, send an email, query a database, generate an image. The AI model reads this description and decides when and how to invoke it.

Before standardization, each major AI provider had its own format for these:

  • OpenAI called them function calls, defined using JSON Schema
  • Anthropic called them tools in Claude’s API, with a similar but distinct JSON Schema format
  • Google used function declarations in Gemini’s API, different again

Each format communicated the same core information — what the function is called, what it does, what parameters it accepts — but the structural differences were enough to make tools non-portable across providers.
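To make that divergence concrete, here is the same web-search tool expressed in two of those native formats. The field names follow the publicly documented OpenAI Chat Completions and Anthropic Messages API conventions; treat the exact shapes as a hedged sketch rather than canonical definitions.

```python
# The same web-search tool, described in two providers' native formats.
# Both carry identical information; only the wrapper structure differs.

search_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "The search query to run"}
    },
    "required": ["query"],
}

# OpenAI Chat Completions: the tool sits inside a {"type": "function"}
# envelope, and the JSON Schema lives under "parameters".
openai_tool = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web for current information on a topic.",
        "parameters": search_schema,
    },
}

# Anthropic Messages API: the tool is a flat object, and the same
# JSON Schema lives under "input_schema".
anthropic_tool = {
    "name": "search_web",
    "description": "Search the web for current information on a topic.",
    "input_schema": search_schema,
}

# Same payload, different shape -- which is exactly why a tool built
# for one provider could not be dropped into another without translation.
assert openai_tool["function"]["parameters"] == anthropic_tool["input_schema"]
```

The schema itself is identical in both cases; everything that breaks portability lives in the wrapper.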

Why Portability Was a Real Problem

If you built a tool definition for GPT-4o, you couldn’t drop it into a Claude agent without rewriting it. If you built an integration library for one provider, it didn’t translate cleanly to others. Developers of frameworks like LangChain and CrewAI maintained separate connectors for each model family.

The overhead wasn’t catastrophic, but it added up. Every new integration got rebuilt multiple times. Teams choosing an AI provider were also, implicitly, choosing an integration ecosystem. Switching later felt expensive even when the migration itself wasn’t technically complex.

A shared format fixes most of that. When the structure for describing agent skills is consistent across providers, tools become reusable assets rather than provider-specific code.


How MCP Became the Standard

Anthropic released the Model Context Protocol in November 2024 as an open-source project under an MIT license, with a public specification and reference implementations in Python and TypeScript.

MCP defines a client-server architecture:

  • MCP servers expose capabilities (tools, resources, prompts) to AI systems
  • MCP clients — AI agents or the applications that host them — connect to servers to discover and call those capabilities

The three core primitives MCP defines:

  1. Tools — callable functions the model can invoke (search, email, database query)
  2. Resources — data the model can read (files, API responses, database records)
  3. Prompts — reusable prompt templates stored server-side

Tools are by far the most-used primitive. They’re described using JSON Schema — already familiar to developers and readable enough for AI models to interpret the intent behind each one.

Here’s a simplified example of what an MCP tool definition looks like:

{
  "name": "search_web",
  "description": "Search the web for current information on a topic.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "The search query to run"
      }
    },
    "required": ["query"]
  }
}

The model reads the description to understand when to use the tool. The inputSchema tells it exactly what to pass. Results come back in a structured format the model can use in its next step.
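As an illustration of that last step, here is a minimal stdlib-only sketch of the kind of check a host application might run on a model's tool-call arguments before executing them. Real MCP implementations rely on a full JSON Schema validator; `validate_arguments` and its error strings are inventions for this example.

```python
# Minimal sketch: checking a model's tool-call arguments against the
# tool's inputSchema before execution. Real hosts use a complete JSON
# Schema validator; this covers only required fields and basic types.

TYPE_MAP = {"string": str, "number": (int, float), "integer": int,
            "boolean": bool, "object": dict, "array": list}

def validate_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return a list of problems; an empty list means the call is well-formed."""
    problems = []
    for field in schema.get("required", []):
        if field not in arguments:
            problems.append(f"missing required field: {field}")
    for name, value in arguments.items():
        spec = schema.get("properties", {}).get(name)
        if spec is None:
            problems.append(f"unexpected field: {name}")
        elif not isinstance(value, TYPE_MAP[spec["type"]]):
            problems.append(f"wrong type for {name}: expected {spec['type']}")
    return problems

# The search_web schema from the example above.
schema = {
    "type": "object",
    "properties": {"query": {"type": "string",
                             "description": "The search query to run"}},
    "required": ["query"],
}

assert validate_arguments(schema, {"query": "MCP adoption"}) == []
assert validate_arguments(schema, {}) == ["missing required field: query"]
```

This gate is what lets the host reject a malformed call before any side effect happens, rather than trusting the model to always produce valid arguments.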

Why Anthropic Released It as Open Source

This wasn’t a purely altruistic move. A standard that other providers adopt grows the ecosystem of compatible tools — and all of those tools also work with Claude. The more tools that support MCP, the more valuable Claude becomes as an agent platform, even for teams primarily using other models.

The same logic has driven every successful open protocol: by making the standard free to use, Anthropic gave the broader ecosystem strong incentives to build on it.


OpenAI Joins: 2025

OpenAI announced MCP support in their Agents SDK in early 2025. The announcement was significant because OpenAI already had their own function calling format — one that most early agent frameworks were built around.

Adopting MCP meant acknowledging that an Anthropic-originated standard had gained enough momentum to matter. Their implementation lets agents built with OpenAI’s SDK connect to any MCP server, giving GPT-4o-based agents access to the growing catalog of MCP-compatible tools.

OpenAI kept their existing function calling format as an option alongside MCP. Developers can use whichever layer suits their use case — and MCP compatibility is available for those who need cross-provider portability.


Google DeepMind Joins: 2025

Google announced MCP support in 2025, integrating it into the Gemini API and related development tooling. Their Vertex AI platform — the production environment many enterprise teams use to deploy models — also moved toward MCP compatibility.

Like OpenAI, Google had their own format for tool use in Gemini. Adopting MCP was a pragmatic response to developer demand and the ecosystem momentum that had already built up around the protocol.

With three major model families supporting MCP, the standard crossed a practical threshold. At that point, it’s no longer an experiment — it’s infrastructure.


The Three-Layer Architecture

When you build with MCP, there are three components to understand:

  1. The AI application (client) — could be Claude, GPT-4o, Gemini, or a custom agent built with any framework
  2. MCP servers — each exposes a set of tools, with server-side logic handling the actual execution
  3. The underlying services — your CRM, your email provider, your database — which the servers connect to

An agent can connect to multiple MCP servers simultaneously, composing capabilities from different sources. A single agent might use one server for web search, another for database access, and a third for email — all defined using the same protocol, all accessible through the same interface.
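The composition pattern above can be sketched in a few lines. The server names and lambda handlers below are hypothetical stand-ins for real MCP connections; the point is the single flat tool catalog the agent sees.

```python
# Sketch: one agent, several MCP servers, one flat registry of tools.
# Routing a call back to the right server is the host's job, not the model's.

def make_registry(servers: dict) -> dict:
    """Flatten {server: {tool: handler}} into {tool: (server, handler)}."""
    registry = {}
    for server_name, tools in servers.items():
        for tool_name, handler in tools.items():
            if tool_name in registry:
                raise ValueError(f"tool name collision: {tool_name}")
            registry[tool_name] = (server_name, handler)
    return registry

# Hypothetical servers with stub handlers standing in for real execution.
servers = {
    "search-server": {"search_web": lambda q: f"results for {q!r}"},
    "db-server": {"query_db": lambda sql: f"rows for {sql!r}"},
    "mail-server": {"send_email": lambda to: f"sent to {to}"},
}

registry = make_registry(servers)

# The agent sees one catalog of three tools from three servers.
server, handler = registry["query_db"]
assert server == "db-server"
assert handler("SELECT 1") == "rows for 'SELECT 1'"
```

The collision check matters in practice: because every server speaks the same protocol, nothing stops two servers from exporting the same tool name, and the host has to resolve that.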

This is what makes MCP useful beyond the format question. It’s not just shared JSON structure — it’s a shared communication layer that any model and any tool can speak.


What This Means for People Building with AI

For Developers

If you’re writing agent code today, MCP changes a few things in practice:

  • Build tools once, use them across models. An MCP server built for Claude works with GPT-4o and Gemini without modification.
  • Draw on an existing ecosystem. Thousands of MCP servers have been published since the standard launched — for databases, productivity tools, APIs, internal systems, and more.
  • Decouple your tool layer from your model choice. If you want to switch models or test the same workflow across multiple models, your tool definitions don’t change.

Frameworks like LangChain, CrewAI, and AutoGen have all added MCP support, which means the pattern integrates without requiring a rewrite of existing tooling.
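As a sketch of that decoupling, a thin adapter per provider is all the model-specific code you need: the MCP definition stays constant, and only the wrapper changes. The wrapper shapes below follow the publicly documented OpenAI and Anthropic tool formats as of this writing; treat them as assumptions, not a maintained compatibility layer.

```python
# The tool layer stays constant; a small adapter handles each
# provider's wrapper shape.

mcp_tool = {
    "name": "search_web",
    "description": "Search the web for current information on a topic.",
    "inputSchema": {"type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"]},
}

def to_openai(tool: dict) -> dict:
    """MCP definition -> OpenAI Chat Completions 'tools' entry."""
    return {"type": "function",
            "function": {"name": tool["name"],
                         "description": tool["description"],
                         "parameters": tool["inputSchema"]}}

def to_anthropic(tool: dict) -> dict:
    """MCP definition -> Anthropic Messages API 'tools' entry."""
    return {"name": tool["name"],
            "description": tool["description"],
            "input_schema": tool["inputSchema"]}

# One definition, two providers: switching models never touches the tool.
assert to_openai(mcp_tool)["function"]["parameters"] == \
       to_anthropic(mcp_tool)["input_schema"]
```

Swapping models then means swapping the adapter call, not rewriting the tool catalog.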

For No-Code Builders

Even if you’re not writing JSON Schema directly, standardization matters. When the underlying protocol is stable and widely adopted, the platforms and integrations you build on are more likely to stay compatible over time. Your automations don’t break because a provider changed their proprietary format.

MCP support is also becoming a meaningful signal when evaluating platforms. It’s a sign that a tool is building on shared infrastructure rather than a proprietary layer that could be deprecated.

For Enterprise Teams

For organizations deploying AI at scale, standard interfaces reduce vendor risk. If your tool integrations are built on MCP, switching model providers — or running different workloads on different models — doesn’t require rebuilding your capability layer.

That flexibility matters in enterprise procurement. It’s harder to be locked in when the integration layer is shared infrastructure that three major providers all support.


Where MindStudio Fits Into the MCP Ecosystem

MindStudio supports MCP in two directions, and both matter for understanding where it sits in this ecosystem.

Exposing agents as MCP servers. If you build a workflow or agent in MindStudio — a lead qualification flow, a document processing pipeline, a content generation agent — you can expose it as an MCP server callable by other AI systems. Claude Code, LangChain agents, CrewAI crews, or any other MCP-compatible client can invoke that workflow as a tool without custom API work on your end.

This means the AI workflows you build in MindStudio aren’t isolated. They become part of the broader agent ecosystem, accessible to any system that speaks MCP.

The Agent Skills Plugin for developers. For teams writing their own agent code, MindStudio’s @mindstudio-ai/agent npm package offers a different abstraction. It gives agents 120+ pre-typed capabilities — agent.sendEmail(), agent.searchGoogle(), agent.generateImage(), agent.runWorkflow() — as simple method calls. Auth, rate limiting, and retries are handled automatically so agent logic stays focused on reasoning, not plumbing.

It works cleanly with Claude Code, LangChain, CrewAI, and other agentic frameworks. And since MindStudio connects to 1,000+ business tools out of the box — HubSpot, Salesforce, Google Workspace, Slack, Notion, and more — the workflows you call through MCP or the Agent Skills Plugin can trigger real actions across your existing stack.

You can start building on MindStudio for free.


Frequently Asked Questions

What is the Model Context Protocol (MCP)?

MCP is an open protocol that Anthropic introduced in November 2024 to standardize how AI models connect to external tools and data. It uses a client-server architecture: MCP servers expose tools, resources, and prompts; MCP clients (AI agents or the applications that host them) connect to those servers and invoke capabilities. OpenAI and Google both announced support in 2025, making it the primary interoperability standard for agent skills. The full MCP specification and documentation are publicly available.

Did agent skills start with Claude?

Not quite — and the distinction is worth making. OpenAI introduced function calling (the model-level mechanism for invoking structured tools) in June 2023, and Anthropic added similar tool use to Claude’s API in 2024. But the open standard for agent skills — MCP as a shared, provider-neutral protocol — originated with Anthropic. OpenAI had a format; Anthropic built a protocol designed from the start for broad adoption. That’s what made it a standard rather than just one company’s API.

What’s the difference between function calling and MCP?

Function calling is the model-level behavior: an AI decides to invoke a tool and returns a structured request specifying the tool name and parameters. MCP is the transport-level protocol that governs how tool definitions are exchanged between AI systems and external servers. You can use function calling without MCP — many systems do — but MCP adds a standardized discovery and communication mechanism on top, making tool libraries portable across providers and deployable as standalone servers.

Are MCP tools compatible with every AI model that supports the protocol?

Structurally, yes. A tool exposed via an MCP server is accessible to any MCP-compatible AI client, and compatibility is solid across Claude, GPT-4o, and Gemini as of 2025. How well a given model actually uses a tool is another matter: the model reads the tool’s description to decide when to call it, so a vague description degrades behavior even when the protocol plumbing works perfectly.

How do I start building with MCP?

Anthropic provides reference implementations in Python and TypeScript with documentation at modelcontextprotocol.io. You define your tools using JSON Schema, implement the handlers that run when each tool is called, and run the server locally or deploy it. If you’d rather not build the protocol layer yourself, platforms like MindStudio let you expose existing workflows as MCP servers without writing server code.
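The define-handlers-then-serve workflow described above can be reduced to a toy, stdlib-only sketch. A real server would speak the MCP protocol over stdio or HTTP via the official SDK; the `tool` decorator, `list_tools`, and `call_tool` names here are illustrative, not the SDK's API.

```python
# Toy sketch of an MCP-style server's core: register tools with JSON
# Schema definitions, expose discovery, and dispatch incoming calls.

TOOLS = {}

def tool(name: str, description: str, input_schema: dict):
    """Register a handler together with its MCP-style definition."""
    def decorator(fn):
        TOOLS[name] = {
            "definition": {"name": name, "description": description,
                           "inputSchema": input_schema},
            "handler": fn,
        }
        return fn
    return decorator

@tool("search_web", "Search the web for current information on a topic.",
      {"type": "object",
       "properties": {"query": {"type": "string"}},
       "required": ["query"]})
def search_web(query: str) -> str:
    # Stub handler; a real implementation would call a search API here.
    return f"(stub) top results for {query!r}"

def list_tools() -> list[dict]:
    """What a client sees during discovery."""
    return [entry["definition"] for entry in TOOLS.values()]

def call_tool(name: str, arguments: dict) -> str:
    """What runs when a client invokes a tool."""
    return TOOLS[name]["handler"](**arguments)

assert list_tools()[0]["name"] == "search_web"
assert call_tool("search_web", {"query": "MCP"}) == "(stub) top results for 'MCP'"
```

Everything the protocol layer adds on top of this (transport, sessions, result framing) is what the reference SDKs handle for you.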

Why does it matter that OpenAI and Google adopted MCP?

A standard only becomes a standard when enough parties use it. With OpenAI and Google adopting MCP alongside Anthropic, any tool built to the protocol is accessible to agents running on all three major model families. For builders, this means MCP is safe to invest in — you’re building on infrastructure backed by the entire major AI provider ecosystem, not betting on a format tied to one company’s API decisions.


Key Takeaways

  • Agent skills — structured capabilities AI agents can invoke — were defined inconsistently across providers until Anthropic introduced MCP as an open standard in November 2024.
  • OpenAI and Google both adopted MCP in 2025, making it the de facto standard for agent tool interoperability across the three dominant AI model families.
  • MCP uses JSON Schema to describe tools in a format that’s both human-readable and reliably interpreted by AI models, with a standardized protocol layer for how tool definitions and results move between systems.
  • For builders, MCP means tool integrations are portable: build an MCP server once and it works with Claude, GPT-4o, and Gemini without rewriting.
  • MindStudio supports MCP natively — you can expose agents as MCP servers for other AI systems to call, and use the Agent Skills Plugin to give custom agents 120+ pre-built, typed capabilities.

If you’re building agents or workflows and want to stay compatible with the broader ecosystem without reinventing the integration layer, MindStudio is worth trying. Start free and have something running in under an hour.
