
What Is the Cursor Composer 2 Controversy? How Open-Source Attribution Works in AI

Cursor built Composer 2 on Kimi K2.5 without disclosure. Learn what happened, why it matters for open-source AI, and what the license actually requires.

MindStudio Team

What Happened With Cursor Composer 2

In mid-2025, users of Cursor — the popular AI-powered code editor — started noticing something odd about Composer 2, an upgraded version of the tool’s core AI coding assistant. When prompted in specific ways, the model’s responses matched patterns associated with Kimi K2.5, an open-source model released by Chinese AI company Moonshot AI.

The Cursor Composer 2 controversy quickly became one of the more discussed examples of open-source attribution and model transparency in AI products — not just for what Cursor did or didn’t do, but for what it exposed about how the entire industry handles open-source model use.

This article covers what actually happened, how open-source AI licensing works, and what “attribution” actually requires — legally and ethically.


The Discovery and What It Revealed

The pattern in AI communities is familiar by now. Someone notices a model’s behavior matches a known open-source release, posts about it, and others run their own comparisons. In this case, community members ran what’s sometimes called identity probing — prompting the model in ways that reveal underlying architecture or checking for response signatures that match specific known models.

The results pointed consistently toward Kimi K2.5. Response patterns, reasoning structure, and specific output characteristics aligned with Moonshot AI’s model rather than with anything proprietary.
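Comparisons of this kind ultimately reduce to measuring how closely one model's outputs resemble another's on the same probe prompts. A toy sketch of one such signature check, using character n-gram overlap — the similarity metric and threshold here are illustrative assumptions, not the actual methods community testers used:

```python
def ngrams(text: str, n: int = 4) -> set:
    """Character n-grams give a crude stylistic 'signature' of a response."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def signature_similarity(a: str, b: str, n: int = 4) -> float:
    """Jaccard similarity between two responses' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def probe_match(candidate_responses, reference_responses, threshold=0.6):
    """Average pairwise similarity over a probe set; flag if above threshold."""
    scores = [
        signature_similarity(c, r)
        for c, r in zip(candidate_responses, reference_responses)
    ]
    avg = sum(scores) / len(scores)
    return avg, avg >= threshold
```

Real-world probing looks at much richer signals than surface n-grams — reasoning structure, refusal phrasing, tokenizer quirks, and self-identification under unusual prompts — but the underlying logic is the same: collect paired outputs and quantify how strongly they align with a known model.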

What Cursor Said (and Didn’t Say)

Cursor had framed Composer 2 as a significant upgrade — better performance, more capable assistance. What wasn’t mentioned was that the experience users were paying for was substantially powered by a publicly available open-source model, not something Cursor had built internally.

The controversy wasn’t really that Cursor used Kimi K2.5. Using open-source models is standard practice across the AI industry. The issue was that users weren’t told. That gap between what was implied and what was actually happening drove the backlash.

When the situation became public, Cursor acknowledged using Kimi K2.5. But by then the conversation had shifted to a harder question: what do AI companies actually owe users — and the open-source community — when they build on openly licensed models?


What Is Kimi K2.5?

Kimi K2.5 is a large language model developed by Moonshot AI, a Beijing-based AI research company. It’s part of the Kimi K series, which focuses specifically on reasoning, technical problem-solving, and long-context tasks.

The model was released with open-source weights, meaning anyone can download and use it directly. Moonshot AI has been part of a broader wave of open model releases from companies including Meta (Llama series), Alibaba (Qwen), and Mistral AI — all releasing models that approach or match proprietary frontier models on specific benchmarks.

Why It Made Sense for Cursor to Use It

Kimi K2.5 performs competitively on coding benchmarks, often approaching GPT-4o or Claude on specific programming tasks. For a product like Cursor, which is entirely focused on code generation and editing, this kind of specialized performance is directly relevant.

Building a foundation model from scratch is extraordinarily expensive and time-consuming. Most software companies — even well-funded ones — can’t justify it. Using a strong open-source model as a foundation and building product quality on top of it (custom system prompts, integrations, UI, context handling) is a completely rational approach.

The problem wasn’t the decision. It was the silence around it.


The Attribution Question: What Does the License Actually Require?

This is where the controversy gets technically interesting, and where a lot of the discourse misses important nuance.

How Kimi K2.5 Is Licensed

Moonshot AI released Kimi K2.5 under terms that permit commercial use and modification — but with attribution requirements. “Attribution” in this context means acknowledging the original model and its creators, typically in documentation, a model card, or release materials.

This is a meaningful distinction: the license didn’t require Cursor to display “Powered by Kimi K2.5” in their UI. It required acknowledgment of the model’s origins somewhere in their official materials.

Whether Cursor met that bar depends on what appeared in their documentation, terms of service, or model cards — details that weren’t immediately clear during the controversy. If attribution existed only in legal fine print users never see, that’s arguably compliant but still ethically thin.

Here’s the uncomfortable truth: a company can be fully compliant with an open-source license while still being misleading to users.

Satisfying the legal attribution requirement might mean a mention in a README file, a line in a licensing page, or an entry in a model card. None of that reaches the person paying $20/month and assuming they’re getting a proprietary AI system.
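For illustration, a minimal attribution entry of the kind that might satisfy such a requirement — this is a hypothetical snippet, not Cursor's or Moonshot AI's actual wording — could look like:

```markdown
<!-- NOTICE / model card entry (hypothetical example) -->
## Third-Party Models

Code assistance features in this product are powered in part by
Kimi K2.5, developed and released by Moonshot AI. Kimi K2.5 is used
under the terms of its model license; see THIRD_PARTY_LICENSES for
the full text.
```

A few lines like these in a repository or documentation page may check the legal box while remaining invisible to every paying user of the product.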

The Cursor situation made visible a gap that exists across most of the AI industry: the minimum required by the license and the honest communication users expect are often very different things.


How Open-Source AI Licensing Actually Works

Open-source software licensing has decades of history and well-established norms. Open-source AI model licensing is only a few years old. The frameworks are still catching up to the technology.

The Main License Categories

Permissive licenses (MIT, Apache 2.0): These allow almost any use — commercial, modification, redistribution — with minimal conditions. Attribution is typically required in source code or documentation, not in the user-facing product. The barrier is low.

Copyleft licenses (GPL, AGPL): These require derivative works to be released under the same license. AGPL specifically addresses the SaaS loophole (more on that below). Very few AI models use AGPL because it’s too commercially restrictive.

Custom AI licenses: Meta’s Llama Community License, various Mistral license variants, and Moonshot’s model terms fall into this category. They’re purpose-built documents that don’t map cleanly onto traditional open-source categories. They often allow commercial use up to a certain scale, restrict specific applications, and define attribution in their own terms.

Dataset-specific licenses: Creative Commons variants (CC-BY for attribution-required, CC-BY-SA for share-alike) appear more in training data than in model weights, but they affect what models can legally be built on.

The SaaS Loophole

This matters a lot for understanding the Cursor situation.

Traditional copyleft licenses were written with distributed software in mind — you ship code to users, who run it on their machines. When that code is covered by GPL or similar licenses, you must release your source code if you distribute the software.

But when software runs on a server and users interact with it only through an API or web interface, the user never receives the software. They only receive its outputs. This means the distribution trigger for copyleft requirements often doesn’t apply.

The same logic affects AI models. Even a model with strict sharing requirements may not trigger those requirements when it’s used as a backend service, because you’re not distributing the model — you’re distributing its outputs. The Open Source Initiative has been working to address this gap in their definition of open-source AI, but there’s no universal standard yet.

What Attribution Means in Practice

For traditional software, attribution means keeping copyright notices and license files intact. For AI models, it’s significantly less defined:

  • Does crediting the model in a technical README satisfy the requirement?
  • Do end users of a product need to be informed of the underlying model?
  • If you fine-tune a model, is the resulting model a derivative work requiring its own disclosure?

Courts haven’t ruled on most of these questions. The open-source AI community is actively debating them. Companies are making judgment calls in a legal gray zone, and the Cursor controversy is one result of that ambiguity.


Why Model Transparency Actually Matters to Users

The legal questions are genuinely uncertain. The practical stakes for users are not.

You’re Making Decisions Based on What You Think You’re Getting

When you subscribe to a product, you’re forming expectations. If you believe Composer 2 uses a proprietary model developed specifically by Cursor’s AI team, you’re evaluating the product differently than if you know it’s built on a publicly available open-source model that you could potentially access through other means.

That difference matters for:

  • Trust — If a company isn’t transparent about which AI model powers their core feature, what else might they obscure?
  • Value assessment — Paying $20–40/month for a product wrapper around a free model isn’t inherently wrong, but users deserve to understand what exactly they’re paying for.
  • Competitive evaluation — Knowing the underlying model lets you make meaningful comparisons with alternatives.

The Open-Source Social Contract

Open-source AI development operates on a kind of implicit agreement. Companies and researchers release models freely, accepting real commercial disadvantage, because they believe in open development or want their work recognized and used widely.

When a company quietly builds a product on someone’s open-source work without acknowledgment, it undermines that agreement. It signals to model developers that releasing openly may result in their contributions being absorbed silently rather than credited and built upon in ways that acknowledge the source.

Moonshot AI invested significant resources in developing and releasing Kimi K2.5. Their reputation, future funding, and competitive position are partly tied to how widely their work is recognized. Silent adoption doesn’t serve those interests — and over time, it could disincentivize future open releases from companies that see what happens when they give work away.

The Broader Industry Pattern

Cursor isn’t uniquely bad here. The honest assessment is that this is a widespread industry practice, and Cursor got caught in a particularly visible way.

There’s a spectrum of disclosure practices:

  1. Full transparency — The model is named in the product UI, documentation, and marketing.
  2. Technical disclosure — The model is acknowledged in release notes or license files, but not surfaced to users.
  3. Silent integration — The model is used with no public acknowledgment anywhere.
  4. Implied proprietary — The product implies originality it doesn’t have.

The Cursor situation appeared to fall somewhere between the third and fourth categories — which is why the community reaction was as pointed as it was.


What Good Practice Looks Like

Companies that handle model disclosure well tend to follow a clear pattern.

  • They name the underlying models in documentation and, where relevant, in the product itself.
  • They explain what proprietary work they’ve built on top — fine-tuning, system prompt engineering, integrations, context handling.
  • They update users when the underlying model changes, treating it as a meaningful product change rather than an internal engineering detail.
  • They acknowledge open-source contributions in their communications.

This isn’t just ethical behavior — it’s strategically sound. Users who clearly understand what they’re paying for (the product layer, the UX, the integrations, the workflow) are better positioned to see the real value than users who later feel they were deceived about the foundation.

Transparency also helps when models change. If users know the product runs on Kimi K2.5 today, they’re more prepared for a future announcement that it’s switching to a different model — and less likely to feel the rug has been pulled out.


How MindStudio Handles Model Visibility

The Cursor controversy is fundamentally about what happens when model choice is hidden from users. MindStudio takes the opposite approach.

When you build an AI agent or workflow in MindStudio, model selection is explicit and central to the experience. The platform gives builders access to 200+ AI models — including Claude, GPT-4o, Gemini, and a growing list of open-source models — and you choose which model powers each step of a workflow. You can see exactly what’s running, why you chose it, and how it compares to alternatives.

This matters practically for builders. If you’re deploying an AI product for your team or your customers, you need to know what model is doing the work. You need to evaluate it against alternatives, understand its cost profile, and make informed decisions about when to upgrade or switch.

MindStudio’s visual builder lets you test different models across the same workflow — run Kimi K2.5 on one step and Claude Sonnet on another, compare outputs, and tune accordingly. That kind of model-level visibility is built into how the platform works, not bolted on as an afterthought.

If you’re building AI-powered tools and want clarity on what’s running under the hood, you can start for free at mindstudio.ai.


Frequently Asked Questions

Did Cursor actually violate Kimi K2.5’s open-source license?

The honest answer is: it’s unclear, and the ambiguity itself is revealing. Whether a violation occurred depends on the specific license terms, how Cursor implemented attribution in their documentation, and whether the SaaS loophole applies to their use case. If attribution was present only in obscure legal documentation, it may be technically compliant while still being ethically problematic. This is an area of genuine legal uncertainty in open-source AI.

What does open-source attribution actually require?

It depends entirely on the license. Most permissive licenses (MIT, Apache 2.0) require attribution in source code or documentation — not in the user-facing product. Custom AI model licenses define their own terms. Attribution generally means crediting the original creators somewhere in your official materials, but it rarely requires prominent disclosure to end users. Community norms expect more transparency than the legal minimums, and those norms are still evolving.

What is Kimi K2.5 and who made it?

Kimi K2.5 is a large language model from Moonshot AI, a Beijing-based AI research company. It’s part of the Kimi K series, which targets reasoning and coding tasks specifically. The model was released with open-source weights under terms that allow commercial use, making it accessible to companies that want to build products without developing foundation models from scratch. It performs competitively on coding benchmarks, which explains its appeal for a product like Cursor.

Why wouldn’t a company just say which model they’re using?

There are several business reasons, even if none justify misleading users. Companies may want competitive ambiguity — if competitors don’t know your model stack, they can’t replicate it easily. They may worry that users will perceive “open source” as lower quality than “proprietary,” even when performance is identical. Or they may plan to switch models and prefer not to commit publicly. Understanding the business logic doesn’t make the practice acceptable, but it explains why it’s common.

Does the SaaS loophole mean open-source AI licenses are basically unenforceable?

For copyleft-style requirements (share your modifications), the SaaS loophole does significantly weaken enforcement. If you use a model as a backend service, you may not be legally required to release your fine-tuning or modifications. For attribution requirements, the situation is different — those typically apply to any commercial use, regardless of distribution method. But enforcement is still difficult, and the norms are underdeveloped compared to traditional software licensing.

Is this problem unique to Cursor, or does the whole AI industry do this?

This is widespread. Many AI products are built on open-source or third-party model foundations without clear user-facing disclosure. Cursor became a visible example because users caught it and the evidence was compelling. The issue isn’t unique to any one company — it reflects the absence of clear industry standards for model disclosure in commercial AI products. As open-source models become more competitive, this tension will intensify.


Key Takeaways

  • Cursor used Kimi K2.5, an open-source model from Moonshot AI, to power Composer 2 without disclosing this to users — creating a significant debate about transparency and open-source obligations.
  • Whether Cursor technically violated the license depends on how and where attribution was documented, but the ethical case for clear user communication exists independently of legal compliance.
  • Open-source AI licensing borrows from traditional software frameworks that weren’t designed for models deployed as services — creating genuine legal ambiguity that the industry hasn’t resolved.
  • The SaaS loophole means that copyleft-style requirements often don’t apply to models used as backends, but attribution requirements typically still do in some form.
  • The minimum required by a license and the transparency users reasonably expect are often not the same thing — and the gap between them is where most of the industry’s credibility problems live.
  • Platforms that make model selection explicit give builders — and the products they create — a clearer, more accountable relationship with the AI infrastructure they depend on.

The Cursor controversy is a useful case study not because Cursor is uniquely bad, but because it made visible a pattern that runs through much of the AI industry. How companies handle that pattern — with transparency or opacity — will shape how much users and developers trust the products they rely on.

Presented by MindStudio
