What Is the Cursor Composer 2 Controversy? How Open-Source Attribution Works in AI
Cursor built Composer 2 on Kimi K2.5 without disclosure. Learn what happened, why it matters for open-source AI, and what the license actually requires.
When Your AI Tool Doesn’t Say What’s Powering It
The Cursor Composer 2 controversy caught many developers off guard, less because the story itself was explosive than because of what it revealed about the open-source attribution norms that hold the AI ecosystem together.
In mid-2025, community members discovered that Cursor’s flagship multi-file editing feature, Composer 2, was running on Kimi K2.5, a model released by Moonshot AI. Cursor hadn’t said so publicly. No mention in the changelog, no documentation update, no acknowledgment that the feature relied on an external open-weight model.
The fallout raised questions that matter well beyond Cursor’s product decisions: What do AI model licenses actually require? When does building on open-weight models create an obligation to disclose? And what does transparency mean in the age of commercial AI products?
What Is Cursor Composer 2?
Cursor is an AI code editor built on Visual Studio Code. It integrates large language models directly into the development workflow, letting engineers write, refactor, explain, and debug code through natural language. It’s grown quickly and become a go-to tool for developers who want tight AI integration in their everyday work environment.
Composer 2 is Cursor’s most technically ambitious feature. Traditional AI coding assistants help you work on one file at a time. Composer 2 takes a different approach: it reads and edits across multiple files simultaneously, coordinating changes through an entire codebase based on a single natural language description.
That’s a genuinely hard problem. It requires a model to maintain coherent context over long sequences and understand how changes in one file ripple through others. Cursor positioned Composer 2 as a major upgrade, and early user feedback was positive.
What Cursor didn’t mention: the model doing the heavy lifting was Kimi K2.5.
What Is Kimi K2.5?
Kimi K2.5 is a large language model developed and released by Moonshot AI, a Beijing-based AI research company known for its work on long-context language understanding and reasoning.
Moonshot has released several iterations of the Kimi model family, progressively improving performance on code generation, multi-step reasoning, and complex instruction following. Kimi K2.5 was notable for being offered as an open-weight model — meaning Moonshot published the trained model weights for external use, rather than locking them behind a proprietary API.
That openness matters. Publishing model weights allows other developers to run the model locally, fine-tune it, integrate it into products, and build on top of it. It’s a direct contribution to the broader AI research and development ecosystem.
But open-weight doesn’t mean unrestricted. Like most publicly released AI models, Kimi K2.5 comes with a license. That license includes conditions about how the model can be used, referenced, and credited — conditions that sit at the center of the Cursor controversy.
How the Community Uncovered It
The discovery didn’t come from Cursor. It came from developers doing their own detective work.
Large language models have recognizable behavioral signatures. The way a model formats responses, handles edge cases, produces specific tokens, or fails in particular ways can function as a fingerprint. Researchers have developed systematic methods for model identification — some based on direct probing (prompting with inputs known to produce distinctive outputs from specific models), others based on statistical analysis of large response samples.
In this case, multiple developers noticed that Composer 2 was behaving in ways that matched Kimi K2.5’s known patterns. The fingerprinting tests weren’t definitive proof in isolation, but the consistency of results across independent tests made a strong case.
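The statistical flavor of fingerprinting described above can be sketched in a few lines. Everything here is illustrative: real investigations sample thousands of responses from live APIs, but the core idea is to build a response-distribution signature for the product and compare it against signatures from candidate models. The toy strings below stand in for real probe outputs.

```python
# Illustrative sketch of statistical model fingerprinting.
# The response strings are toy data, not real model outputs.
from collections import Counter
import math

def signature(responses):
    """Build a token-frequency signature from a list of response strings."""
    counts = Counter()
    for text in responses:
        counts.update(text.split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def similarity(sig_a, sig_b):
    """Cosine similarity between two frequency signatures."""
    keys = set(sig_a) | set(sig_b)
    dot = sum(sig_a.get(k, 0.0) * sig_b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in sig_a.values()))
    norm_b = math.sqrt(sum(v * v for v in sig_b.values()))
    return dot / (norm_a * norm_b)

# Toy samples: the product's outputs plus two candidate models' outputs.
product = ["Sure! Here's the refactored code.", "Sure! Here's the fix."]
candidate_a = ["Sure! Here's the updated code.", "Sure! Here's the patch."]
candidate_b = ["Certainly. Below is the solution.", "Certainly. See below."]

sig_p = signature(product)
print(similarity(sig_p, signature(candidate_a)))  # stylistically closer
print(similarity(sig_p, signature(candidate_b)))  # stylistically farther
```

A single probe like this proves nothing on its own, which matches what happened here: no one test was definitive, but consistent results across many independent probes made the case.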
Findings circulated on X, GitHub, and developer forums. The conversation quickly moved from “is this actually Kimi?” to “why didn’t Cursor say so, and does this violate the license?” Cursor’s muted initial response — which neither confirmed nor denied the underlying model — compounded the frustration.
What the Kimi K2.5 License Actually Requires
This is where the controversy has real technical and legal weight. Understanding what happened requires understanding how open-weight AI models are licensed — and where those licenses leave genuine room for interpretation.
Open-Weight Is Not the Same as Open-Source
The terms are often used interchangeably, but they mean different things.
Open-source software means the source code is published under a license — such as MIT, Apache 2.0, or GPL — that grants specific rights to use, modify, and redistribute, subject to conditions. Because the source itself is public, anyone can inspect exactly how the software works.
Open-weight AI models publish the trained parameters (the “weights”) that define how the model behaves. But the training data, fine-tuning details, and infrastructure are often proprietary. You can run the model, but you don’t necessarily have full visibility into how it was created.
By the Open Source Initiative’s definition of open-source AI, very few models qualify. Most “open” AI models are more accurately described as open-weight — which is valuable, but a narrower form of openness.
What the License Terms Cover
AI model licenses are often adapted from software licenses, with additional provisions specific to how models are deployed and presented. Common elements include:
- Permission for commercial use — Most permissive licenses allow the model to be used in commercial products
- Attribution requirements — Credit to the original model developers must appear in documentation or product materials
- Prohibition on misrepresentation — The license may explicitly prohibit presenting the model as original proprietary work
- Derivative work terms — Conditions on fine-tuning and distributing modified versions
Kimi K2.5’s license includes attribution requirements. When a company uses the model in a commercial, public-facing product, the expectation is that Moonshot AI receives credit as the source.
The Deployment Gray Area
Here’s where it gets genuinely complicated: what counts as “distributing” an AI model?
Traditional software licenses tie attribution requirements to distribution — sharing software with others. If you run software internally without distributing it, many permissive licenses don’t require attribution in the same way.
But when Cursor deploys Kimi K2.5 on its servers and serves outputs to thousands of paying users, is that distribution? Most current AI model licenses were not written with this question fully in mind. Two reasonable positions exist:
- The narrow view: Cursor is using the model internally to generate outputs. It’s not distributing the model itself. Attribution requirements may not apply in the same way as they would if Cursor were redistributing the weights.
- The broad view: The license’s spirit — and potentially its explicit language — requires credit when the model is used commercially to serve users. The distinction between “internal use” and “commercial deployment at scale” is meaningful.
Some AI model licenses have started explicitly addressing commercial hosting, requiring attribution in user-facing product documentation regardless of whether the weights are redistributed. If Kimi K2.5’s license includes such language, Cursor’s non-disclosure moves from a transparency failure to a more direct compliance issue.
What Attribution Would Have Looked Like
If Cursor had met a standard attribution requirement, the minimum would have been:
- A disclosure in product documentation: “Composer 2 is powered in part by Kimi K2.5, developed by Moonshot AI”
- A link to the Kimi K2.5 license
- No marketing language implying the underlying model was proprietary Cursor technology
None of that requires significant effort. The absence of it made the situation look like a deliberate choice rather than an oversight.
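One lightweight way to make that kind of attribution hard to forget is to keep a model manifest in the product repo and generate the user-facing credit line from it. This is a hypothetical sketch: the field names, the wording, and the placeholder license URL are all illustrative, not a standard format.

```python
# Hypothetical sketch: a per-feature model manifest kept in the repo,
# used to generate the attribution text product docs should carry.
# Field names, wording, and the URL are illustrative placeholders.
MODEL_MANIFEST = {
    "Composer 2": {
        "model": "Kimi K2.5",
        "provider": "Moonshot AI",
        "license_url": "https://example.com/kimi-k2.5-license",  # placeholder
    },
}

def attribution_lines(manifest):
    """Render one user-facing credit line per feature."""
    return [
        f"{feature} is powered in part by {info['model']}, "
        f"developed by {info['provider']} ({info['license_url']})"
        for feature, info in manifest.items()
    ]

for line in attribution_lines(MODEL_MANIFEST):
    print(line)
```

Generating the credit line from a single source of truth also means it can't silently drift out of date when the team swaps models.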
Why Companies Often Don’t Disclose Their Model Stack
Cursor’s non-disclosure isn’t unusual. There are real reasons companies don’t volunteer this information — even if those reasons don’t fully justify the approach.
Competitive sensitivity is the most common justification. Model selection is treated as strategic information. If a product’s capabilities come partly from a specific model, disclosing it hands competitors a roadmap.
Flexibility to change is another factor. Publicly committing to a specific model creates expectations that are awkward to walk back. Teams want to swap models as better options appear without it becoming a public story.
The “added value” framing is how internal teams often think about it. A product built on Kimi K2.5 may also include fine-tuning, custom prompt engineering, context management, and retrieval architecture. From an engineering perspective, these additions feel like the real product. The base model is an input, not the output.
That framing has limits, though. When the base model is released under a license with attribution requirements, commercial use without disclosure is a different situation from using a proprietary third-party API. You’re using something someone released openly under terms that include credit — and choosing not to give that credit.
There’s also a user trust issue. Developers paying for Cursor have a reasonable expectation of knowing what’s powering the tools they depend on. Whether the model is Kimi K2.5 or something else affects assessments of capability, vendor risk, and even data governance — since Moonshot AI is a Chinese company subject to different regulatory frameworks than US-based providers.
What This Means for Open-Source AI
The Cursor situation isn’t an anomaly. It’s an early signal of a pattern that will intensify as open-weight model releases become more common.
The License Legitimacy Problem
When developers and companies release AI models openly, they’re accepting real risk. Open releases benefit the broader ecosystem — they enable research, benchmarking, fine-tuning, and new applications. But they also hand potential commercial users a resource that can be built into products without acknowledgment.
If attribution norms aren’t enforced — through community pressure, legal action, or better license design — the incentive to release models openly decreases. Model developers who see their work commercialized without credit have fewer reasons to keep releasing. This is the same dynamic that shaped open-source software licensing debates for decades, now replaying in an AI context.
License Design Needs to Catch Up
Most AI model licenses were drafted with file distribution in mind, and those terms don’t map cleanly onto hosted inference. The field needs license language that explicitly covers:
- Commercial API serving and hosted inference
- Embedding models in consumer-facing products
- Attribution requirements tied to visibility and usage scale
- Fine-tuning and redistribution of modified versions
The Open Source Initiative has published criteria for what constitutes open-source AI, and model-specific frameworks like RAIL (Responsible AI License) attempt to address behavioral restrictions and attribution simultaneously. Progress is happening, but the gap between license intent and practical deployment is still significant.
Transparency as Infrastructure
There’s a constructive version of this story. Some AI products are building model transparency into their design. Being explicit about which model powers a feature doesn’t just serve legal compliance — it builds user trust, enables informed decision-making, and makes the product easier to debug and audit.
The Cursor controversy is partly a reminder that transparency isn’t an ethical extra. In an ecosystem built on open models and shared research, it’s foundational infrastructure.
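Building that transparency in can be as simple as recording model provenance alongside every output. A minimal sketch, assuming a generic `call_model` function stands in for whatever inference client a product actually uses, and an in-memory list stands in for real audit storage:

```python
# Minimal sketch of model-provenance logging. AUDIT_LOG and call_model
# are stand-ins for persistent storage and a real inference client.
import datetime

AUDIT_LOG = []

def generate(prompt, model_id, call_model):
    """Run inference and record which model produced the output."""
    output = call_model(prompt)
    AUDIT_LOG.append({
        "model": model_id,
        "prompt": prompt,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return output

# Stub standing in for a real inference call:
def fake_model(prompt):
    return f"response to: {prompt}"

result = generate("refactor this function", "kimi-k2.5", fake_model)
print(AUDIT_LOG[-1]["model"])  # every output is traceable to a model
```

With a record like this, "which model produced this output?" is a lookup rather than a forensic exercise — the opposite of the fingerprinting detective work the community had to do.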
How MindStudio Handles Model Transparency
If you’re building AI applications, knowing which model is running your core functionality isn’t optional — it’s a basic product requirement.
MindStudio’s no-code platform treats model selection as an explicit, first-class decision. When you build an AI agent or automated workflow, you choose the model directly from a library of 200+ options — Claude, GPT-4o, Gemini, and many others — without separate API accounts or configuration overhead. The model powering your application is always visible, documented, and under your control.
This matters for the same reasons the Cursor controversy does. If your application behaves unexpectedly, you can trace it to a specific model and its known characteristics. If you’re operating in a regulated environment, you can demonstrate to stakeholders exactly what’s powering your application. If a better model becomes available for your use case, you can switch models and test the difference explicitly — without any ambiguity about what changed.
That transparency also extends to the teams you build for. When a client or stakeholder asks “what AI is running in this?” you have a direct answer, because you made that choice deliberately rather than inheriting it from a black-box product layer.
You can start building with any of MindStudio’s models at mindstudio.ai — no API keys required, free to start.
Frequently Asked Questions
Did Cursor violate Kimi K2.5’s license?
Whether Cursor violated the license technically depends on the specific terms of the Kimi K2.5 license and how those terms apply to commercial API deployment. Most open-weight model licenses have attribution requirements that clearly apply to redistribution of the model itself. Whether hosting a model to serve users constitutes “distribution” under those licenses is a genuine legal gray area that hasn’t been resolved by courts. What’s clearer is that Cursor’s non-disclosure ran against community norms, the spirit of open-source attribution, and potentially against explicit language in the Kimi license around commercial use disclosures or misrepresentation.
What does open-source attribution mean for AI models?
Attribution in AI licensing means crediting the original developers of a model when you use it in a product or derivative work. In practice, this typically means including the model name and developer credit in your product documentation, linking to the original license, and not representing the model as your own proprietary creation. For open-weight models released to benefit the broader community, attribution is the mechanism that ensures original developers receive recognition for the substantial investment they made in releasing the model publicly.
Why do AI companies hide which models power their products?
The main reasons are competitive sensitivity, operational flexibility, and the perception that disclosure might undermine the product’s value proposition. Companies worry that disclosing the underlying model gives competitors a roadmap. They also want freedom to swap models without making news. There’s also an internal framing where the base model is just an input, with real value coming from fine-tuning, prompt engineering, and surrounding infrastructure. That framing breaks down when the base model is open-weight and released under an attribution-requiring license — at that point, non-disclosure stops being just a business call.
What is the difference between open-weight and open-source AI?
Open-weight AI means the trained model weights — the parameters that define the model’s behavior — are publicly released. Open-source AI, by the stricter definition, means the entire system is open: the weights, the training code, and the training data. Truly open-source AI is rare because training data involves complex rights and competitive considerations. Most “open” AI models are open-weight only, which still offers significant benefits for research and deployment, but is a narrower form of openness than the term “open-source” usually implies in software contexts.
Can I legally use open-weight AI models in commercial products?
Usually yes, but you need to read the specific license. Most open-weight model licenses permit commercial use. The conditions vary: some require attribution in documentation, some restrict certain types of commercial deployment above a revenue or usage threshold, and some prohibit specific uses regardless of commercial intent. The Kimi K2.5 situation is a good reminder that “open-weight” doesn’t mean “no strings attached” — and that the strings matter more as your product scales.
What should developers do when a tool doesn’t disclose its underlying model?
Ask the company directly — many will answer, even if they don’t volunteer the information proactively. Look for technical blog posts, changelog notes, or community forum discussions where model details sometimes appear. If model transparency is essential to your use case — for regulatory compliance, vendor risk assessment, or technical evaluations — prioritize tools that make model selection explicit. And when building your own AI applications, document which models you’re using from day one. It’s far easier than reconstructing that information later.
Key Takeaways
The Cursor Composer 2 controversy is a concrete example of how open-source AI licensing works in practice — and where it breaks down under commercial pressure.
- Cursor built Composer 2 on Kimi K2.5 without public disclosure, which the developer community identified through model fingerprinting techniques
- Kimi K2.5 is an open-weight model from Moonshot AI released under a license that includes attribution requirements — triggering genuine questions about compliance
- The legal gray area centers on whether commercial hosting and serving of a model constitutes “distribution” under open-weight model licenses — a question most existing licenses don’t cleanly answer
- Non-disclosure is common in the AI product industry, but it conflicts with open-source norms and in cases like this one, potentially with explicit license terms
- The broader implication: if attribution norms erode, model developers have fewer reasons to release work openly — which ultimately weakens the ecosystem everyone is building on
Model transparency matters whether you’re evaluating a third-party AI tool or building your own application. MindStudio makes that transparency straightforward — choose from 200+ models, know exactly what’s running in your workflow, and build applications you can fully explain. Try MindStudio free at mindstudio.ai.