MNSA-2026-004

Cursor Composer 2 Model Provenance Misrepresentation

Severity: High
Published: 2026-03-20
Last Updated: 2026-03-20
Prepared by: Monachus Solutions
Affected Product: Cursor IDE — Composer 2 Model
Affected Versions: All versions with Composer 2 enabled
Fixed Versions: No resolution as of publication

Executive Summary

Cursor (operated by Anysphere Inc.) shipped a rebranded derivative of Moonshot AI’s Kimi K2.5 model as its own “in-house” model (Composer 2) without required attribution, violating the Modified MIT License terms. This raises vendor trust, IP compliance, and supply chain continuity risks for all organizations using Cursor.

Field | Details
Prepared By | Monachus Solutions
Advisory Date | March 20, 2026
Vendor | Cursor / Anysphere Inc.
Affected Product | Cursor IDE — Composer 2 Model
Upstream Vendor | Moonshot AI (Kimi K2.5)
Severity | HIGH
Status | Active — No resolution as of publication
Distribution | All Monachus Clients

1. What This Means For Your Organization

Event Summary

On March 19, 2026, Cursor (operated by Anysphere Inc.) launched Composer 2, a coding model marketed as the product of in-house continued pretraining and scaled reinforcement learning. Within 24 hours, a leaked API model identifier and independent tokenizer analysis confirmed that Composer 2 is a reinforcement-learning fine-tuned derivative of Moonshot AI’s open-weight Kimi K2.5 model. Cursor’s blog post, benchmark charts, and product documentation made zero mention of Kimi K2.5 or Moonshot AI.

Moonshot AI’s head of pretraining publicly accused Cursor of violating the Kimi K2.5 license terms, which require prominent attribution for commercial users generating over $20 million per month in revenue. Cursor generates approximately $167 million per month.

Vendor Relationship Context

Cursor is a widely adopted AI-assisted coding IDE used by individual developers and engineering teams across industries. Organizations that deploy Cursor for software development are relying on Cursor’s model infrastructure to process, analyze, and generate source code. The Composer 2 model is positioned as Cursor’s default and recommended model for agentic coding workflows, meaning any organization using Cursor’s latest features is routing proprietary code through this model.

Data Exposure Risk

This is not a data breach in the traditional sense. The risk is structural, not exfiltration-based. Organizations using Cursor are sending proprietary source code, configuration files, API keys, internal documentation, and architectural patterns through a model whose true provenance was misrepresented. The concern is not that Moonshot AI has access to your data — Cursor hosts the model on its own infrastructure. The concern is threefold:

  1. Vendor transparency failure: Cursor deliberately obscured what model processes your code. If they misrepresent model provenance, what else might be misrepresented about data handling, model routing, or privacy controls?
  2. IP provenance uncertainty: Code generated by an unlicensed model derivative creates ambiguity about the intellectual property chain. While no legal precedent exists, the question of downstream IP exposure from license-violating model deployments is genuinely unsettled.
  3. Supply chain integrity: If Moonshot AI enforces the license through litigation or injunction, Cursor may be forced to pull or degrade Composer 2, disrupting workflows that depend on it.

Service Dependency Impact

Any engineering workflow that uses Cursor’s Composer 2 model is affected. This includes agentic code generation, multi-file refactoring, code review assistance, test generation, and documentation workflows. Organizations that have standardized on Cursor as their primary development environment face the highest exposure to disruption if the model is pulled or degraded.

Urgency Level

HIGH — No immediate data breach, but the vendor trust implications are significant. Organizations should assess their Cursor dependency, document their exposure, and prepare contingency plans within the next 1–2 weeks. Do not wait for Cursor’s official response before beginning your internal assessment.

Client-Facing Summary

If your customers, partners, or stakeholders ask whether you are affected by the Cursor/Kimi K2.5 controversy, the following language is appropriate:

“We are aware of the reports regarding Cursor’s Composer 2 model and its relationship to Moonshot AI’s Kimi K2.5. We are conducting an internal review of our Cursor usage and dependency. Our source code is processed on Cursor’s infrastructure and we have no indication of unauthorized data access. We are evaluating alternative tooling options and will update our vendor risk assessment accordingly. We take AI supply chain transparency seriously and are monitoring this situation closely.”

2. Am I Affected?

Risk Assessment

Factor | Rating | Reasoning
Vendor Trust | HIGH | Cursor deliberately misrepresented the provenance of its flagship model to users, investors, and the market. This is not an accidental omission — the model ID was left intact, prior Kimi K2.5 integration was removed from the UI, and the blog post was crafted to avoid any mention of the base model.
License Compliance | HIGH | Cursor's revenue ($167M/month) exceeds the Kimi K2.5 license attribution threshold ($20M/month) by more than 8x. The attribution requirement is explicit, covers derivative works, and Cursor's product displays zero attribution.
Supply Chain Risk | HIGH | If Moonshot AI pursues legal enforcement, Cursor may be forced to remove Composer 2 from production. Organizations dependent on this model face workflow disruption with no advance warning.
Data Exposure | LOW | No evidence of unauthorized data access. Cursor hosts the model on its own infrastructure. The risk is provenance misrepresentation, not data exfiltration.
IP Contamination | MEDIUM | Code generated by an unlicensed model derivative creates theoretical IP provenance questions. No legal precedent exists, but the ambiguity warrants documentation for audit trail purposes.

Evidence Summary

Technical Evidence

  • Leaked model ID: A developer intercepted Cursor’s API response revealing the model identifier accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast — explicitly naming the Kimi K2.5 base model within Anysphere’s namespace.
  • Tokenizer match: Moonshot AI’s head of pretraining confirmed that Composer 2’s tokenizer is identical to the Kimi K2.5 tokenizer, consistent with a reinforcement-learning fine-tune on top of the original model weights.
  • Prior UI integration: Cursor’s community forum contains a February 9, 2026 post reporting Kimi K2.5 appeared as a free model in Cursor’s model picker, then vanished after an update, consistent with the model being pulled from the UI before being relaunched as Composer 2.
  • Documentation page: Cursor maintained a dedicated Kimi K2.5 documentation page at cursor.com/docs/models/kimi-k2-5, describing it as a “Coding model from Moonshot,” confirming internal awareness and integration work.
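The tokenizer-match evidence above rests on a simple forensic principle: fine-tuning, including RL post-training, does not change a model's tokenizer, so an identical vocabulary is a strong lineage signal. The following is a minimal sketch of that comparison; the vocabularies shown are tiny hypothetical stand-ins, not the real Kimi K2.5 or Composer 2 vocabularies, which in practice you would load from each model's published tokenizer files.

```python
# Sketch of the tokenizer-fingerprint check used to link Composer 2 to
# Kimi K2.5. Vocabularies here are hypothetical stand-ins for illustration.

def tokenizer_fingerprint(vocab: dict[str, int]) -> tuple[int, int]:
    """Reduce a vocabulary to a comparable fingerprint: its size plus a
    stable hash of the sorted (token, id) pairs."""
    items = tuple(sorted(vocab.items()))
    return len(vocab), hash(items)

def same_lineage(vocab_a: dict[str, int], vocab_b: dict[str, int]) -> bool:
    """Two models whose vocabularies map the same tokens to the same ids
    almost certainly share a base model, since fine-tuning (including RL
    post-training) leaves the tokenizer untouched."""
    return tokenizer_fingerprint(vocab_a) == tokenizer_fingerprint(vocab_b)

# Hypothetical stand-in vocabularies, for illustration only.
base_model_vocab = {"<bos>": 0, "<eos>": 1, "def": 2, "return": 3}
suspect_model_vocab = {"<bos>": 0, "<eos>": 1, "def": 2, "return": 3}
unrelated_vocab = {"<s>": 0, "</s>": 1, "fn": 2, "let": 3}

print(same_lineage(base_model_vocab, suspect_model_vocab))  # True
print(same_lineage(base_model_vocab, unrelated_vocab))      # False
```

A match on a four-token toy vocabulary proves nothing, of course; the real signal comes from an exact match across a vocabulary of roughly 100,000 entries, where coincidence is implausible.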

Licensing Evidence

  • Kimi K2.5 is released under a Modified MIT License hosted on GitHub (MoonshotAI/Kimi-K2.5).
  • The license requires prominent display of “Kimi K2.5” branding for commercial products exceeding $20M monthly revenue.
  • Cursor’s annualized revenue exceeds $2 billion (approximately $167M/month), confirmed by multiple outlets.
  • Cursor’s product interface, blog post, and documentation display “Composer 2” with zero reference to Kimi K2.5 or Moonshot AI.

Vendor Response Evidence

  • Moonshot AI’s head of pretraining publicly tagged Cursor’s co-founder Michael Truell asking why the license was not being respected.
  • Two additional Moonshot employees posted confirmations that Cursor was not licensed for this use.
  • All posts were deleted within hours, suggesting a shift to legal/private negotiation.
  • As of March 20, 2026, Cursor has issued no public statement.

Historical Pattern

This is not an isolated incident. When Cursor launched the original Composer (v1) in October 2025, researcher Sasha Rush was asked directly whether it was a fine-tune of an existing open-source model. His response avoided answering the question, focusing instead on Cursor’s RL post-training work. Hacker News commenters subsequently alleged that Composer 1 was based on Qwen, another Chinese open-source model, after detecting an identical tokenizer — the same forensic method that exposed Composer 2. This pattern suggests a deliberate strategy of using open-weight models as base layers while marketing the result as proprietary.

Self-Assessment Checklist

Use the following questions to determine your organization’s exposure level:

  1. Does your organization use Cursor as a development tool?
  2. Have developers used the Composer 2 model specifically (vs. Claude, GPT, etc.)?
  3. Is proprietary or sensitive source code being processed through Cursor?
  4. Has your organization standardized on Cursor (vs. offering it as an optional tool)?
  5. Does your vendor risk assessment for Cursor document which models process your code?
  6. Are there compliance requirements (SOC 2, ISO 27001, HIPAA) that require knowing your AI supply chain?
  7. Do you have a contingency plan if Cursor degrades or removes Composer 2?

If you answered “yes” to three or more of questions 1–4, or “no” to any of questions 5–7: proceed immediately to the remediation plan in Section 3.

3. Close the Loop

Immediate Actions (24–48 Hours)

  1. Inventory Cursor usage: Identify all teams, repositories, and workflows using Cursor. Determine whether Composer 2 is selected as the active model (it is the default in recent versions).
  2. Document current model selection: Screenshot or log the model configuration for each Cursor instance in your organization. This creates an audit trail regardless of how the situation resolves.
  3. Pin model selection: Where possible, configure Cursor to use a known, directly-licensed model (Claude, GPT-4) rather than Composer 2 until the situation is resolved.
  4. Brief engineering leadership: Ensure your CTO/VP Engineering is aware of the situation and the vendor trust implications. Use this advisory as the briefing document.
  5. Monitor Cursor communications: Watch for an official response from Cursor/Anysphere. As of this advisory’s publication, none has been issued.
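For step 2 above, an append-only log is more durable than ad hoc screenshots. The sketch below records each instance's selected model as a JSON Lines entry; the field names are our own convention for illustration, not anything Cursor defines.

```python
# Minimal audit-trail helper for Immediate Action 2: log which model each
# Cursor instance had selected, with a UTC timestamp. Field names are a
# suggested convention, not a Cursor-defined schema.
import datetime
import json

def record_model_selection(host: str, user: str, model: str, log_path: str) -> None:
    """Append one audit entry per machine/user to a JSON Lines file."""
    entry = {
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "host": host,
        "user": user,
        "selected_model": model,
    }
    with open(log_path, "a") as f:  # append-only: never rewrite history
        f.write(json.dumps(entry) + "\n")
```

One line per instance, appended over time, gives you a defensible record of exactly when your organization moved off Composer 2, whatever the dispute's outcome.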

Short-Term Actions (1–2 Weeks)

  1. Update your vendor risk assessment for Cursor: Downgrade Cursor’s vendor trust score. Add a finding for “Model provenance misrepresentation” with the evidence documented in Section 2 of this advisory.
  2. Request transparency from Cursor: If your organization has an enterprise relationship with Cursor, formally ask: (a) which base model powers Composer 2, (b) whether they hold a valid license for commercial use, and (c) what their plan is for license compliance.
  3. Evaluate alternative AI coding tools: Assess alternatives including GitHub Copilot, Windsurf (Codeium), and direct API integrations with known model providers. This is a contingency exercise, not necessarily a migration trigger — but you need options ready.
  4. Review AI acceptable use policy: Ensure your organization’s acceptable use policy for AI tools addresses model provenance and vendor transparency. If it doesn’t, add it.

Long-Term Actions (30–90 Days)

  1. Establish AI supply chain documentation requirements: Any AI tool processing proprietary code should disclose which models are in use, their licensing terms, and the data processing chain. Build this into your vendor assessment questionnaire.
  2. Implement model provenance verification: For critical AI tooling, implement periodic checks of model identifiers and behavior signatures to detect undisclosed model changes. The same API interception technique that exposed Composer 2 can be used proactively.
  3. Re-evaluate Cursor relationship: Based on Cursor’s response (or lack thereof), make a formal continue/migrate decision. If Cursor resolves the license dispute and adds attribution, the product itself remains functional. If they stonewall, consider migration.
  4. Update incident response playbook: Add “AI vendor model provenance dispute” as a scenario category. This is a new class of vendor risk that existing playbooks likely do not cover.
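Long-Term Action 2 can be sketched as an allow-list check over captured API responses (for example, from an intercepting proxy log). The response shape and the model identifiers below are illustrative assumptions; the leaked identifier is the one documented in Section 2.

```python
# Sketch of a periodic model-provenance check (Long-Term Action 2).
# It parses a captured completion response and flags any reported model
# identifier that drifts from a verified allow-list. The response shape
# and the "composer-2" identifier are illustrative assumptions.
import json

ALLOWED_MODEL_IDS = {
    "accounts/anysphere/models/composer-2",  # hypothetical verified ID
}

def reported_model_id(raw_response: str) -> str:
    """Extract the model identifier from a captured API response."""
    return json.loads(raw_response)["model"]

def provenance_ok(raw_response: str) -> bool:
    """True when the vendor-reported model matches the allow-list."""
    return reported_model_id(raw_response) in ALLOWED_MODEL_IDS

# Example: a response naming an undisclosed base model fails the check.
captured = json.dumps(
    {"model": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast"}
)
print(provenance_ok(captured))  # False: reported ID is not on the allow-list
```

Run on a schedule against sampled traffic, a check like this would have surfaced the Composer 2 model ID on day one rather than relying on a third-party leak.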

Escalation Path

Channel | Path
Internal | Engineering Leadership → CISO/Security Lead → Legal/Compliance → Executive Team
Vendor | Cursor Account Manager → Cursor Security Team → Cursor Legal (formal written request)
Advisory | Contact your Monachus Solutions engagement lead for updated guidance as this situation evolves

Compliance Impact

This incident has implications for organizations operating under the following frameworks:

SOC 2 (Trust Services Criteria)

CC9.2 requires risk assessment of vendor dependencies. A vendor misrepresenting its technology stack is a material finding that should be documented in your vendor risk register.

ISO 27001 (Annex A)

Controls A.5.19–A.5.23 (supplier relationships) require organizations to monitor and review supplier services. Undisclosed model changes constitute a failure of supplier transparency.

GDPR / Privacy

If Cursor processes any personal data as part of code (e.g., test data, configuration with PII), the data processing chain has changed without notice. Review your DPA with Cursor for data sub-processor disclosure requirements.

ISO 42001 (AI Management)

For organizations pursuing AI governance certification, this incident highlights the need for AI supply chain transparency controls. Model provenance documentation should be a mandatory element of your AIMS.

Owner Assignments

Action Category | Suggested Owner | Target
Cursor usage inventory | Engineering Ops / DevOps | 24–48 hours
Model configuration pinning | Engineering Team Leads | 48 hours
Vendor risk assessment update | Security / GRC Team | 1 week
Formal vendor inquiry | Vendor Management / Legal | 1 week
Alternative tool evaluation | Engineering Leadership | 2 weeks
AI policy update | CISO / Security Lead | 30 days
Supply chain documentation | GRC / Compliance | 60 days
Continue/migrate decision | CTO / Executive Team | 90 days

Broader Industry Implications

This incident is a test case for the enforceability of open-weight AI model licenses. The Kimi K2.5 Modified MIT License represents a common approach in the open-weight ecosystem: permissive use with a commercial attribution requirement above a revenue threshold. If Cursor successfully argues that RL fine-tuning creates a non-derivative work, it would render virtually every open-weight model license’s attribution clause unenforceable. Conversely, if Moonshot AI enforces and prevails, it establishes precedent that derivative works must honor attribution requirements regardless of the degree of fine-tuning.

For organizations that consume AI models — whether through vendor tools like Cursor or through direct API access — this incident underscores the need for AI supply chain transparency as a first-class security requirement. Model provenance, licensing compliance, and vendor honesty about what processes your data are no longer edge cases. They are core vendor assessment criteria.

References

  • Kimi K2.5 Modified MIT License — github.com/MoonshotAI/Kimi-K2.5
  • Cursor Composer 2 blog post — cursor.com/blog
  • Cursor Kimi K2.5 documentation page — cursor.com/docs/models/kimi-k2-5
  • Leaked API model identifier — accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast
  • Cursor community forum — Kimi K2.5 model picker reports (February 9, 2026)

Prepared by Monachus Solutions | Advisory Date: March 20, 2026 | Severity: HIGH | Status: Active

For questions or updated guidance, contact your Monachus engagement lead.