Cursor's $2B Coding Empire Was Quietly Running on a Chinese Model. A Developer Found the Receipt.
On March 19, Cursor shipped Composer 2 — its most capable proprietary coding model to date. The company marketed it as a breakthrough: faster completions, better multi-file edits, a tenth of the price of Claude Opus 4.6. Elon Musk endorsed it. Developers rushed to try it.
Less than 24 hours later, a developer named Fynn found a model ID hidden in Cursor's API configuration: kimi-k2p5-rl-0317-s515-fast. That string decoded to "Kimi 2.5 plus reinforcement learning" — Kimi being an open-source model from Moonshot AI, a Chinese startup backed by Alibaba and HongShan.
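The decoding is just a matter of reading the ID's hyphen-separated segments. Here is a sketch of that reading in Python; the meaning assigned to each segment is the community's interpretation of the leaked string, not a documented Cursor naming scheme:

```python
# Illustrative only: segment meanings below are the community's reading
# of the leaked ID, not an official naming convention.
model_id = "kimi-k2p5-rl-0317-s515-fast"

segments = model_id.split("-")
reading = {
    "kimi": "Moonshot AI's Kimi model family",
    "k2p5": "K2.5 ('2 point 5')",
    "rl": "reinforcement learning fine-tune",
    "0317": "presumed date stamp (March 17)",
    "s515": "presumed internal run/step identifier",
    "fast": "latency-optimized serving variant",
}

for seg in segments:
    print(f"{seg:>5} -> {reading.get(seg, 'unknown')}")
```

The first two segments alone were enough to point at Kimi 2.5; the rest matched Cursor's own description of an RL fine-tune served in a speed-optimized configuration.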
The discovery set off one of the biggest transparency controversies in the AI coding tool space this year. Here's what happened, why it matters, and what it means for every developer who trusts their editor's AI.
The Discovery That Unraveled the Marketing
Cursor's blog post announcing Composer 2 said nothing about Moonshot AI, Kimi, or any open-source foundation. The messaging positioned it as Cursor's own creation — a proprietary model that could outperform Claude Opus 4.6 at a fraction of the cost.
But the model ID in the API told a different story. Within hours of Fynn's discovery, Yulun Du, Moonshot AI's head of pre-training, tested Composer 2's tokenizer and confirmed it was identical to Kimi K2.5's. The evidence was undeniable.
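A tokenizer's vocabulary and splitting rules act as a fingerprint: two models that tokenize every probe string identically almost certainly share a tokenizer, while a single disagreement proves they differ. Here is a toy sketch of that check in Python, using stand-in tokenizer functions rather than the real Composer 2 or Kimi K2.5 tokenizers:

```python
import re

# Stand-in tokenizers: NOT the real Composer 2 or Kimi K2.5 tokenizers.
# Both use the same naive word/punctuation split, so they should agree.
def tokenize_a(text: str) -> list[str]:
    return re.findall(r"\w+|[^\w\s]", text)

def tokenize_b(text: str) -> list[str]:
    return re.findall(r"\w+|[^\w\s]", text)

def same_tokenizer(tok1, tok2, probes: list[str]) -> bool:
    """Agreement on every probe suggests a shared tokenizer;
    a single disagreement proves the tokenizers differ."""
    return all(tok1(p) == tok2(p) for p in probes)

# Probe strings should mix code, punctuation, and non-ASCII text,
# where tokenizer differences usually show up first.
probes = ["def fib(n): return n", "多语言 test: mixed input!", "x=1;y=2"]
print(same_tokenizer(tokenize_a, tokenize_b, probes))  # True: same rules
```

The real check compared production tokenizers over real inputs, but the logic is the same: identical token sequences across diverse probes are strong evidence of a shared tokenizer.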
Elon Musk weighed in with a characteristically blunt assessment: "Yeah, it's Kimi 2.5."
What Cursor Actually Admitted
Cursor's vice president of developer education, Lee Robinson, acknowledged the open-source foundation: "Yep, Composer 2 started from an open-source base!" He argued that only about a quarter of the compute spent on the final model came from Kimi K2.5, with the remaining three-quarters from Cursor's own reinforcement learning training.
Co-founder Aman Sanger went further, calling it a "miss" not to mention the Kimi base in their blog post. "We'll fix that for the next model," he said.
The company framed the relationship as an "authorized commercial partnership" through Fireworks AI, which hosted the RL training and inference. Moonshot AI's official Kimi account on X corroborated this, congratulating Cursor on the integration.
The Licensing Problem Nobody Can Ignore
This isn't just about credit. Kimi K2.5's license contains a specific clause: any commercial product with more than 100 million monthly active users or generating more than $20 million in monthly revenue must prominently display "Kimi K2.5" in its user interface.
Cursor's annualized revenue stands at approximately $2 billion, or roughly $167 million per month: more than eight times that $20 million monthly threshold. At no point did Cursor display Kimi K2.5 branding in its interface.
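The arithmetic behind that multiple is simple; here is a quick sanity check using the approximate figures reported above (assumed round numbers, not audited financials):

```python
# Approximate figures from public reporting; not exact financials.
annual_revenue = 2_000_000_000          # ~$2B annualized revenue
monthly_revenue = annual_revenue / 12   # ~$166.7M per month
license_threshold = 20_000_000          # Kimi K2.5 attribution clause

multiple = monthly_revenue / license_threshold
print(f"~${monthly_revenue / 1e6:.0f}M/month, {multiple:.1f}x the threshold")
# -> ~$167M/month, 8.3x the threshold
```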
Whether this constitutes a license violation depends on the specifics of Cursor's commercial agreement with Fireworks AI. But the optics are damaging regardless. Developers who chose Cursor partly based on the "proprietary model" narrative now know that narrative was incomplete at best.
Why This Matters for Every Developer
The Cursor-Kimi incident exposes a structural problem in the AI coding tool market: you don't know what's under the hood.
When you pay for Cursor Pro at $20/month, you're trusting that the AI completions, the multi-file edits, and the code suggestions are coming from a model whose training data, security posture, and governance you can evaluate. If the model is actually built on an open-source foundation from a Chinese startup — even a legitimate, well-regarded one — that changes the risk calculus for certain use cases.
Consider these scenarios:
- Enterprise compliance: Companies with data residency requirements or restrictions on Chinese-origin AI may need to reassess their Cursor deployments
- Code security: If the base model was trained on different data than advertised, the security implications of code suggestions change
- Competitive intelligence: Developers building competing products may not want their code patterns flowing through a model they didn't choose
The U.S.-China AI Tension Makes This Worse
The timing couldn't be more fraught. Just days earlier, President Trump's administration blacklisted Anthropic as a "national security risk" for refusing to let Claude be used for surveillance. The Pentagon is actively debating which AI models can touch government contractor workflows.
In this climate, a $2 billion American coding tool quietly running on a Chinese open-source model — without disclosure — is a story that writes itself. Cursor's co-founder acknowledged this, saying the "fraught political climate around U.S.-China AI competition" was one reason they hesitated to disclose the Kimi base upfront.
That hesitation may have been understandable. But the failure to disclose made the eventual revelation far more damaging than transparent attribution would have been.
What Moonshot AI Gets Out of This
For Moonshot AI, this is actually a win. Their open-source Kimi K2.5 — released in January 2026 — was already gaining traction, but having it validated as the backbone of a $2 billion coding tool is the ultimate endorsement.
Kimi K2.5 was released as an open-source model designed specifically for coding tasks. It uses a mixture-of-experts architecture and was trained on what Moonshot describes as "the largest code-focused dataset in the Chinese AI ecosystem." The model competes directly with Claude and GPT-5 on coding benchmarks.
The partnership through Fireworks AI suggests Moonshot is pursuing a deliberate strategy: give away the base model, then monetize through commercial licensing and hosting partnerships. This is the same playbook Meta used with Llama — and it's working.
How Cursor Compares After the Revelation
The competitive landscape for AI code editors is brutal right now. Windsurf (formerly Codeium) offers similar capabilities. GitHub Copilot has the distribution advantage. Claude Code from Anthropic offers terminal-based AI coding with full transparency about which model you're using.
Cursor's advantage was always the perception that it had unique, proprietary AI capabilities worth the premium price. The Kimi revelation doesn't invalidate the tool's quality — the reinforcement learning Cursor applied on top of Kimi K2.5 clearly produced strong results. But it does weaken the "proprietary" narrative.
For developers evaluating code editors in 2026, the question is no longer just "which tool writes better code?" It's also "which tool is transparent about what's actually running?"
The Broader Lesson: Open Source Is Eating AI From the Inside
The deepest takeaway from this episode isn't about Cursor's PR failure. It's about the economics of AI.
Building foundation models is extraordinarily expensive. Even well-funded companies like Cursor — with $2 billion in annual revenue — find it more efficient to start from an open-source base and add proprietary fine-tuning than to train from scratch. This is happening across the industry:
- Over 90% of developers at Salesforce now use Cursor, driving double-digit improvements in cycle time
- Fireworks AI has built a business hosting and fine-tuning open-source models for commercial customers
- Meta's Llama models power an estimated 40% of commercial AI applications, often without attribution
The open-source-to-commercial pipeline isn't a bug. It's the new standard. The only question is whether companies will be transparent about it or get caught.
Key Takeaways
- ✓ Cursor's Composer 2 was built on Moonshot AI's Kimi K2.5, discovered via a hidden model ID in the API
- ✓ Cursor says 75% of compute was its own RL training, but the base model and tokenizer come from Kimi
- ✓ Kimi's license requires prominent attribution for products over $20M/month in revenue; Cursor is at ~$167M/month
- ✓ The partnership through Fireworks AI is authorized, but the lack of initial disclosure is the core controversy
- ✓ U.S.-China AI tensions amplify the fallout: undisclosed Chinese model dependencies are a political liability in 2026
- ✓ Open-source models are now the backbone of commercial AI; most major tools build on them without clear attribution
- ✓ Developers should demand transparency about which models power their AI code editors
Skila AI Editorial Team
The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.