
Elon Musk vs. OpenAI Trial: 5 Most Damaging Moments from Musk's First Day on the Stand


MindStudio Team

Five Hours on the Stand, and Elon Musk Came Out Worse

Elon Musk took the stand in his lawsuit against OpenAI this week, and by the time he stepped down, a reporter covering the trial had typed a sentence she probably didn’t expect to write: “I have never been more sympathetic to Sam Altman in my life.”

That line came from The Verge’s real-time trial page, which was posting updates roughly every 15 minutes from inside the courtroom. Over five hours of testimony, the reporter had watched Musk refuse to answer yes-or-no questions with yes or no, argue with defense attorney William Savit, and occasionally forget things he’d testified to earlier the same morning. At least one juror was seen rubbing her head. Others exchanged glances during testy exchanges.

You don’t need to be a legal analyst to understand what that means. When jurors are visibly reacting to a witness’s demeanor, that witness is not having a good day.

Here are the five most damaging moments from Musk’s first day on the stand — drawn from what’s been reported so far — and why each one matters to the broader case.


1. He Refused to Answer Simple Yes-or-No Questions

This is the one that set the tone for everything else.


Savit, the attorney representing OpenAI, repeatedly asked Musk yes-or-no questions. Musk repeatedly declined to answer them with a yes or a no. According to The Verge’s reporter, this happened not once or twice but consistently across hours of testimony.

In a courtroom, this is a significant problem. Juries are not legal experts. They read behavior. When a witness refuses to give a direct answer to a direct question, the natural inference is that the direct answer would be damaging. Whether or not that’s true, the optics are bad — and they compound over five hours.

The pattern also handed Savit a structural advantage. Every time Musk deflected, Savit could return to the question, and the jury watched the evasion repeat. That’s not a good loop to be stuck in.


2. He Contradicted His Own Morning Testimony

By the afternoon session, Musk had reportedly forgotten things he’d testified to earlier that same day.

This matters for a specific reason: it’s not just about memory. Inconsistencies within a single day of testimony are exactly the kind of material opposing counsel uses to undermine credibility on everything else. If Musk can’t keep his account straight across a few hours, why should the jury trust his account of events from years ago — the founding of OpenAI, the alleged promises made, the alleged betrayal?

The lawsuit hinges on Musk’s claim that Sam Altman and Greg Brockman tricked him into donating money to OpenAI under false pretenses, only to later pivot the company toward profit. That’s a claim that requires the jury to believe Musk’s version of events over Altman’s. Contradicting yourself within a single day of testimony is not the way to build that credibility.


3. He Scolded the Defense Attorney — Repeatedly

Musk didn’t just argue with Savit. He scolded him.

According to The Verge’s account, Musk’s exchanges with Savit turned testy on multiple occasions. Jurors were seen glancing at each other during these moments; one was rubbing her head.

Scolding opposing counsel is almost never a good look for a witness. Attorneys are trained to absorb hostility and redirect it. Witnesses who lose their composure in response to routine cross-examination questions signal to the jury that they’re rattled — or that they have something to protect. Either reading is bad for Musk’s case.

There’s also a specific dynamic at play here. Musk is one of the most recognizable people on the planet. Jurors came into that courtroom with existing impressions of him. Some of those impressions are probably positive. But watching someone famous behave badly in a formal setting tends to erode goodwill faster than it builds it. The Verge reporter’s “never been more sympathetic to Sam Altman” line is a data point on exactly that dynamic.


4. OpenAI’s Defense Framing Got Airtime

While Musk was on the stand, OpenAI’s defense was also getting its arguments into the record — and those arguments are pointed.


OpenAI’s position is that this lawsuit isn’t really about OpenAI’s mission. It’s a bid to boost Musk’s own competing companies: SpaceX, xAI, and X. The argument is that Musk, having launched Grok as a direct competitor to ChatGPT, has a financial interest in damaging OpenAI’s reputation and operations. The lawsuit, in this framing, is a competitive weapon dressed up as a principled stand.

That framing is now in front of the jury. And every hour Musk spent on the stand behaving erratically gave that framing more room to breathe. It’s harder to argue you’re acting out of principle when you’re also visibly angry, evasive, and combative. The two things reinforce each other in ways that are hard to walk back.

For context on what Musk is actually demanding: he wants Altman and Greg Brockman removed from OpenAI, and he wants the company to stop operating as a public benefit corporation. Those are significant asks — and the more his testimony undermines his credibility, the less likely a court is to grant them. (The sister piece to this one covers those demands and OpenAI’s counter-framing in more detail.)


5. The Jury Watched All of It

This one isn’t a single moment. It’s the cumulative weight of the other four.

Trials are not won on legal arguments alone. They’re won on credibility, narrative, and the impressions a jury forms over hours of watching real people under pressure. Musk had five hours on the stand. In that time, he refused direct questions, contradicted himself, and scolded an attorney while jurors exchanged glances.

The Verge reporter’s line — “I have never been more sympathetic to Sam Altman in my life” — is significant not because she’s a legal authority, but because she’s a trained observer who covers technology and has presumably watched a lot of tech executives navigate difficult situations. If a reporter who covers this space came out of five hours of Musk testimony feeling sympathy for his opponent, that’s a signal worth taking seriously.

Jury members don’t have the reporter’s context. But they have eyes. And they were watching.


What This Means for the Case Going Forward

The trial is not over. There’s more testimony to come, and Musk’s legal team will have opportunities to rehabilitate his credibility. Courts have seen worse first days. Cases have been won after rocky starts.

But the first day on the stand matters. It sets the frame through which jurors interpret everything that follows. If Musk’s team can’t reset that frame — can’t give the jury a reason to trust Musk’s account of OpenAI’s founding over OpenAI’s account — the demands at the center of this case become very hard to win.

Those demands are not small. Removing Altman and Brockman from OpenAI would be a seismic outcome. Forcing OpenAI to stop operating as a public benefit corporation would reshape how the company is governed and how it pursues its mission. These are the kinds of remedies courts grant when they’re convinced a plaintiff has been genuinely wronged — not when they’re watching a witness argue with attorneys and forget his own testimony.


The AI industry has been watching this case closely, partly because its outcome could affect how AI companies structure themselves and who controls them. OpenAI’s governance has been under scrutiny since the board drama of late 2023, and the question of whether its public benefit structure is compatible with its commercial ambitions remains genuinely unresolved. For anyone building on top of OpenAI’s models — and there are a lot of builders doing exactly that — the question of who runs OpenAI, and under what mandate, is not abstract.

The competitive dynamics are also real. Anthropic, OpenAI, and Google are each making different strategic bets on AI agents, and the outcome of this trial could affect how aggressively OpenAI pursues commercial partnerships and military contracts. That’s a meaningful variable for anyone building on the OpenAI stack.

For builders evaluating which models to build on, the governance question matters alongside the capability question. GPT-5.5 vs Claude Opus 4.7 is a useful frame for evaluating raw performance, but who controls these companies — and under what legal and ethical constraints — is part of the same decision. Platforms like MindStudio that support 200+ models give builders a hedge here: if one provider’s governance situation becomes untenable, you’re not locked in.

The trial is also a reminder of how much the AI industry’s current structure depends on relationships and informal agreements made years ago, before anyone fully understood what these companies would become. Musk’s core claim is that he was promised something — a nonprofit committed to open, beneficial AI — and that the promise was broken. Whether or not that’s legally actionable, it’s a story a lot of people in the industry find plausible. The question is whether Musk can tell it convincingly.

Based on day one, he’s going to need a better day two.

There’s a broader pattern here that’s worth naming. The people building the most consequential AI systems are also the people most likely to end up in courtrooms arguing about what they promised each other in 2015. The legal infrastructure for AI governance is still being built in real time — and cases like this one are part of how it gets built. Tools like Remy pose a related infrastructure question: when a full-stack application is compiled from an annotated spec, the spec is the source of truth and the code is derived output. The analogy to AI governance isn’t perfect, but the underlying problem is similar — who controls the source of truth, and what happens when the derived outputs diverge from what was originally intended?

That question is going to be in courtrooms for a long time.

The Verge’s real-time updates from the trial are worth bookmarking if you want to follow along. The next few days of testimony will determine whether Musk’s first day was a rough start or a preview of how the whole case goes. Given what happened on day one, OpenAI’s legal team has to feel good about where things stand.

Sam Altman, for his part, hasn’t said much publicly. He doesn’t need to. The Verge reporter said it for him.

Presented by MindStudio
