
Figure AI Lawsuit 2025
In 2025, the Figure AI lawsuit is getting a lot of attention in the tech and robotics world. A former employee, Robert Gruendel, says he was fired not because he did bad work, but because he spoke out about serious safety problems. According to Gruendel, the humanoid robots built by Figure AI are so powerful that, in testing, they could “fracture a human skull.” His version of events raises big red flags about how the startup tests and controls its machines.
Gruendel worked at Figure AI as the head of product safety. He claims he warned executives many times about dangerous behaviour in the robots. He says he documented his fears clearly—and made the most direct and serious safety complaints just days before he was let go. According to him, his warnings were ignored or downplayed by management, who allegedly prioritized speed over safety.
In the Figure AI lawsuit, Gruendel says there was a real incident in testing. He describes a robot leaving a deep cut in a steel refrigerator door. That detail is chilling: it suggests the robot’s strength is not just theoretical. He says that in tests, these machines can push or hit much harder than what is safe for a human body, and this could cause very serious injuries.
He also says that when he tried to put a safety plan in place, with clear rules and limits, executives resisted it. They allegedly watered down his proposals or delayed implementing them. From Gruendel’s point of view, the company was moving too fast, chasing growth and valuation while ignoring serious human risk.
The Figure AI lawsuit demands more than a public apology. Gruendel is asking for economic damages, compensatory and punitive damages, and he wants a jury trial. He says his firing was retaliation—punishment for being honest about a threat that other leaders apparently didn’t take seriously. This doesn’t just touch on a personal dispute. It raises critical questions: when we build powerful robots, who is responsible for safety? And what happens when someone warns that things may go wrong?
Figure AI, for its part, strongly denies the worst of Gruendel’s allegations. In its response, the company argues that his termination was due to performance issues, not his safety concerns. It says that testing of its robots is careful and strict, and that the lawsuit misrepresents how the company operates and unfairly makes it look careless.
Still, the Figure AI lawsuit has drawn serious attention. This is not just another office dispute. It is a problem that touches ethics, engineering, and safety in a fast-growing field. Robotics companies like Figure AI get support from big investors. These companies often raise billions of dollars and face pressure to grow fast. But the stronger a robot is, the more dangerous it could be if something goes wrong.
Gruendel’s case could have effects beyond just this one company. If he wins or reaches a settlement, other robotics companies may have to follow new rules. They could be asked to improve safety, add stronger protections, and pay more attention to engineers who warn about problems. It could also lead to stronger whistle-blower protections in fields where hardware risk is very real.
There is another consequence too: public trust. The Figure AI lawsuit risks shaking faith in robotics startups. When people hear that a robot can seriously damage human bones, they may worry about having these machines in everyday life. That could slow adoption of helpful robots in homes, hospitals, or factories. It could also spook investors who worry not just about profit, but about liability and reputation.
Regulators may also take notice. Right now, robotics safety is still a patchwork. There are rules, but many parts of the industry move faster than laws. Cases like this Figure AI lawsuit could push lawmakers to create stricter guidelines for testing powerful robots, especially those meant to interact in close physical proximity with humans. It might demand stronger certification procedures or mandatory safety testing before deployment.
From Gruendel’s side, he argues he acted out of responsibility. He claims he wanted to prevent disaster—not to block innovation. In his view, raising safety issues is part of building smart machines, not a drag on progress. A robot that works properly and safely is more useful than one that is impressive but inherently risky.
On a personal level, the lawsuit is also about his reputation. Gruendel took a big risk by speaking out. He lost his job, but he believes people need to know about the dangers. He says he hopes the Figure AI lawsuit will show the world that safety matters more than market hype. For him, this is not just about money; it is about being responsible.
People who are watching the case closely think it is an important moment. The Figure AI lawsuit could change the way we think about humans working with robots. If powerful humanoid robots become more common, the standards we set now matter. It’s not enough to build something that can walk or carry objects. We must also ask: how hard can it push? What happens if someone is in its way?
The future of robotics may depend on this moment. If Figure AI takes Gruendel’s warnings seriously, it can become a leader in both innovation and responsibility. If it doesn’t, it risks its reputation, legal costs, and possibly worse. Other companies will be watching this Figure AI lawsuit closely.
In the end, the Figure AI lawsuit is a warning. It shows that with great power comes great responsibility. As artificial intelligence and robotics come together, we are building machines that are very strong and fast. But being strong is not enough. Safety must always come first. Gruendel’s case is pushing that message hard. And whether he wins in court or not, he is making people think carefully about the cost of ignoring risk in the name of progress.
For more: https://aimagazine.com/
