When liability between passengers, platforms, and manufacturers becomes a battlefield
Autonomous Vehicle Accidents and the Law: A System with No Clear Driver
In the spring of 2025, a Robotaxi accident in San Francisco made headlines. A driverless vehicle operated by a leading autonomous-driving platform collided with a left-turning private car at an intersection. A passenger in the Robotaxi sustained minor injuries.
But when it came time to assign liability and settle insurance, the passenger ran into an unexpected wall.
- The manufacturer said: “We only provide the software, we’re not responsible for operations.”
- The platform operator insisted: “The vehicle was compliant and followed all regulations.”
- The insurance company declined coverage: “Responsibility is unclear—let the courts decide.”
This wasn’t an isolated case. As Robotaxi commercialization accelerates in major cities across Europe and the U.S., legal grey zones surrounding autonomous vehicle accidents are becoming increasingly common.
This article breaks down the issue across three critical dimensions:
1. Law and Tech Misalignment: The System Is in Control, but Where’s the Driver?
Traditional legal definitions of liability in traffic accidents rest on the presence and behavior of a human driver. In the Robotaxi context, the vehicle is fully autonomous: no steering wheel, no human intervention. Responsibility suddenly blurs.
The EU’s updated AI Liability Directive (March 2025) proposes:
“In cases involving autonomous decision-making systems, liability should be determined based on logged control data at the point of the incident.”
In theory, this sounds progressive. In practice, it’s deeply problematic:
- Most Robotaxi systems are closed-source, and users have no access to the full decision-making logs
- Operators often refuse to disclose system data, invoking “trade secrets” as a shield
- Courts often lack the technical expertise to interpret such logs, leading to long, expensive litigation
Here’s the irony: even though the passenger did nothing at all, the law cannot clearly say whether they bear any residual share of the risk.
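To make the fight over “logged control data” concrete, here is a minimal, purely hypothetical sketch in Python of what a single decision-log record might contain. No Robotaxi vendor publishes this schema; every field name below is invented for illustration, but it shows why interpreting such logs demands expertise most courts don’t have in-house.

```python
# Hypothetical sketch: no real Robotaxi vendor publishes this schema.
from dataclasses import dataclass
from enum import Enum


class PlanningState(Enum):
    CRUISE = "cruise"
    YIELD = "yield"
    EMERGENCY_BRAKE = "emergency_brake"


@dataclass(frozen=True)
class DecisionLogRecord:
    """One frame of the 'logged control data' the EU directive points to."""
    timestamp_utc: float           # seconds since epoch, sensor-synchronized
    planning_state: PlanningState  # what the driving policy intended to do
    detected_objects: int          # tracked obstacles in this frame
    traffic_light_estimate: str    # e.g. "red", "green", "unknown"
    estimate_confidence: float     # 0.0-1.0; low values are the contested part
    commanded_speed_mps: float     # speed the planner requested
    actual_speed_mps: float        # speed the vehicle actually reached


# A court reconstructing intent must reason over thousands of such frames,
# asking: did the system "see" the signal, and how confident was it?
frame = DecisionLogRecord(
    timestamp_utc=1747340000.05,
    planning_state=PlanningState.CRUISE,
    detected_objects=7,
    traffic_light_estimate="green",  # a possibly wrong estimate: the contested question
    estimate_confidence=0.41,
    commanded_speed_mps=11.2,
    actual_speed_mps=11.0,
)
```

Even in this toy form, assigning fault means deciding whether a 0.41-confidence “green” reflects a perception defect (the AI vendor), a sensor fault (the hardware supplier), or an operational failure to enforce safe confidence thresholds (the platform). Real logs are orders of magnitude denser, and proprietary besides.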
2. Hollowed-Out Passenger Rights: You’re in a Taxi, but No One Claims to Be the Driver
Are Robotaxi passengers the same as those in a regular taxi? Legally, this remains a hotly debated issue.
In a 2025 Robotaxi collision in Hamburg, Germany, a passenger suffered a broken leg after the vehicle rear-ended another car. They sued the operating platform—but the platform responded:
“We are not a transportation service provider. We merely match users with available autonomous vehicles. The driving is handled by software.”
This put the court in a tough spot:
- If the platform is treated like a transport operator, it would be contractually liable for passenger safety.
- But if it’s seen as just a tech facilitator, the passenger ends up without any contractual protection.
In the U.S., the NHTSA proposed a new rule in early 2025:
“If an L4+ Robotaxi platform handles dispatching, pricing, and customer management, it should bear the same legal responsibilities as a traditional taxi company.”
But as of June 2025, the rule has not passed. Most Robotaxi companies still float in the grey area between tech company and transportation provider.
3. A Three-Way Blame Game: Platforms, OEMs, and AI Vendors
Another grey zone lies in the blame-shifting among various parties:
- The platform says: We don’t make the vehicles.
- The manufacturer says: The vehicle was functioning as designed.
- The AI software vendor (e.g. Mobileye, Waymo): We only provide algorithms—we’re not responsible for how they’re used.
Take the May 2025 case in Paris: A Robotaxi—built by AutoX, dispatched by a local French platform—misread a red light at night and rear-ended another vehicle, causing a concussion to a backseat passenger.
The passenger sued the platform. The platform blamed AutoX. AutoX pointed fingers at the LiDAR vendor, citing faulty signal interpretation. In the end, the court ruled:
“All three parties shall pay damages jointly. Since the exact proportion of liability cannot be determined, each will cover an equal share of the agreed compensation.”
This “split-liability” verdict barely covered the passenger’s medical costs, and the case dragged on for five months. For ordinary users, the cost of legal ambiguity is clear: slow compensation, unclear rights, and eroded trust.
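For a sense of the arithmetic, here is a minimal sketch of equal-share apportionment under joint liability. The figures are invented; the actual Paris award amounts were not disclosed in the account above.

```python
# Invented figures: illustrates equal-share apportionment under joint liability.
def apportion_equally(total_damages: float, parties: list[str]) -> dict[str, float]:
    """Split damages equally when exact fault proportions cannot be determined."""
    share = total_damages / len(parties)
    return {party: round(share, 2) for party in parties}


awards = apportion_equally(30_000.00, ["platform", "manufacturer", "ai_vendor"])
print(awards)
# {'platform': 10000.0, 'manufacturer': 10000.0, 'ai_vendor': 10000.0}

# "Jointly" is the key word: if one defendant cannot pay, the passenger may
# still recover that share from the others, and the defendants then settle up
# among themselves. That inter-defendant settling is what drags cases out.
```

The point is the fallback rule, not the division: equal shares are what courts reach for when, as here, no party will open its logs to prove a different split.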
Conclusion: Without Clear Liability, Trust Can’t Be Engineered
Robotaxi’s legal grey zones point toward a deeper societal dilemma:
In a world without drivers, liability structures become the new foundation of user trust.
Many governments are experimenting with solutions—accident liability pools, mandatory data transparency, AI algorithm audits—but regulatory progress is slow, often stymied by competing commercial interests.
If Robotaxis are to truly replace traditional transport, it won’t be enough to have advanced sensors and smooth UX. A transparent, enforceable legal framework is just as critical as any line of code.
Otherwise, even the most futuristic mobility systems may stall, not because of hardware failure, but because of a failure of trust.
FAQ: Legal Responsibility in Autonomous Vehicle Accidents
Q1: If a Robotaxi crashes, who is legally responsible—the passenger, the platform, or the manufacturer?
A: In most cases, the passenger bears no fault, as they had no control over the vehicle. Whether the platform, vehicle manufacturer, or AI software vendor is liable, however, depends on the cause of the malfunction and on local law. In 2025, joint-liability rulings are becoming more common in U.S. and EU courts.
Q2: Can Robotaxi platforms be sued like traditional taxi companies?
A: That’s still a legal grey area. If the platform handles dispatching, pricing, and user services, some regulators—like the U.S. NHTSA—argue it should be treated like a traditional transportation provider. But most Robotaxi firms classify themselves as tech facilitators, distancing themselves from direct liability.
Q3: What happens if the Robotaxi’s algorithm causes the accident?
A: If AI misjudgment is involved, the liability may shift to the software provider (like Waymo, Mobileye, or Baidu Apollo). However, courts often require detailed logs to prove this, and many companies cite “proprietary technology” to withhold that data. This makes proving fault extremely challenging.
Q4: Is the passenger protected by insurance during a Robotaxi ride?
A: That depends on the local jurisdiction and the platform’s insurance policy. Some countries now require mandatory accident coverage for autonomous rides, but enforcement varies. In Europe, for instance, riders are typically covered under platform liability insurance, but in parts of the U.S., it’s still optional.
Q5: Can I access data logs or video footage if I’m injured in a Robotaxi?
A: Legally, you may request it—especially during litigation—but Robotaxi companies rarely release full logs voluntarily. Efforts are underway in the EU and California to mandate data transparency and independent audits, but full access remains a challenge in most regions.