A Scenario Without an Answer
Vienna, 2027. A Tesla autonomous taxi hits a cyclist at an intersection. The person dies. Police arrive at the scene and face a question for which there is currently no answer: whom to arrest? The cabin is empty. There is no driver. There is only an algorithm that decided not to brake.
This is not science fiction. It is a mathematical inevitability of the coming decade.
According to European Commission data, by 2030, about 12 million autonomous vehicles will be circulating on EU roads. In the US, this figure could reach 20 million by 2035. Each of them is a potential source of legal collapse.
Because our legal system is built on one simple idea: a human sits behind the wheel, makes decisions, and bears responsibility for them. But what happens when there is no one behind the wheel?
Old System, New Problem
Modern law is based on the concept of guilt. To punish someone, three things must be proven:
- The action was committed
- The action led to damage
- The person acted with intent or negligence
With driverless systems, this logic breaks down at the third point. An algorithm cannot be negligent. It cannot act with intent. It simply executes code.
Legal practice in Austria, Germany, and most European countries requires establishing the subject of the offense – a natural or legal person. But who is this subject in the case of autonomous transport?
The car manufacturer? The algorithm developer? The vehicle owner? The company that trained the neural network? Or perhaps the engineer who last updated the software?
In 2018, an Uber self-driving test car hit a pedestrian in Arizona. The woman died. The court found the safety operator guilty: she had been sitting behind the wheel precisely to intervene in an emergency, and she failed to do so. But what happens when there is no operator at all?
Three Liability Allocation Models
Lawyers around the world are proposing different solutions to the problem. Three main models can be distinguished.
Model 1: Manufacturer Liability
The most obvious option is to hold the car manufacturer responsible for all actions of their product. The logic is simple: if a company released a system onto the road that makes decisions instead of a human, it must bear full responsibility for those decisions.
This model already partially works in the EU through the Product Liability Directive. If your refrigerator explodes and causes a death, the manufacturer is liable. Why should it be different with a car?
The problem is scale. Automakers argue that if they must answer for every accident, autonomous transport will never reach the market. Potential lawsuits could run into billions of euros, and no insurance company will agree to cover such risks.
Furthermore, the manufacturer does not control everything. Algorithms are trained on data collected by other companies. Software is constantly updated. Sensors come from third-party suppliers. Where does one party's responsibility end and another's begin?
Model 2: Owner Liability
An alternative approach is to hold the vehicle owner responsible, as it currently works with regular cars. You bought the car – you are responsible for what it does on the road.
Germany is partially moving in this direction. In 2021, the Bundestag passed a law allowing Level 4 autonomous cars (full autonomy under certain conditions) on public roads. The owner is required to have insurance that covers the actions of the autonomous system.
It sounds reasonable until you start thinking about the details. The owner did not program the algorithm. The owner did not decide to turn left or right. The owner might not have even been in the car at the moment of the incident.
Moreover: if an autonomous taxi belongs to a company like Uber and the only person inside is a passenger who simply ordered a ride, who is at fault? The company? The passenger? Both?
Model 3: Distributed Liability
The third option is to admit that responsibility must be distributed among all participants in the chain: the car manufacturer, the software developer, the owner, the insurance company, and possibly the state that certified the system.
This model reflects reality better but creates legal chaos. Every case turns into a multi-year proceeding with dozens of defendants. Who is 10% at fault, and who is 40%? How do you even calculate that?
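Even when percentages are somehow agreed, the mechanics are simple only on paper. A minimal sketch, assuming purely hypothetical fault shares for the parties named above:

```python
# Hypothetical apportionment of damages across a liability chain.
# The shares are illustrative assumptions, not legal findings.
damages_eur = 2_000_000

fault_shares = {
    "vehicle manufacturer": 0.40,
    "software developer": 0.30,
    "fleet owner": 0.20,
    "sensor supplier": 0.10,
}

assert abs(sum(fault_shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"

for party, share in fault_shares.items():
    print(f"{party}: {share:.0%} -> {damages_eur * share:,.0f} EUR")
```

The arithmetic is trivial; the fight is over the numbers that go into `fault_shares`, and that fight is exactly the multi-year proceeding described above.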
The Black Box Problem
There is another complication that is rarely discussed: modern neural networks are black boxes. Even their developers often cannot explain why the system made a specific decision.
Imagine a court hearing. The prosecutor asks: "Why did your car not brake?" The engineer replies: "The neural network analyzed 47,000 parameters in 0.3 seconds and decided that braking would increase the probability of a collision with another object." Prosecutor: "With which object?" Engineer: "We do not know. The system does not record intermediate calculations."
This is not hypothetical. Most deep neural networks work exactly like this. They produce a result but do not explain the logic. In scientific literature, this is called the AI interpretability problem.
The European Union is trying to solve this problem through regulation. In 2024, the AI Act, the world's first comprehensive law on artificial intelligence, entered into force. One of its requirements: high-risk systems (including autonomous transport) must be "transparent and explainable".
It sounds good on paper. In practice, it is unclear how to achieve this technically: building fully interpretable neural networks remains an open research problem.
The Insurance Question
Insurance companies are nervous. The traditional auto insurance model is based on assessing driver risk: age, experience, accident history. But if there is no driver, how do you assess risk?
Swiss Re, one of the world's largest reinsurers, published a report in 2023 on the risks of autonomous transport. The main conclusion: existing models for calculating insurance premiums do not apply to driverless systems.
New approaches are being proposed. For example, insuring not the owner but the specific car model and software version. If a 2028 Toyota Camry running software version 3.2.4 demonstrates a statistically low accident rate, the insurance is cheaper; if the rate is high, it is more expensive.
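As a toy illustration of that pricing idea, here is a minimal sketch; the models, versions, and rates are invented for the example:

```python
# Toy premium calculation keyed to (model, software version).
# All figures are invented for illustration.
BASE_PREMIUM_EUR = 900.0
BASELINE_CLAIMS_PER_MILLION_KM = 0.50  # assumed fleet-wide reference rate

# Observed claims per million km for each (model, software version) pair.
claims_rate = {
    ("Toyota Camry 2028", "3.2.4"): 0.35,
    ("Toyota Camry 2028", "3.3.0"): 0.55,
}

def annual_premium(model: str, sw_version: str) -> float:
    """Scale the base premium by the version's relative claims rate."""
    rate = claims_rate[(model, sw_version)]
    return BASE_PREMIUM_EUR * (rate / BASELINE_CLAIMS_PER_MILLION_KM)

print(round(annual_premium("Toyota Camry 2028", "3.2.4"), 2))  # 630.0 (cheaper)
print(round(annual_premium("Toyota Camry 2028", "3.3.0"), 2))  # 990.0 (dearer)
```

Note that under this scheme every software update effectively creates a new insured object, which feeds directly into the update problem discussed below.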
But this creates a vicious circle. To gather statistics, thousands of cars must be released onto the roads. To release them onto the roads, insurance is needed. To get insurance, statistics are needed.
Some experts propose a radical option: a state insurance fund for autonomous transport. All manufacturers pay contributions, from which losses are covered. Effectively, this socializes the risks of technological progress.
The Ethical Algorithm
Now for the main question. Suppose the legal problem is resolved: the manufacturer bears responsibility, insurance exists, transparency is ensured. One thing remains: how should the algorithm make decisions in a critical situation?
A classic thought experiment: an autonomous car is moving along a narrow road. A child runs out onto the roadway. If the car continues moving, it will hit the child. If it swerves sharply to the right, it will crash into a wall and kill the passenger. What should the system choose?
In philosophy, this is called the "trolley problem"; in engineering, "ethical AI programming".
MIT conducted a global survey called the Moral Machine, asking participants to resolve dozens of such dilemmas. The result: there is no universal consensus. In some cultures, the young are saved at the expense of the elderly; in others, the reverse. In some, a pedestrian's life outweighs a passenger's; in others, the opposite.
Who decides what ethics to hardcode into the algorithm? The manufacturer? The state? An international commission?
And here is what is truly important: this choice has already been made. Algorithms are already working. They are already programmed with certain priorities. But no one publicly discloses which ones.
Tesla does not publish the ethical matrix of its Autopilot. Neither does Waymo. It is a commercial secret. You can buy a car without knowing whose life it will prefer in a critical situation: yours or someone else's.
Case Study: Legislation in Different Jurisdictions
Germany has advanced further than anyone else. In 2017, its Ethics Commission on Automated and Connected Driving published 20 principles mandatory for all automated driving systems on German roads.
Key provisions:
- Human life always has priority over animals and material property
- The system must not distinguish people by age, gender, or other personal characteristics
- In an unavoidable accident, a random distribution of risks is admissible
The last point is particularly notable. Effectively, the German rules allow the algorithm to choose at random: "I do not know whom to save, so I choose by chance." Morally, this can be justified. Legally, questions remain.
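What that principle could look like in code, as a minimal sketch: assume a planner that has already scored candidate maneuvers by expected harm (the maneuvers and scores here are invented):

```python
import random

# Candidate maneuvers scored by expected harm (invented values).
# Per the German principles, no score may depend on the age, gender,
# or other personal characteristics of the people involved.
maneuvers = {
    "continue straight": 0.82,
    "swerve right": 0.82,
    "brake hard": 0.97,
}

def choose_maneuver(options: dict[str, float]) -> str:
    """Pick the lowest-harm maneuver; break exact ties uniformly at random."""
    best = min(options.values())
    tied = [name for name, harm in options.items() if harm == best]
    return random.choice(tied)  # the permitted "random distribution of risks"

print(choose_maneuver(maneuvers))  # "continue straight" or "swerve right"
```

The randomness is confined to genuine ties; everything contentious is hidden inside the harm scores, which is precisely where the undisclosed ethical matrix lives.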
Japan chose a different approach. Instead of rigid ethical requirements for algorithms, the government focuses on technical reliability. The idea is simple: if the system works flawlessly, ethical dilemmas rarely arise in practice. Perhaps naive, but the statistics so far bear it out: Japanese autonomous taxis show some of the lowest accident rates in the world.
In the US, the approach is fragmented. Each state adopts its own norms. California requires the mandatory presence of a safety operator. Arizona permits full autonomy. Texas allows testing without special permits. The result is a legal mosaic.
The Update Problem
Another level of complexity is updates. Autonomous car software changes constantly. Tesla releases updates every few weeks. Each update changes system behavior.
Imagine: a car running software version 2.8 gets into an accident. The investigation shows that version 2.7 handled this scenario correctly, but in version 2.8 the algorithm was changed, and that change led to the accident.
Who is at fault? The manufacturer who released the update? The owner who installed it? Or the owner who did not install it, even though the manufacturer recommended it?
In aviation, this problem is solved through strict certification: every change to aircraft software undergoes a lengthy review. For the automotive industry, such an approach is unrealistic; it evolves too quickly.
Figures and Forecasts
Let us return to the math. According to RAND Corporation estimates, demonstrating the safety of autonomous cars with statistical significance would require about 17 billion kilometers of driving without serious incidents. That is more than the entire accumulated mileage of all autonomous systems over the last decade.
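The order of magnitude follows from elementary failure statistics. A minimal version of the argument, assuming fatalities follow a Poisson process (the benchmark rate below is an illustrative round number, not RAND's exact input): to demonstrate with confidence $C$ that the fatality rate is below $\lambda$, a fleet must accumulate $n$ fatality-free kilometers such that

$$
P(\text{0 fatalities in } n \text{ km}) = e^{-\lambda n} \le 1 - C
\quad\Longrightarrow\quad
n \ge \frac{-\ln(1 - C)}{\lambda}.
$$

With a human benchmark of roughly one fatality per $10^8$ km and $C = 0.95$, merely matching human performance already requires $n \ge -\ln(0.05) \times 10^8 \approx 3 \times 10^8$ fatality-free kilometers; demonstrating a relative improvement over humans, rather than matching a fixed bound, inflates the requirement by orders of magnitude, into the billions.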
In other words: the technology is hitting the roads without a sufficient volume of data on its real-world safety. This is a conscious risk, justified by the argument that even imperfect driverless systems are likely safer than the average human driver.
In the EU, about 25,000 people die in road accidents annually. Approximately 90% of accidents involve human error: speeding, drunk driving, distraction. In theory, autonomous cars could reduce this figure by 80–90%.
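Taking those figures at face value, the implied stakes are easy to compute (using an illustrative midpoint, and applying the reduction to the human-error share, not a forecast):

$$
25{,}000 \times 0.9 \times 0.85 \approx 19{,}000 \text{ lives per year in the EU alone.}
$$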
But "in theory" is the key phrase. Practice shows a more complicated dynamic: autonomous systems cope excellently with predictable scenarios but handle rare edge cases poorly. And it is precisely those edge cases that often lead to severe consequences.
Prospects for a Solution
The International Organization for Standardization (ISO) has published ISO 21448 (Safety of the Intended Functionality, or SOTIF), which sets out safety requirements for autonomous systems. But the standard defines engineering and validation procedures, not liability.
The UN, through the World Forum for Harmonization of Vehicle Regulations, is trying to create unified international norms. The process is slow: reaching consensus among dozens of countries with different legal systems takes years.
Meanwhile, the technology does not wait. China plans to launch fully autonomous taxis in 20 cities by 2026. Tesla promises a "robotaxi" without a steering wheel or pedals by 2026–2027. Every month, new startups appear promising a revolution in transport.
The reality is this: the legal system will adapt after the fact. First, high-profile cases will appear. Then court trials. Then, years later, precedents. Only then will new laws be written.
This is inefficient. This is not optimal. But this is exactly how the law works: reactively, not proactively.
What Comes Next
By 2035, autonomous cars will become commonplace in most developed countries. We will get used to cars without drivers. We will adapt to new risks just as we adapted to all previous technological changes.
But the question of responsibility will not disappear. It will only sharpen as cars are joined by delivery drones, autonomous trucks, and driverless buses.
With high probability, a hybrid model will form: manufacturers bear the primary responsibility, owners are required to have insurance, and the state creates a compensation fund for extreme cases. Plus mandatory algorithm certification and regular update audits.
Ideal? No. Workable? Probably.
The main conclusion is simple: technology evolves faster than society manages to comprehend its consequences. Autonomous cars are just the beginning. Next come autonomous medical systems making treatment decisions, hiring algorithms shaping the careers of millions, AI judges passing sentences.
And in each such case, we will again ask the same question: who answers when the decision is made by a machine?
Here are the data. Here are the trends. Here is what follows.
And you thought the future would be simpler?