AI World

So one day, you are driving to work when you are suddenly distracted by a dog running in the opposite direction.  You briefly turn your head to watch the dog, not realizing that your car is about to enter an intersection on a red light.  Before you have time to react, the deafening sound of a collision rings in your ears as your car is t-boned on the passenger side by cross traffic (which had the right of way on a green light).

This should be a clear-cut legal case, right?  The person who ran the red light is liable for the accident.  Ordinarily, that would be the correct conclusion.  But what happens if the car that wrongly entered the intersection was not being driven at all?  What if that car was self-driving?  Who is liable then?

Liability Question

As we enter a new age in which artificial intelligence (AI) governs more of our activities, the question of who is liable for what will only get murkier.  How this will unfold is not yet evident.  We can be certain that the lawyer representing the person sitting in the self-driving car (in our hypothetical example) would argue that the manufacturer of the vehicle was solely responsible, under a product liability claim.  The manufacturer, which would be wise to install a “manual override” button in all of its vehicles, would argue that the driver was responsible for failing to take control of the car before it entered the intersection.  The driver would argue that it was impossible for a human to take control in the split second between the AI’s failure and the car entering the intersection.

What about further in the future, when artificial intelligence is so advanced that it takes on humanlike decision-making capabilities?  Those self-driving cars may not even have a manual override.  Will there be laws that hold the AI itself responsible for its actions?  If so, how will those laws be enforced by humans against machines that are potentially more powerful than the lawmakers?

Who Lives?

Even today, with only basic AI deployed, major ethical issues are being confronted.  There will soon be a day when self-driving cars have to decide who lives and who dies in a potential accident.  Today, if someone jumps in front of a moving car, the driver will usually react by swerving the vehicle out of its current path.  But that instantaneous move may put other bystanders at risk (bystanders the human driver may not even have seen).

Artificial intelligence will be able to see those bystanders.  The AI can also react fast enough that it will have to choose between plowing into the person who jumped in front of the car and swerving into a group of bystanders to save the jumper’s life.  In other words, artificial intelligence will soon be making life-or-death decisions.  With this type of computer programming, it’s no longer just a video game: there are real-life consequences.
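
To make that concrete, here is a deliberately toy sketch, in Python, of the kind of choice such software might have to encode.  Everything in it is hypothetical: the maneuver names, the risk numbers, and the idea that the car simply minimizes expected harm are illustrative assumptions, not how any real autonomous-driving system works.

```python
# A deliberately toy sketch of the kind of choice an autonomous
# vehicle's software might have to encode.  Every name and number
# below is hypothetical; real systems are vastly more complex.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    people_at_risk: int           # how many people this path endangers
    collision_probability: float  # estimated chance of impact, 0.0 to 1.0


def expected_harm(m: Maneuver) -> float:
    """Expected number of people harmed if this maneuver is chosen."""
    return m.people_at_risk * m.collision_probability


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the path with the lowest expected harm.

    This single rule, written by a programmer long before any
    accident, is what decides who gets put at risk.
    """
    return min(options, key=expected_harm)


# The scenario from the text: stay on course toward the jumper,
# or swerve into a group of bystanders the sensors can see.
options = [
    Maneuver("stay in lane (hit the jumper)", 1, 0.9),
    Maneuver("swerve (hit the bystanders)", 3, 0.6),
]

print(choose_maneuver(options).name)  # -> stay in lane (hit the jumper)
```

Even in this toy form, the uncomfortable point is visible: a human wrote the rule that decides who is put at risk, long before the accident ever happened.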

Economic Disruptions

In the short term, policymakers are focusing on the economic effects of mass AI use.  When self-driving vehicles become commercially safe, the livelihoods of 3.5 million truckers will be at risk.  What happens when AI makes self-driving trucks a reality and all those drivers find themselves unemployed?  As automation eliminates more jobs, finding ways for displaced workers to generate income becomes a pressing issue.  This is particularly important from a legal perspective, as high unemployment correlates with more crime and societal breakdown.  The use of artificial intelligence is already a reality, but the problems AI will generate are only beginning.
