The Reasonable Robot: Artificial Intelligence and the Law, a new book by Ryan Abbott, MD, JD, MTOM, PhD, takes on a topic that is surely coming to a courtroom and legislative session near you. Though it’s written for lawyers and policy makers, the book is accessible to the general reader. It uses relatable examples from real life to explore how Artificial Intelligence (AI) is changing society, business and government to the point where the law will have to adapt to accommodate it.
Abbott approaches the issue from a variety of perspectives, including liability and torts, criminal law, intellectual property and taxation. In each area, he argues that AI is not on a “level playing field” with human beings in the eyes of the law. This raises the question, though: should AI be on the same level as human beings? Should “an AI,” as he calls the hypothetical, human-like machine-based intelligence, be held to a different legal standard than actual humans?
It’s a good question, and even if you think you know the answer, it’s an issue that is certain to get a lot of attention in the coming years. For example, if an AI, such as one that powers a self-driving car, causes an accident, should the courts treat it (the AI, not the car) as if it were a human driver? That way, the AI’s “conduct” would be judged by the “reasonable man” standard rather than the AI being treated merely as a defective product. The latter standard would result in much higher and less flexible financial judgments against its maker.
Abbott also examines how tax law inadvertently creates incentives for automation, which drives job loss. He highlights AI’s role as a criminal actor and its potential to create new inventions and artistic works. As a patent attorney, Abbott is actually in the process of trying to have an AI, rather than a human, recognized as the inventor on a patent application in the UK.
It’s a readable book, though it will likely appeal mainly to people with a strong interest in the topic. However, Abbott, who is a Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA, has done the legal and policy fields a significant service with this book. He methodically lays out the contours of the complex legal and philosophical arguments that will go into addressing the underlying issues.
It can be a slightly frustrating subject, however. For one thing, the AI entity he envisions is not quite here yet. Most uses of AI, impressive as they may be, involve deployments of extremely deep but narrow cognitive technologies, e.g., the analysis of MRI scans to detect tumors. Such a computer program (and that’s really all AI is) can be an outstanding diagnostician, but it doesn’t know the difference between a loaf of bread and the Mona Lisa.
Also, certain issues are moot, even if they’re interesting to discuss. AI is used to commit crimes, for instance. That is already a big problem and will only grow more serious over time. However, the perpetrators will rarely be held to account. They hide behind anonymous machines and foreign jurisdictions that lack extradition treaties. The resulting legal arguments about AI’s criminal conduct are largely academic.
As Abbott reveals, though, a discussion of AI’s legal rights quickly takes the reader into some grand themes: What is the nature of the law itself? What is intelligence? If an artificial “person” in the form of a corporation has legal rights and responsibilities, then why can’t an AI have such rights and responsibilities?
Your opinion of the book will probably depend on whether you view AI as a product owned by a company or as an independent life form. If AI is a product, then its owners are responsible for whatever it does, even if they can’t control it. In that sense, AI is no different from the wild ox discussed in Talmudic law 2,000 years ago: Who is responsible if it goes on a rampage? These are not new issues. AI revivifies them.