Artificial Intelligence is disrupting the law as we know it

Wednesday, 19 April 2017


Science fiction movies used to feel very futuristic, leaving us detached from the remarkable innovations we saw from scene to scene. However, what once seemed impossible is now a reality, and we're slowly but surely learning to live with disruptive technologies. This quote captures it perfectly:


Figure 1: What does AI mean for legislation? The EU is moving fast.

"humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence ("AI") seem poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched"


This could well be the introductory, Star Wars-style crawl of a science fiction film, shown to set the scene for the audience. The same goes for this quote:

"within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species"

Well, those quotes are not from any movie but from an official draft report of the Committee on Legal Affairs of the European Parliament. With the rapid changes taking place in technology and society, many of us complain that the European Commission is slow to react, with the consequence that by the time new regulations or laws come into force, the world has moved on and adaptations are already needed. How long it took to get the GDPR in place is a good example of this: the first proposal was released in 2012, a full six years before the regulation takes effect in May 2018.

However, this is not the case when it comes to legislation around Artificial Intelligence. The European Commission is ahead of the curve in thinking about how AI and the resulting autonomous robots might impact our society. And according to the report, that impact will be anything but small. For these reasons, the European Parliament states that our laws need to be adapted to deal with those changes as soon as possible.

Figure 2: The European Commission is taking the impact of AI seriously.

However, before we break this down, which definitions need to be pinned down first? What even is a "smart robot"? According to the Committee on Legal Affairs, a smart robot has the following characteristics (sketched in code after the list):

  • Acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data.
  • Is self-learning (optional criterion).
  • Has a physical support.
  • Adapts its behaviours and actions to its environment.
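
As a quick data-structure sketch of this definition (purely illustrative: the class and field names are my own, not the Committee's), the criteria could look like this in Python:

    # Illustrative only: the Committee's "smart robot" criteria as a data structure.
    # Class and field names are my own invention, not taken from the draft report.
    from dataclasses import dataclass

    @dataclass
    class RobotProfile:
        autonomy_via_sensors_or_data: bool  # sensors and/or data exchange (inter-connectivity)
        trades_and_analyses_data: bool
        self_learning: bool                 # the optional criterion
        physical_support: bool
        adapts_to_environment: bool

    def is_smart_robot(r: RobotProfile) -> bool:
        """Self-learning is explicitly optional; the other criteria are mandatory."""
        return (r.autonomy_via_sensors_or_data
                and r.trades_and_analyses_data
                and r.physical_support
                and r.adapts_to_environment)

    # A cruise-control-like system: it senses and adapts, but does not learn.
    cruise_control = RobotProfile(True, True, False, True, True)
    print(is_smart_robot(cruise_control))  # True, even without self-learning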

Intuitively, this seems a very reasonable definition of a smart robot. From a legal (and, above all, liability) perspective, all characteristics are equally important: a smart robot can do things in the real world that have impact. From an AI perspective, the second characteristic, self-learning, is the most important. Can a robot learn things during its "life", so that after shipping (delivery to society) its behavior becomes unpredictable?

While new laws governing autonomous robots and AI are still far from coming into force, the Committee refers to long-standing fundamental principles to be respected with regard to robots, namely Asimov's Laws from his 1942 short story Runaround (a toy encoding follows the list):

  • (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  • (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  • (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (see Runaround, I. Asimov, 1942) and
  • (0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
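
Purely for fun, here is a toy Python sketch of the precedence structure of these laws, and nothing more. It has to assume that an action's consequences are known in advance, which is exactly the hard part; as the report notes further down, the laws cannot truly be converted into machine code:

    # Toy sketch of the priority ordering of Asimov's Laws (0 > 1 > 2 > 3).
    # It assumes we can foresee an action's consequences, which we cannot.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool   # violates the Zeroth Law
        harms_human: bool      # violates the First Law
        obeys_orders: bool     # required by the Second Law
        preserves_robot: bool  # favoured by the Third Law

    def choose(actions):
        """Laws 0-2 filter out actions; Law 3 breaks ties toward self-preservation."""
        legal = [a for a in actions
                 if not a.harms_humanity and not a.harms_human and a.obeys_orders]
        legal.sort(key=lambda a: a.preserves_robot, reverse=True)
        return legal[0] if legal else None

    safe = Action(False, False, True, True)
    risky = Action(False, True, True, True)
    print(choose([risky, safe]) == safe)  # True: the First Law filters out 'risky'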

It says a lot that rules written in 1942 are still considered valid seventy-five years later, especially given the enormous industrial and technological revolutions that have taken place since, and are still taking place.

The Committee states that:

"until such time, if ever, that robots become or are made self-aware, Asimov's Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code"

Liability

The Committee goes on to talk about the impact of smart robots on society:

"The legal responsibility arising from a robot’s harmful action becomes a crucial issue."

And this is true. What happens when an autonomous robot does something harmful? Who is to blame? Or, in legal terms, who is liable? Are current laws still applicable? The Committee wants the legislation updated before that question becomes urgent:

"once technological developments allow the possibility for robots whose degree of autonomy is higher than what is reasonably predictable at present to be developed, to propose an update of the relevant legislation in due time"

The draft report continues:

"whereas in the scenario where a robot can take autonomous decisions, the traditional rules will not suffice to activate a robot's liability, since they would not make it possible to identify the party responsible for providing compensation and to require this party to make good the damage it has caused;"

In plain language: when a robot causes trouble, damage or harm, who should pay the bill, go to jail, or apologize? The conclusion is that new rules are needed to deal with these autonomous robots:

"this, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held – partly or entirely – responsible for its acts or omissions"

And:
 "the current legal framework would not be sufficient to cover the damage caused by the new generation of robots, insofar as they can be equipped with adaptive and learning abilities entailing a certain degree of unpredictability in their behaviour, since these robots would autonomously learn from their own, variable experience and interact with their environment in a unique and unforeseeable manner;"

What is clear is that it will be unclear how far the creator, designer or programmer can still be held responsible for the unpredictable behavior of autonomous robots. Which raises another question: what kind of legal status do autonomous robots need to have?


Figure 3: How will liability work when robots cause harm?

"robots' autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects"

And:

"creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause"

While it is unclear what legal status autonomous robots (powered by AI) should have, it is clear that current legislation is not sufficient. In fact, one suggestion for dealing with the potential damage robots can cause is:

"Establishing a compulsory insurance scheme whereby, similarly to what already happens with cars, producers or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots."

While I agree that the large-scale arrival of AI-powered robots matters for liability legislation, I think the real distinguishing factor is the second (optional) characteristic in the Committee's definition of smart robots:

  • "is self-learning (optional criterion)"

In my layman's view, the liability for a non-self-learning robot lies with the user in case of misuse and with the manufacturer in case of defects. Compare it with the cruise control function that most cars have today. To some extent, it has all the characteristics of a smart robot, except that it doesn't learn. If a driver falls asleep while using cruise control and causes an accident, the driver should be held liable. If the cruise control fails and, despite the driver's efforts to intervene, causes an accident, the manufacturer should be held liable. It is the self-learning aspect that makes the difference, as the sketch below illustrates.
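
As a minimal sketch of that layman's rule (the categories and inputs are my own simplification, not anything from the draft report):

    # My own simplification of the liability argument above; not from the report.

    def liable_party(self_learning: bool, user_misuse: bool, product_defect: bool) -> str:
        if self_learning:
            # Behaviour acquired after shipping: today's rules give no clear answer.
            return "unclear -- new legislation needed"
        if user_misuse:
            return "user"          # e.g. falling asleep while using cruise control
        if product_defect:
            return "manufacturer"  # e.g. the cruise control itself fails
        return "no liability established"

    print(liable_party(self_learning=False, user_misuse=True,  product_defect=False))  # user
    print(liable_party(self_learning=True,  user_misuse=False, product_defect=False))  # unclear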


Economic and labor impact


There is also a lot of debate about smart robots taking away people's jobs, and so far estimates diverge enormously. The Committee on Legal Affairs suggests that corporations should perhaps contribute social insurance tax for the employees substituted by robots or AI. So, if 100 people were needed to get a job done without robots, and only 10 are needed with them, the company would still pay social security for 100 employees (a toy calculation follows the quote below). The draft refers to this as well:

"possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions"

Or maybe corporations should be obliged to disclose the impact of their use of robots:

"Disclosure of use of robots and artificial intelligence by undertakings.  Undertakings should be obliged to disclose:
– the number of 'smart robots' they use,
– the savings made in social security contributions through the use of robotics in place of human personnel,
– an evaluation of the amount and proportion of the revenue of the undertaking that results from the use of robotics and artificial intelligence.
"

Job destruction by AI is already happening; in Japan, for example, an insurance company has replaced 34 claims workers with IBM's Watson. But how does this compare with the automation we have seen so far, which started with the Ford Model T and has increased continuously since then? Over the past 100 years, tasks have been increasingly automated, and that has destroyed millions of jobs. What makes it different this time? The self-learning aspect? The massive scale? All questions for which definitive answers have yet to be given.


Code of conduct


Whatever the repercussions may be, it is clear that there will be a big impact, and the Committee proposes a code of conduct to ensure, as far as possible, that there will be no threat to humanity.

"The Code of Conduct invites all researchers and designers to act responsibly and with absolute consideration for the need to respect the dignity, privacy and safety of humans."

And specifically for researchers it states:

"Researchers in the field of robotics should commit themselves to the highest ethical and professional conduct and abide by the following principles:
  • Beneficence – robots should act in the best interests of humans;
  • Non-maleficence – the doctrine of ‘first, do no harm’, whereby robots should not harm a human;
  • Autonomy – the capacity to make an informed, un-coerced decision about the terms of interaction with robots;
  • Justice – fair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in particular."

Finally, to keep humanity and society safe, the Committee suggests drawing up:

  • "Code of  Ethical Conduct for Robotics Engineers
  • License for Designers
  • License for Users"

All this becomes even more important if the Technological Singularity is ever reached. This is the idea that the invention of "Artificial Superintelligence" will suddenly cause runaway technological growth, resulting in drastic changes to human civilization. Our world is changing rapidly. Are you ready?
