Official standard for robot ethics released


Monday, 26 September, 2016

Way back in 1942, science fiction author Isaac Asimov introduced what have come to be known as the Three Laws of Robotics (also known as Asimov’s Laws).

The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws rest on an assumption about the nature of robots that, until recently, did not apply to what we know of as industrial robots: they presume autonomous decision-making capabilities. The robots commonly used in manufacturing since the 1960s do not meet this definition, since they perform only repetitive tasks as programmed by their ‘human masters’, and they are inherently dangerous to humans, having killed and injured many over their history. The robots we commonly see in our factories reflect more the original etymology of the word ‘robot’, which comes from the Slavic word robota, meaning work, labour or drudgery.

But things are changing. We are rapidly arriving at the era of the autonomous robot: examples are already evident in the self-driving cars beginning to appear on our roads, and in service robots such as the food delivery robots at Sydney’s Royal North Shore Hospital (in use since 2012) and Pepper, a humanoid robot already appearing in customer service applications in Europe.

Now we have the first official standard on robot ethics, in the form of British standard BS8611:2016 Robots and robotic devices — Guide to the ethical design and application of robots and robotic systems.

According to a recent article published by The Guardian, the document “is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.”

Food service robots at Sydney’s Royal North Shore Hospital (Image: Sydney Morning Herald)

Alan Winfield, a professor of robotics at the University of the West of England, said the guidelines represented “the first step towards embedding ethical values into robotics and AI”.

“As far as I know this is the first published standard for the ethical design of robots,” he said. “It’s a bit more sophisticated than Asimov’s laws — it basically sets out how to do an ethical risk assessment of a robot.”

The standard begins with broad ethical principles:

  • Robots should not be designed solely or primarily to kill or harm humans.
  • Humans, not robots, are the responsible agents.
  • It should be possible to find out who is responsible for any robot and its behaviour.
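
In spirit, the risk assessment the standard calls for resembles a structured checklist: identify an ethical hazard, judge its likelihood and severity, and record a mitigation. The Python sketch below is purely illustrative; the hazard names echo those mentioned in this article, but the scoring scale and fields are assumptions, not taken from BS 8611.

  from dataclasses import dataclass

  # Hypothetical illustration of an ethical risk assessment checklist.
  # The 1-5 likelihood/severity scales and field names are invented for this
  # sketch; BS 8611:2016 defines its own hazard categories and guidance.

  @dataclass
  class EthicalHazard:
      name: str          # e.g. "robot deception"
      likelihood: int    # 1 (rare) to 5 (almost certain) -- assumed scale
      severity: int      # 1 (negligible) to 5 (severe) -- assumed scale
      mitigation: str    # the designer's planned control measure

      def risk_score(self) -> int:
          return self.likelihood * self.severity

  hazards = [
      EthicalHazard("robot deception", 2, 4, "disclose that users are dealing with a robot"),
      EthicalHazard("robot addiction / over-reliance", 3, 3, "require periodic human review of outputs"),
      EthicalHazard("self-learning system exceeding its remit", 2, 5, "constrain and audit learned behaviour"),
  ]

  # Rank hazards so the riskiest are addressed first.
  for h in sorted(hazards, key=lambda h: h.risk_score(), reverse=True):
      print(f"{h.name}: score {h.risk_score()} -> {h.mitigation}")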

The code suggests designers should aim for transparency, but scientists say this could prove tricky in practice. “The problem with AI systems right now, especially these deep learning systems, is that it’s impossible to know why they make the decisions they do,” said Winfield.

Deep learning agents, for instance, are not programmed to do a specific task in a set way. Instead, they learn to perform a task by attempting it millions of times until they evolve a successful strategy, sometimes one that their human creators had not anticipated and do not understand.
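
As a loose illustration of that trial-and-error loop, the toy Python sketch below uses simple random search rather than a real neural network, but it has the same shape: try a candidate strategy, score it, and keep whatever works, without the designer ever spelling out the strategy itself.

  import random

  # Toy illustration of learning by repeated attempts. The "strategy" is just
  # a list of numbers and the task is to get them close to a hidden target;
  # real deep learning agents are vastly larger, but the loop has the same
  # shape: try, score, keep what works.

  random.seed(0)
  TARGET = [0.2, -0.7, 0.5]                      # unknown to the "agent"

  def score(strategy):
      # Higher is better: negative squared distance to the target behaviour.
      return -sum((s - t) ** 2 for s, t in zip(strategy, TARGET))

  best = [random.uniform(-1.0, 1.0) for _ in TARGET]
  for attempt in range(100_000):                 # many attempts, as described above
      candidate = [b + random.gauss(0.0, 0.05) for b in best]
      if score(candidate) > score(best):
          best = candidate                       # keep the better strategy

  print("learned strategy:", [round(b, 2) for b in best])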

The guidance even hints at the prospect of sexist or racist robots, warning against “lack of respect for cultural diversity or pluralism”.

Winfield said: “Deep learning systems are quite literally using the whole of the data on the internet to train on, and the problem is that that data is biased.”

It should come as no surprise, then, that systems trained on internet data may be biased; such systems tend to favour white, middle-aged men and to absorb human prejudices.

“We need a black box on robots that can be opened and examined,” said Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield. “If a robot is being racist, unlike a police officer, we can switch it off and take it off the street.”
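
One way to picture such a ‘black box’ (again, an illustrative sketch rather than anything specified in the standard) is an append-only log in which each decision is recorded with the inputs behind it and chained to the previous entry by a hash, so tampering is detectable when the log is later opened and examined.

  import hashlib
  import json
  import time

  # Illustrative "black box" decision recorder: an append-only log in which
  # each entry is chained to the previous one by a hash, so any later
  # tampering is detectable when the log is opened and examined.

  class DecisionLog:
      def __init__(self):
          self.entries = []
          self._prev_hash = "0" * 64

      def record(self, inputs: dict, decision: str) -> None:
          entry = {
              "time": time.time(),
              "inputs": inputs,
              "decision": decision,
              "prev": self._prev_hash,
          }
          entry["hash"] = hashlib.sha256(
              json.dumps(entry, sort_keys=True).encode()
          ).hexdigest()
          self._prev_hash = entry["hash"]
          self.entries.append(entry)

  log = DecisionLog()
  log.record({"camera": "person detected", "zone": "crosswalk"}, "stop")
  log.record({"camera": "path clear", "zone": "crosswalk"}, "proceed")
  print(json.dumps(log.entries, indent=2))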

The document also flags up broader societal concerns, such as over-dependence on robots: when humans work with a machine for long enough and it keeps giving the right answers, they come to trust it and become complacent, allowing undesirable or inferior results to go unchecked.

Top image: Softbank’s Pepper robot. (Photo: Koji Sasahara/Associated Press)
