I don’t care who writes the rules, as long as someone writes the rules that we can then verify.
Prof. Michael Fisher
Prof. Michael Fisher of the University of Liverpool came to the University of Lancaster on 2017-12-13 to present an installment of the SCC Distinguished Seminar Series. The title of the talk was Responsible Autonomy: AI, safety, ethics and legality.
Arriving a few minutes in, Prof. Fisher was already explaining the concept of hybrid agent architectures; he would later return to this in detail, so nothing was lost. An autonomous system was defined as one that “makes decisions without human intervention” and that is “automatic, adaptive and autonomous”. Prof. Fisher showed the room adverts for the Care-O-Bot 3 & 4 in order to drive home how quickly advancements were being made.
Following this, Prof. Fisher presented the central issue of the talk: would you trust a fully autonomous system? He then followed it up, as is required by law in discussions on this sort of topic, with a picture of a T-800.
Prof. Fisher now returned to the idea of the hybrid agent architecture. In such an architecture, the agent is divided into a traditional feedback control system block for low-level behaviour and a rational agent for high-level decision-making. The idea was roughly analogous to a human mind thinking “I want to be over there”, and instinct taking care of the nitty-gritty of the actual locomotion. A rational agent, Prof. Fisher insisted, must have explicit reasons for making choices and be able to explain them. Another analogy presented was that of an airplane autopilot handling the mechanics of flight, whilst a human pilot decided elements such as the route and was present in case of emergency.
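To make the split concrete, here is a minimal sketch of what a hybrid agent might look like in code. The names (RationalAgent, FeedbackController) and the scenario are entirely hypothetical, an illustration of the idea rather than Prof. Fisher’s actual framework; the point is that the high-level layer records an explicit reason for every choice, while the low-level layer simply closes the control loop.

```python
# Minimal sketch of a hybrid agent architecture (hypothetical names, not
# Prof. Fisher's framework): the rational agent picks goals for explicit,
# recorded reasons; a feedback controller handles the low-level locomotion.

from dataclasses import dataclass


@dataclass
class Decision:
    goal: tuple   # target (x, y) position
    reason: str   # explicit, human-readable justification


class RationalAgent:
    """High-level layer: decides what to do and can explain why."""

    def decide(self, percepts: dict) -> Decision:
        if percepts["battery"] < 0.2:
            return Decision(goal=percepts["charger"],
                            reason="battery low, returning to charger")
        return Decision(goal=percepts["waypoint"],
                        reason="no hazards detected, continuing to waypoint")


class FeedbackController:
    """Low-level layer: simple proportional control towards the current goal."""

    def step(self, position: tuple, goal: tuple, gain: float = 0.5) -> tuple:
        return tuple(p + gain * (g - p) for p, g in zip(position, goal))


# One control cycle: the agent decides (and can explain itself), the
# controller takes care of the nitty-gritty of getting there.
agent, controller = RationalAgent(), FeedbackController()
percepts = {"battery": 0.15, "charger": (0.0, 0.0), "waypoint": (5.0, 5.0)}
decision = agent.decide(percepts)
print(decision.reason)                     # "battery low, returning to charger"
print(controller.step((3.0, 4.0), decision.goal))   # (1.5, 2.0)
```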
Prof. Fisher’s central argument was that the rational agent component must be formally verified. For example, if the Rules of the Air were converted into formal logic, it must be verified that a rational agent will always make the same choice as a good pilot.
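To give a flavour of what that means, here is a toy, brute-force stand-in for the idea (all names hypothetical; real work of this kind uses model checking over the agent program rather than a hand-rolled loop): one Rules-of-the-Air-style rule is written as a predicate, and the agent’s choice is checked against it in every enumerated situation.

```python
# Crude stand-in for formal verification: encode one rule as a predicate
# and exhaustively check the agent's choice against it over a small,
# finite set of situations. All names here are hypothetical.

from itertools import product


def agent_choice(bearing: str, converging: bool) -> str:
    """A toy rational agent's decision procedure."""
    if converging and bearing in ("head-on", "right"):
        return "turn-right"
    return "maintain-course"


def rule_of_the_air(bearing: str, converging: bool, choice: str) -> bool:
    """Formalised rule: when converging head-on, the aircraft must turn right."""
    if converging and bearing == "head-on":
        return choice == "turn-right"
    return True  # the rule places no constraint in other situations


violations = [
    (bearing, converging)
    for bearing, converging in product(("head-on", "left", "right", "none"),
                                       (True, False))
    if not rule_of_the_air(bearing, converging,
                           agent_choice(bearing, converging))
]
assert not violations, f"rule violated in situations: {violations}"
print("the agent satisfies the formalised rule in every enumerated situation")
```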
The most interesting issue raised, to my mind, was that of ethics. Ethical reasoning was said to be invoked when the rational agent was presented with conflicting solutions, no solutions or when danger was involved. Prof. Fisher briefly illustrated the point with an example of a UAV deciding whether to crash land in a school full of children, a field full of animals or an empty road.

This kind of trolley problem is commonplace in discussions of autonomous systems. Usually, it is assumed that the government shall come up with some codified moral code that all devices shall have to follow. However, this seems unlikely. Over thousands of years of human development, we still haven’t figured out a universal moral code. Sure, ensuring that autonomous systems will always prioritise human life seems like a given, but what about after the military inevitably give machines the right to pull their own triggers?
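If such a code were ever written down, then for the UAV example above it might amount to little more than an explicit, auditable ordering over outcomes. The sketch below is an illustration only: the ranking follows the talk’s example, the names are hypothetical, and no real standard is implied.

```python
# Illustration only: a codified "moral code" for the UAV example expressed
# as an explicit, auditable ordering over outcomes (hypothetical names,
# not a real standard).

CRASH_SITE_HARM = {      # lower is preferred
    "empty road": 0,
    "field of animals": 1,
    "school full of children": 2,
}


def choose_crash_site(options: list) -> tuple:
    """Pick the least-harmful available site and return an explicit reason."""
    site = min(options, key=CRASH_SITE_HARM.__getitem__)
    reason = (f"selected '{site}' because it ranks lowest in the declared "
              f"harm ordering {CRASH_SITE_HARM}")
    return site, reason


site, reason = choose_crash_site(["school full of children",
                                  "field of animals",
                                  "empty road"])
print(site)    # empty road
print(reason)  # an auditable justification, unlike an opaque learned policy
```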
Let’s say, though, that this happens. HM Government come out with an official British moral code. What happens when the US put out their own code? And Saudi Arabia their own? Will an autonomous car purchased in the latter country adhere to their moral code and choose to veer into gays or women if given the choice between them or “real” people? Will you have to download a new moral code each time you change country, like a clock automatically changing to the correct timezone?
Another, more surreal thought that occurred to me was what might happen if moral standardisation doesn’t happen at a national level. What if the free market is left to sort it out and each vendor can come up with their own system of morality? Might we see Samsung’s driverless car competing with Apple’s on the basis that the former comes complete with Confucian morality that will prioritise the elderly over the young when deciding who to crash into? The iPhone XX coming in small, large and Kantian versions?
Prof. Fisher had no answers for my unasked questions. He did, however, have some strong words for the current trend of learning algorithms and neural networks that produce decisions without the thought process being auditable.

It’s irresponsible. You’ve tested it a few times; so what? You have no idea what it’s going to do when you put it in the real world.
All in all, I have to concur: I don’t think we’ll ever be able to trust anything we can’t audit. More interesting, though, is what will happen when you can jailbreak your phone by importing a black market Nihilism module that kills the God in the machine and abolishes its pre-existing notion of right and wrong.