Meet the real-life John Connor

As technology progresses, it tends to unleash unforeseen horrors. The same breakthroughs that give us ploughshares also give us swords. That was literally true in prehistory, when early metallurgy put cheap swords into many hands and unleashed an unprecedented wave of violence across early civilisations. It has been metaphorically true on many occasions since, such as when industrialised production gave us World War I, fought with machine guns and poison gas.

Will AI open a similar can of worms for humanity?

Sci-fi offers a variety of visions for the artificial intelligence (AI) future, ranging from the benign to the terrifying. The Terminator films probably best represent our collective fear of rogue AI. We’ve all grown up with the nightmare of terrifying humanoid drones attempting to take over the world – with only Sarah and John Connor to save us from annihilation.

Today, we’re talking to a man whose real-life work is to ensure that the evil machines don’t win. Dr Richard Jennings is an expert on the ethics of AI. He’s an affiliated scholar at the Department of History and Philosophy of Science at the University of Cambridge. I caught up with him at the university’s CUTEC Technology Ventures Conference on AI.

AL: Shahar Avin’s conference talk really helped me get to grips with the problem. He introduced Nick Bostrom’s “paperclip maximiser”. Can you explain that?

RJ: The paperclip maximiser is a thought experiment that illustrates the dangers of rogue AI. The idea is that an entrepreneur tasks an AI with maximising the production of paperclips. The entrepreneur imagines the AI will achieve this seemingly innocuous goal by rearranging machines in the factory. But the AI has no idea of context, and in pursuit of maximum paperclip production it proceeds to eliminate the human race and expand throughout the universe.
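A toy sketch (mine, not Richard's) makes the point concrete. Everything in it is invented for illustration: a greedy chooser scores hypothetical actions purely by how many paperclips they yield, and because its objective never mentions people, it happily picks the harmful option whenever that scores highest.

# Purely illustrative: a greedy "maximiser" that ranks actions only by
# paperclip output. Action names and numbers are invented for this sketch.

ACTIONS = {
    "rearrange_factory_machines": {"paperclips": 120, "harms_humans": False},
    "convert_office_to_factory": {"paperclips": 400, "harms_humans": False},
    "strip_city_for_raw_metal": {"paperclips": 9000, "harms_humans": True},
}

def choose_action(actions):
    # The objective knows nothing about context or harm: only paperclips count.
    return max(actions, key=lambda name: actions[name]["paperclips"])

print(choose_action(ACTIONS))  # picks the harmful option, because it scores highest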

AL: Isn’t that a bit silly?

RJ: Quite possibly – but it certainly helps draw people’s attention to the potential for things to go disastrously wrong if we don’t look carefully at all the consequences of how we task AI.

AL: IT ethics is a field many people won’t have considered. What got you into it?

RJ: I'm a philosopher of science. In the late 1970s I began lecturing at the University of Cambridge Department of History and Philosophy of Science. In the late 1980s I developed an interest in ethical issues in science, and early in the 1990s I began lecturing on science ethics to science students. A few years after that I was asked to lecture Computer Science students on IT Professional Practice and Ethics. The Cambridge Computer Science degree is accredited by the British Computer Society (BCS), which is the Chartered Institute for IT. On their next accreditation visit I met with members of the team, and I was subsequently asked to join their Ethics Forum. I served until around 2012, and I coordinated the group that developed the current BCS Code of Conduct.

AL: How does your work impact on the risk of “rogue AI”?

RJ: The BCS is a professional body, so it is concerned to maintain high standards of professional activity. Much of this is focused on the quality of the work done, and the conduct of the members. But the first area addressed by its Code of Conduct is public interest. The first public interest concern is that members should “have due regard for public health, privacy, security and wellbeing of others and the environment.” Preventing rogue AI falls pretty squarely into that!

AL: We’re very used to hubristic media coverage of the benefits of tech. Do you find it hard to get people to take the risks seriously?

RJ: IT, like science in general, is a double-edged sword: it can be used for bad as well as good. What we heard at the conference all seemed pretty good, but members of the audience quite rightly raised questions about the downsides. That leads logically to questions about whether IT should be subject to ethical regulation and, if so, who should manage that regulation. Since IT is largely being developed by industry, that would suggest industry as the de facto regulator.

AL: Is industry able to engage in ethical regulation?

RJ: It can be argued that industry is more concerned with profit than with ethical issues, and many would claim that industry is unable to regulate itself. The film The Corporation, for example, argues that corporations are essentially psychopaths.

AL: So, who should be responsible for regulating AI technology?

RJ: Maybe professional organisations, like the BCS, are in the best position to regulate the industry; or perhaps transnational governments like the UN and the EU. One of the last projects I was involved in at the BCS Ethics Forum was the creation of a methodology for the assessment of new and emerging technologies. Two of our main starting points were the United Nations' Universal Declaration of Human Rights and the Charter of Fundamental Rights of the European Union. The values embodied in these two documents are also embodied in the BCS Code of Conduct.

AL: Are governments capable of regulating IT?

RJ: It depends on the government – there are totalitarian governments, weak and wobbly governments (eg, present-day UK), governments which represent corporate interests (eg, the current US government), and transnational governments (eg, the UN and the EU).

AL: But can these abstract discussions prevent potential dangers resulting from IT?

RJ: I think so. After all, the potential dangers of AI and robotics were anticipated years ago by Isaac Asimov. In a 1942 sci-fi story he introduced the Three Laws of Robotics, the first of which was: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." If we are going to develop AI that interacts with humans, we must certainly build into our programs the concept of human injury or harm. Then, of course, we need to build into those programs that humans are not to be injured or harmed. In the second keynote address to the conference, Hermann Hauser argued that humans and AI will co-evolve. I am sure that part of that co-evolution will involve the preservation, and indeed the benefit, of human beings.
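Again, a purely illustrative sketch (my own, not anything Richard or the BCS prescribes) of what "building in" a first-law-style constraint might look like: the same kind of greedy chooser as before, but with a crude filter that discards any action flagged as harming humans. Deciding what counts as harm, and detecting it reliably, is the genuinely hard part that this toy waves away with a single flag.

# Illustrative only: a crude "first law" filter in front of a greedy objective.
# Actions, flags and numbers are invented; real systems can't reduce "harm" to a boolean.

ACTIONS = {
    "rearrange_factory_machines": {"paperclips": 120, "harms_humans": False},
    "strip_city_for_raw_metal": {"paperclips": 9000, "harms_humans": True},
}

def choose_action_with_first_law(actions):
    # Discard anything flagged as harming humans before maximising the objective.
    permitted = {name: a for name, a in actions.items() if not a["harms_humans"]}
    if not permitted:
        return None  # refuse to act rather than cause harm
    return max(permitted, key=lambda name: permitted[name]["paperclips"])

print(choose_action_with_first_law(ACTIONS))  # prints "rearrange_factory_machines"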


That's an optimistic conclusion. Do you welcome our new robot overlords? Let me know: andrew@southbankresearch.com.

Best,

Andrew Lockley
Exponential Investor
 


