Superintelligence – Why we need to be super careful with AI

Most of you will have heard of Elon Musk. He’s the South African-born billionaire behind the Tesla electric car, the top investor in the US’s largest provider of rooftop solar power, and the owner of a private rocket company, among other pursuits.

Musk made major waves in a question-and-answer session at the MIT Aeronautics and Astronautics Department’s Centennial Symposium in October 2014. Asked to share his thoughts on artificial intelligence, he replied:

If I were to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean, with artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and he’s like, yeah, he’s sure he can control the demon. Didn’t work out!

A couple of months earlier, in an August 2014 tweet, Musk issued similar warnings, having read a book on the subject by Oxford University’s Nick Bostrom:

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

More dangerous than nukes, no less!

Musk was not the only science luminary to voice major reservations about AI. Britain’s own Professor Stephen Hawking, in a December 2014 BBC interview, warned:

The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.

The following month, Bill Gates chimed in with objections of his own. During a Reddit question-and-answer session in January 2015, he was asked how much of an “existential threat” machine super-intelligence would be. Here was his answer:

I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

What is “superintelligence”?

To understand what all the fear and fuss is about, it’s best to go straight to the man who triggered the debate in the first place: Nick Bostrom, a Swedish philosopher who heads the Future of Humanity Institute at the University of Oxford and is the author of the book Superintelligence: Paths, Dangers, Strategies.

Essentially, the book is one big warning: one day, technological progress could produce a machine whose general intelligence matches, and then surpasses, that of the human brain. Left to its own devices, that machine could turn against the human race.

Bostrom spells it all out quite clearly in the preface:

Inside your cranium is the thing that does the reading. This thing, the human brain, has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that we owe our dominant position on the planet. Other animals have stronger muscles and sharper claws, but we have cleverer brains. Our modest advantage in general intelligence has led us to develop language, technology and social organization. The advantage has compounded over time, as each generation has built on the achievements of its predecessors.

If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.

We do have one advantage: we get to build the stuff. In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.

In this book, I try to understand the challenge presented by the prospect of superintelligence, and how we might best respond. This is quite possibly the most important and most daunting challenge humanity has ever faced. And – whether we succeed or fail – it is probably the last challenge we will ever face.

Further along in the book, Bostrom maps out the ways in which we humans could let our guard down and unwittingly usher in the menace:

Consider the following scenario. Over the coming years and decades, AI systems become gradually more capable and as a consequence find increasing real-world application: they might be used to operate trains, cars, industrial and household robots, and autonomous military vehicles. We may suppose that this automation for the most part has the desired effects, but that the success is punctuated by occasional mishaps – a driverless truck crashes into oncoming traffic, a military drone fires at innocent civilians. Investigations reveal the incidents to have been caused by judgment errors by the controlling AIs. Public debate ensues. Some call for tighter oversight and regulation, others emphasize the need for research and better-engineered systems – systems that are smarter and have more common sense, and that are less likely to make tragic mistakes. Amidst the din can perhaps also be heard the shrill voices of doomsayers predicting many kinds of ill and impending catastrophe. Yet the momentum is very much with the growing AI and robotics industries.

So development continues, and progress is made. As the automated navigation systems of cars become smarter, they suffer fewer accidents; and as military robots achieve more precise targeting, they cause less collateral damage. A broad lesson is inferred from these observations of real-world outcomes: the smarter the AI, the safer it is. It is a lesson based on science, data, and statistics, not armchair philosophizing.

Against this backdrop, some group of researchers is beginning to achieve promising results in their work on developing general machine intelligence. The researchers are carefully testing their seed AI in a sandbox environment, and the signs are all good. The AI’s behaviour inspires confidence – increasingly so, as its intelligence is gradually increased.

At that point, any sceptical or questioning voices are likely to be immediately dismissed – at humanity’s peril, warns Bostrom, because a “treacherous turn” could be just around the corner:

We observe here how it could be the case that when dumb, smarter is safer; yet when smart, smarter is more dangerous. There is a kind of pivot point, at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn.

The treacherous turn – While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong – without warning or provocation – it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.

Preventing a “treacherous turn”

To prevent the emergence of a menace to the human race, writes Bostrom, controls must be built into technological progress:

The best path toward the development of beneficial superintelligence is one in which AI developers and AI safety researchers are on the same side – one in which they are indeed, to a considerable extent, the same persons. So I call on all sides to practice patience and restraint, and broad-mindedness, and to engage in direct dialogue and collaboration where possible.

Bostrom then devotes a whole chapter to listing types of pre-emptive controls that can stop artificial intelligence from taking a “treacherous turn”. These operate either by controlling the capability of the system, or by controlling its motivations. Here are the various options – I’m summarising and paraphrasing the descriptions he gives in his book:

Capability Control

Boxing methods: The system is confined in such a way that it can affect the external world only through some restricted, pre-approved channel. This approach encompasses both physical and informational containment:

Physical containment aims to confine the system to a “box,” i.e. to prevent the system from interacting with the external world otherwise than via specific restricted output channels. For extra security, the system might be placed in a metal mesh to prevent it from transmitting radio signals.

Informational containment aims to restrict what information is allowed to exit the box. An obvious method is to bar the system from accessing communication networks.

Incentive methods: Incentive methods involve placing an agent in an environment where it finds instrumental reasons to act in ways that promote the principal’s interests. As an analogy, Bostrom uses the example of a billionaire who uses her fortune to set up a large charitable foundation with bylaws and a board sympathetic to her cause. The foundation would still face social pressures to behave appropriately, and an incentive to obey the law lest it be shut down or fined.

Stunting: A method that limits the system’s intellectual faculties or its access to information. This might be done by running the AI on hardware that is slow or short on memory. In the case of a boxed system, information inflow could also be restricted.

Tripwires: Diagnostic tests are performed on the system (possibly without its knowledge) and a mechanism shuts it down if dangerous activity is detected.
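
To make the last of these a little more concrete, here is a minimal, purely illustrative Python sketch of the tripwire pattern: a supervisor watches a boxed program, whose only pre-approved output channel is a single file, and shuts it down the moment a limit is breached. Everything here – the file name, the thresholds, the untrusted_agent.py script – is hypothetical, and no real AI system could be policed this simply; the sketch only shows the shape of the mechanism (monitor, detect, shut down), with a nod to informational containment and stunting along the way.

```python
# Purely illustrative: a toy "tripwire" supervisor in the spirit of Bostrom's
# capability-control methods. All names, limits and files are hypothetical.

import os
import subprocess
import time

MAX_RUNTIME_SECONDS = 60          # stunting: hard cap on how long the boxed system may run
MAX_OUTPUT_BYTES = 10_000         # informational containment: cap on what may leave the "box"
OUTPUT_FILE = "boxed_output.txt"  # the system's only pre-approved channel to the outside world


def run_with_tripwires(command):
    """Run an untrusted program and shut it down if any tripwire fires."""
    start = time.monotonic()
    reason = None
    with open(OUTPUT_FILE, "wb") as out:
        # No stdin, and stdout/stderr are confined to the single approved file.
        proc = subprocess.Popen(command, stdin=subprocess.DEVNULL,
                                stdout=out, stderr=out)
        while proc.poll() is None:
            if time.monotonic() - start > MAX_RUNTIME_SECONDS:
                reason = "runtime limit exceeded"            # tripwire 1
            elif os.path.getsize(OUTPUT_FILE) > MAX_OUTPUT_BYTES:
                reason = "output volume limit exceeded"      # tripwire 2
            if reason:
                proc.kill()                                  # the shutdown mechanism
                break
            time.sleep(0.5)                                  # poll twice a second
    if reason:
        print(f"System halted by tripwire: {reason}")
    else:
        print("System finished within all limits.")


if __name__ == "__main__":
    # Hypothetical usage: supervise some untrusted script.
    run_with_tripwires(["python", "untrusted_agent.py"])
```

Bostrom’s point, of course, is that checks of this kind are easy to state and very hard to make watertight against something smarter than their designer – which is why he pairs capability control with the motivation selection methods described next.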

Motivation Selection

Motivation selection methods seek to prevent undesirable outcomes by shaping what the superintelligence wants to do. By engineering the agent’s motivation system and its final goals, these methods would produce a superintelligence that would not want to exploit a decisive strategic advantage in a harmful way.

Direct specification: Explicitly formulating a goal or set of rules that will cause even a free-roaming superintelligent AI to act safely and beneficially.

Domesticity: The system is built so that it has modest, non-ambitious goals.

Indirect normativity: The system is set up so that it can discover an appropriate set of values for itself by reference to some implicitly or indirectly formulated criterion.

Augmentation: Rather than attempting to design a motivation system from scratch, we start with a system that already has substantially human or benevolent motivations, and enhance its cognitive capacities to make it superintelligent.

Here’s Bostrom’s conclusion, as laid out in his final chapter:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.

Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the entire firmament. Nor is there a grown-up in sight.

In this situation, any feeling of gee-whiz exhilaration would be out of place. Consternation and fear would be closer to the mark; but the most appropriate attitude may be a bitter determination to be as competent as we can, much as if we were preparing for a difficult exam that will either realize our dreams or obliterate them.

This is not a prescription of fanaticism. The intelligence explosion might still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense and good-humoured decency even in the teeth of this most unnatural and inhuman problem. We need to bring all our human resourcefulness to bear on its solution.

Yet let us not lose track of what is globally significant. Through the fog of everyday trivialities, we can perceive – if but dimly – the essential task of our age. In this book, we have attempted to discern a little more feature in what is otherwise still a relatively amorphous and negatively defined vision – one that presents as our principal moral priority (at least from an impersonal and secular perspective) the reduction of existential risk and the attainment of a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment.

What the experts say

Is Bostrom right, or is he a doom-monger prone to hyperbole? How seriously should we take his words of warning, and those of Elon Musk, Stephen Hawking and Bill Gates?

We put the question to an array of experts, starting with one who works closely with Bostrom in Oxford: Cecilia Tilli, academic project manager with the Oxford Martin Programme on the Impacts of Future Technology.

You’ll remember her from earlier on in this chapter, when she made the distinction between “narrow artificial intelligence” (focused on a specific task) and “general or broad artificial intelligence – the artificial intelligence that could be worrisome in these futuristic scenarios, an AI that can do the things that we can do.”

We asked Tilli how soon she expected the world to attain “broad AI”:

That’s very difficult to assess. Our institute did this survey maybe two or three years ago. We got estimates for this kind of AI from different experts. Of course, the estimates vary a lot. We also asked for different confidence intervals. Most people were 90% confident that we would have human level AI by 2100 or something. But then some people have a higher level of confidence that this might happen in 30 years, some people 50. Some people think 300.

I tend to be in the more conservative circle.

Sometimes people in the field get overly optimistic and they just project into the future without foreseeing any problems or obstacles. This, by the way, is a common human cognitive bias. It’s called the planning fallacy. Computer scientists working on these issues, when they see progress, they tend to be overly optimistic about how the progress will continue. They tend to project into the future. I think we should be more awake to the obstacles.

It depends on exactly what we’re thinking about. It might be that what we expect is some kind of general intelligence that we attach more human behaviours, or human characteristics, to. Or it might be that we don’t need that – it might be something that doesn’t even understand language. But if we get a system that, by using a completely different way of solving problems, achieves the same thing, then we might have the same results as if we were able to create an artificial mind.

For example, the Google driverless car. In around 2000, 2002 and 2004, people were saying this was really difficult to achieve, because it requires common sense. This is where artificial intelligence always fails. One thing is to ask the computer to calculate. The other thing is to ask it to act in an environment.

The problem was that people were thinking, if I have a driverless car, the car would need some kind of common sense. In fact, the way that Google achieved it was by machine learning, data mining – not common sense. It responds very strictly to certain constraints, like do not cross the lines in these situations, stop when there’s something in front, etc. So it’s very, very constrained. And it uses very complex vision. It uses satellite, a lot of things that are actually easy for a machine to use. But it doesn’t use any of the things that maybe a human would use.

Of course, our assessment was wrong about when this was going to come. It’s true, if it had needed common sense, then we wouldn’t have it today. But in the end, the engineers found a way to get there without using common sense. They found a way to perform the task in a different way than we perform it.

That’s very difficult to foresee. If I say, “we will have artificial fiction writers,” people will say, “No, because you need creativity.” But you can achieve creativity not by creativity, but by combining in some algorithmic way certain aspects of different literatures. You can do that very easily with a computer. So sometimes it depends on how well you can find alternative ways of achieving the same result.

Conscious artificial intelligence, if [it’s ever] possible, is going to be a long time coming. But maybe a system that can make decisions – maybe one that requires language understanding, maybe one that just requires certain constraints that are amenable to the way in which a computer works – that can come pretty fast, without people realising it.

Tilli and her boss Nick Bostrom make it their business to warn against the dangers of artificial intelligence – of letting the genie out of the bottle without ensuring that it won’t, one day, outsmart humans to the point of becoming an uncontrollable threat. And they certainly have some high-profile backers in Messrs Musk, Hawking and Gates.

Other experts we’ve canvassed – top academics, corporate executives, and asset managers – are less worried about the far-out prospect of singularity than about the nearer-term prospect of economic disaster: of gainfully employed human beings losing their livelihoods to robots and intelligent machines (a point that Enlitic’s Jeremy Howard raised earlier in this chapter), and of technology creating a terrifying new wealth gap between the haves and the have-nots.

Professor Joanna Bryson, a reader at Bath University who’s also a visiting fellow at Princeton University, is perplexed by the Musk-Hawking-Gates alarm bells:

I don’t understand the doom mongering. Either they’re having mid-life crises, or they’re just trying to solve things. I actually talked to someone who works with Elon Musk who says he’s seriously concerned about it. I seriously thought he was just jumping on the bandwagon to bring attention to his artificial intelligence.

It’s not that it’s a peril. It’s a threat, and it’s a threat that people are already facing now. We don’t have to look into the future for this.

Genome sequencing used to be done by people with PhDs. Now, you can take a program and do an awful lot of the work that you used to have to hire [people with] PhDs to do – to go and match sequences that are similar to each other in a genome. People with PhDs who were doing that [sort of] thing are out of work.

This is the real threat. The real question isn’t whether artificial intelligence is going to destroy the planet – whether Arnold Schwarzenegger-like robots are coming, deciding that they want to be autonomous, have their own country and that we’re using too much petrol. The threat is that loads and loads of people get made redundant.

Even if we built something exactly like a human brain, which I don’t think we would, we are already building things that are putting some people out of work. The big question is: how do we want to construct our society? How do we deal with that fact?

Erik Brynjolfsson of MIT worries that, one day, those with access to technology will dominate the planet in the same way that the wealthiest 1% of people currently do. While he can imagine a singularity occurring at some point in the future, he notes:

I think our attention is better spent on what’s already happening today and which will happen in the next five or 10 years. That is a much sharper set of changes around the economics of society and in particular the changes of inequality and jobs and productivity and growth. Those are the things that are affecting us right now.

You could say that if we continue on our current path, a lot of humans will rise up well before the machines do, because a lot of humans don’t feel like they’re getting a proportional benefit from all these technological improvements. So my view is that you can imagine a future world where robots become more of a threat to our physical lives, but that’s not today, and it’s not in the next 10 years.

I’m not saying it could never happen, and maybe there should be a small group of people studying that. But the challenge we have right now, which is unmistakable, is that hundreds of millions of people are saying that their incomes are stagnating, and one of the big reasons for that is, they’re not using technology effectively to create shared prosperity. We can and should address that. That’s a policy challenge and an entrepreneurial and educational challenge that’s very real and present today. It’s not hypothetical today.

Predictably, academics, entrepreneurs, and investors who are focused specifically on robots are much less scared of them than everyone else. After all, it’s their bread and butter. And they’re much more cool-headed about the machines of the future, too.

Take Kaspar Althoefer, a professor of robotics at King’s College London, whose team recently operated for the first time on a human body using a soft surgical robot. His point: that there’s nothing to fear but fear itself – and that anguish over the future of artificial intelligence could actually get in the way of progress. Needless to say, he doesn’t really agree with the Musk-Hawking-Gates line of reasoning on the so-called singularity:

I find what these colleagues say exaggerated. I’m not afraid of it. I don’t see it happening, not in the short term.

If we were heading towards it, we couldn’t avoid it. What would be the alternative? We would need to stop our research, stop everything, because everything we do could lead us to this point of singularity. All research would have to come to a halt.

We can’t move back to the Stone Age just to make sure that one day we won’t have singularity. I think it would be the wrong approach. We should be relaxed about it. I personally don’t see it coming, but what do I know? I don’t have a crystal ball.

Karen Kharmandarian – lead manager of the Robotics Fund at Pictet Asset Management in Switzerland – is equally unfazed by the prospect of ever-smarter robots:

I was talking to a professor of robotics. For the time being, the trend is towards having algorithms that have the capacity of basic animals. We’re approaching that level. To get from the brain of an animal to the brain of a human will take probably two or three decades. Again, we’re talking about 2050 or 2060 before these devices can have the same type of capacity as a human brain.

That doesn’t mean that you should be worried about these robots taking over the world and displacing people. They’re not terminators. I don’t think these types of dystopian views are something we should be worried about.

Besides, concludes Colin Angle – CEO of iRobot, maker of the Roomba vacuum cleaner – why would human beings create a machine that they couldn’t control?

There’s lots of good science fiction written about those types of concepts. It’s terribly difficult to imagine a situation like that that you couldn’t solve. The accidental singularity where the robots become self-conscious: I chuckle at that a bit, because if it happens – and we are nowhere near having it happen – it’s software that is written and thus can be understood and controlled and limited.

We should be careful. We also shouldn’t be terrified. Artificial intelligence has tremendous potential to help us increase the level of care we get from our healthcare systems, and to do all manner of good. If you think about the discovery of dynamite, you could say, “Isn’t this a terrible thing, because it’s possible to put it under your bed and blow yourself up?” Yes, but why would you do that? We develop ways of carefully storing it and ways of integrating it into society so that it helps us increase our standard of living. There’s a bit of an analogy there.

This is powerful stuff, and thank God it is, because we have powerful challenges that we need to solve. We should responsibly think about what we should be doing with robots and what we shouldn’t be doing with robots.

You shouldn’t be putting in a ton of decision systems and giving them life-or-death power, because the robots are just not intelligent enough to do that. You should put robots in the dangerous situations where they could help better understand what’s going on before a person has to make a decision. It’s easy to imagine ways of abusing robots, just like it’s easy to imagine ways of abusing dynamite. We just have to be a little careful in how we approach these things.

