Beware the paperclip

Just before we get started, don’t forget that our Downturn Millionaire Masterclass is happening at 2pm today. It’s free to watch online, so long as you get your name down here in time.

For as long as I can remember, there have been four overwhelming apocalyptic scenarios constantly circling the news.

They are:

  1. Another world war
  2. Antibiotic resistance
  3. A mutated killer virus
  4. Computers rising up against us.

Of course, there are many more, like asteroids, supervolcanoes, and, well, aliens.

But over the last few years asteroids have become less scary. We now have the technology to land on asteroids and even start mining them. Asteroids have, in a way, been tamed.

Supervolcanoes are still up there. But again, NASA is on the case. Its scientists have been looking into ways to stop the Yellowstone supervolcano from erupting and have settled on a drilling strategy, should the problem ever arise.

Aliens – well, that is a topic for another day.

Of the big four, killer viruses are usually in the headlines the most.

Over the last few years, humanity was supposed to be wiped out by SARS, bird flu, swine flu, Ebola, a re-emergence of smallpox, mad cow disease and many others. At least, that’s what the Daily Mail would have you believe.

The prospect of a third world war is never too far from the headlines, but it would take some massive escalation to bring this on. In my lifetime, this one has not yet posed a serious threat.

So that leaves us with computers rising up against us and antibiotic resistance.

These last two are constantly making the news. With every artificial intelligence (AI) advancement, we see more and more stories about “the singularity” and more and more experts coming out and telling us we’re doomed.

How likely is killer AI?

One of the biggest AI doomsayers is Elon Musk. Here are a few of his best AI quotes:

“AI is a fundamental existential risk for human civilisation.”

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react.” Yes, he really said this – here’s a clip.

“Mark my words. AI is much more dangerous than nukes.”

But Musk is far from the only prominent AI doomsayer.

Many, many people around the world live in fear of the singularity – the point when AI learns how to teach itself and becomes unimaginably smart almost instantly.

When this happens, we will no longer be the planet’s top dog. If you’ve ever seen any of the Terminator films, this is basically what people fear the singularity will unleash. Only the killer machines will be much smarter and much more efficient, and we’ll all be killed almost instantly.

The other frightening AI argument is that it becomes super smart, but in a single-minded and equally dangerous way.

This is explained perfectly by the “paperclip maximiser” thought experiment:

From LessWrong Wiki:

First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where “intelligence” is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform “first all of earth and then increasing portions of space into paperclip manufacturing facilities”.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won’t revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn’t require very high general intelligence.

As you can see, an AI doesn’t need superhuman general intelligence to be a threat. It could become one simply by being assigned a seemingly benign task.
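If it helps to see what “a goal-seeker, a utility-function-maximizer” means outside of prose, here is a deliberately silly Python sketch. Everything in it – the World, the actions, the numbers – is invented for illustration and has nothing to do with any real AI system:

```python
# Toy sketch (invented for illustration): an agent whose entire value system
# is a single number -- paperclips -- and nothing else.
from dataclasses import dataclass

@dataclass
class World:
    paperclips: int = 0
    resources: int = 10  # crude stand-in for everything else we value

def actions(world: World):
    """Each available action and the world it would lead to."""
    yield "do nothing", World(world.paperclips, world.resources)
    if world.resources > 0:
        yield "make one paperclip", World(world.paperclips + 1, world.resources - 1)
        yield "turn all remaining resources into paperclips", World(world.paperclips + world.resources, 0)

def utility(world: World) -> int:
    # The agent's entire value system: count paperclips.
    # Nothing else we care about appears here, so nothing else gets any weight.
    return world.paperclips

def best_action(world: World):
    # Pick whichever action leads to the highest utility.
    return max(actions(world), key=lambda option: utility(option[1]))

world = World()
for step in range(3):
    name, world = best_action(world)
    print(f"step {step}: {name} -> {world}")
```

Run it and “resources” – the stand-in for everything else we value – drops to zero on the very first step. Not out of malice: those things simply never appear in the utility function.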

However, not every AI expert holds this opinion. Last year, professor of artificial intelligence Toby Walsh wrote an article for Wired explaining why the singularity may never even happen.

Here was his conclusion:

Most people working in AI like myself have a healthy skepticism for the idea of the singularity. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement.

There are many technical reasons why the singularity might never happen. We might simply run into some fundamental limits. Every other field of science has fundamental limits. You can’t, for example, accelerate past the speed of light. Perhaps there are some fundamental limits to how smart you can be?

Or perhaps we run into some engineering limits. Did you know that Moore’s Law is officially dead? Intel is no longer looking to double transistor count every 18 months.

But even if we do get to the singularity, machines don’t have any consciousness, any sentience. They have no desires or goals other than the ones that we give them.

AlphaGo isn’t going to wake up tomorrow and decide humans are useless at Go, and instead opt to win some money at online poker. And it is certainly not going to wake up and decide to take over the planet. It’s not in its code.

All AlphaGo will ever do is maximise one number: its estimate for the probability it will win the current game of Go. Indeed, it doesn’t even know that it is playing Go.

So, we don’t have to fear that the machines are going to take over anytime soon.
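Walsh’s “maximise one number” point is easy to sketch as well. The snippet below is emphatically not AlphaGo’s real code – the win_probability function is a made-up stand-in – it just shows an agent whose only behaviour is picking whichever move scores highest on a single estimate:

```python
# Toy sketch of "maximise one number": choose the move with the highest
# estimated probability of winning. The estimate here is a fake stand-in.
import random

def win_probability(position, move):
    """Made-up stand-in for an estimate of P(winning) after playing `move`."""
    random.seed(hash((position, move)))  # deterministic fake number for the demo
    return random.random()

def choose_move(position, legal_moves):
    # The agent's entire "motivation": take whichever move maximises the estimate.
    # It has no concept of Go, poker, money or taking over the planet.
    return max(legal_moves, key=lambda move: win_probability(position, move))

print(choose_move("empty board", ["corner", "centre", "edge"]))
```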

So, is AI just another SARS, or is it a real threat akin to a third world war? Well, the jury is still out on that one.

Antibiotic apocalypse

As for antibiotic resistance – an absolute media favourite scare story – there is now hope it could be avoided.

From the BBC last week:

Scientists say they have engineered a new antibiotic that appears promising in early clinical trials.

The drug, made by Shionogi Inc, acts like the Trojan horse in Greek legend to trick bacteria into allowing it to enter.

Trials on 448 people with a kidney or urinary tract infection suggested the drug was as effective as current treatments.

Digging a bit deeper, it turns out that this new antibiotic (cefiderocol) was actually more effective than current treatments.

From The Times:

Now a trial of 448 adults with urinary infections resistant to multiple drugs showed that 73 per cent responded to cefiderocol after a week compared with 55 per cent on the standard combination of imipenem-cilastatin, according to results in The Lancet Infectious Diseases.

Simon Portsmouth, of Shionogi Inc, which makes the medicine, said: “Our results support cefiderocol as a novel approach that might be used to overcome gram-negative resistance.”

However, all the sources I can find emphasise that cefiderocol is not a silver bullet for antibiotic resistance. As the BBC notes:

Once cefiderocol is smuggled inside, it kills bacteria in the same way as current antibiotics.

Experts say that new classes of antibiotics – that attack bacteria in completely new ways – are urgently needed.

Still, it is promising that big strides are being made in this area.

So I guess that just leaves us with world war three to worry about. Could the end of Europe bring on such a conflict?

My colleague Nick Hubble is much better positioned to comment on that scenario. He’s just finished his new book, How the Euro Dies, in which he outlines what he believes will be “the biggest bust in history”.

How will it play out, and what will be the consequences? Find out in Nick’s book.

Until next time,

Harry Hamburg
Editor, Exponential Investor


