Would you like to play a game?
I’ve been pitting my wits against some verbal reasoning questions this morning. I’ll explain why in a second – all will become clear.
I’ll give you three questions. Answers at the end of today’s letter. Let’s see how you do.
1. All is to many, as few is to… some, never, none, or always?
2. Prose is to poetry, as conversation is to… song, poem, language, or listening?
3. Earth is to ball, as pancake is to… flat, flag, soccer, or disc?
Disclaimer: I didn’t get all of these right. So no pressure. Well, a little bit of pressure, but the friendly kind.
Anyway, why am I testing my intelligence against these sorts of questions? And why am I asking you to test yours?
The clue is in the question: We’re talking about intelligence. Specifically the artificial kind.
Earlier in the year a team of scientists at the University of Science and Technology in Beijing announced a significant breakthrough in the development of artificial intelligence. Using something called machine learning (or deep learning), they taught a computer to take a verbal reasoning test.
What’s interesting is, for the first time in history, the computer actually beat the average human in a straight-up test.
A little bit of background. There are usually three different types of question in an IQ test. The first category is logic questions, such as spotting patterns. The second is mathematical, such as finding sequences and relationships within numbers.
We’ve been able to create computers that can tackle these sorts of questions for a long time – primarily because there’s usually a logic involved that a computer can be programmed to follow. We can break things down into a series of ‘If A and B are present then do C’ commands.
The final category is verbal reasoning questions of the sort I started this letter with. These are trickier. We can’t boil language down into a series of simple rules or commands. Context is vital. The word ‘share’, for instance, has a variety of different meanings depending on how and where it’s used. The person selling their share of a business in order to pay their fair share of tax may share their story on Facebook. You get the picture.
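One way researchers get machines to handle this is to represent each word as a list of numbers – a “word vector” – learned from huge amounts of text, so that words used in similar contexts end up with similar numbers. Here’s a deliberately tiny sketch of how an analogy question can then be answered with simple arithmetic. The vectors below are completely made up for illustration (real systems learn hundreds of dimensions from billions of words), but the technique – pick the candidate closest to b − a + c – is the real one.

```python
import math

# Toy 3-dimensional "word vectors" -- entirely made up for illustration.
# Axes (roughly): quantity, is-a-quantifier, is-temporal.
vectors = {
    "all":    [1.0, 1.0, 0.0],
    "many":   [0.8, 1.0, 0.0],
    "some":   [0.5, 1.0, 0.0],
    "few":    [0.2, 1.0, 0.0],
    "none":   [0.0, 1.0, 0.0],
    "never":  [0.0, 1.0, 1.0],
    "always": [1.0, 1.0, 1.0],
}

def cosine(a, b):
    """Similarity between two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def solve_analogy(a, b, c, candidates):
    """'a is to b as c is to ?' -- pick the candidate nearest to b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(solve_analogy("all", "many", "few",
                    ["some", "never", "none", "always"]))  # -> none
```

With these toy numbers, “many minus all” is a small step down the quantity axis; take the same step from “few” and you land on “none” – which is indeed the answer to question 1.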
It’s tough for a computer algorithm to get its head around this kind of stuff, if you’ll pardon the expression.
So until recently, humans were the masters of machines when it came to verbal reasoning.
Until this summer. The MIT Technology Review had the story:
“They [the team behind the project] compare this deep learning technique with other algorithmic approaches to verbal reasoning tests and also with the ability of humans to do it. For this, they posed the questions to 200 humans gathered via Amazon’s Mechanical Turk crowdsourcing facility along with basic information about their ages and educational background.
“And the results are impressive. “To our surprise, the average performance of human beings is a little lower than that of our proposed method,” they say.”
The reason for this breakthrough, as I said, is a new way of developing intelligent machines that has exploded onto the scene in recent years.
Processing power has increased to such a degree that instead of programming a computer with an arduous and ever more complex series of commands, scientists can now build computer algorithms capable of learning and improving over time.
That has profound implications for the world.
It shifts our thinking away from asking “how do we design this machine to do what we want it to?” and makes us ask instead “how do we teach this machine to achieve that?”
To grossly oversimplify things – and apologies if I’m doing a disservice to any computer programmers out there! – it works by analysing vast, vast data sets over time, and developing an almost intuitive understanding of the patterns and relationships at work.
Show it enough pictures of a dog in the park and it’ll learn to spot that dog in a totally different location, in a context it hasn’t come across before. Eventually, it’ll be able to spot and differentiate between different species of dogs, even if it’s never seen them before. It starts to learn patterns and relationships beyond the immediate sphere of what you’ve shown it.
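To make that concrete, here’s a deliberately tiny sketch of the principle in pure Python. The features and numbers are made up, and real deep learning works on raw pixels with neural networks rather than two hand-picked measurements – but the core idea is the same: the machine is never given a rule, only labelled examples, and it generalises to inputs it has never seen.

```python
# Toy "learning from examples". Each animal is reduced to two made-up
# features: (ear_length, snout_length).
training_data = {
    "dog": [(2.0, 5.0), (2.5, 6.0), (1.8, 5.5), (2.2, 4.8)],
    "cat": [(3.0, 2.0), (3.5, 2.5), (2.8, 1.8), (3.2, 2.2)],
}

def train(data):
    """'Learning' here is just averaging the examples of each class."""
    centroids = {}
    for label, examples in data.items():
        n = len(examples)
        centroids[label] = tuple(sum(f) / n for f in zip(*examples))
    return centroids

def predict(centroids, features):
    """Classify a never-before-seen input by its nearest class average."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

model = train(training_data)
print(model.keys())
print(predict(model, (2.1, 5.2)))  # a dog it has never seen -> dog
```

No one ever told the program what a dog is; it worked the boundary out from the examples. Swap the averaging for a deep neural network and the four examples for millions of photos, and you have the shape of the real thing.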
That’s the dog-spotting industry turned on its head, then. The world will never be the same.
The point is, this has applications in any number of industries. The world produces an astonishing amount of data every single day. And not just pictures of dogs walking in the park. I’m talking financial reports, intelligence bulletins, medical tests, social media posts, blogs, news stories, YouTube videos… the list is endless. It’s all data that a machine can learn from and master.
Let’s take a really profound example. A couple of years ago, a team of scientists at Stanford used an early form of deep learning algorithm to help doctors diagnose patients with breast cancer.
In simple terms, they did this by taking a huge amount of data they had about people who’d already been diagnosed – things like microscopic images of tissue samples – and allowing the algorithm to analyse it. That’s not so different to the way you’d teach a human; you’d give them all the data and information you could and allow them to learn from it.
The difference is, a computer can analyse an absolutely immense amount of data. Hundreds of thousands – millions, potentially even billions – of images, patient histories and the like. Over time it can then start to understand the patterns and hidden connections between all that data. It can learn to spot things even some of the best doctors miss.
The upshot of the Stanford study? The computer analyses were more accurate in diagnosing patients than humans were.
(The model used was called the “Computational Pathologist” or “C-Path”, if you’re interested in looking it up.)
And this is an area of technology that’s moving on all the time. One of the most impressive speakers in California last week was the scientist and entrepreneur Jeremy Howard. Howard is the co-founder and CEO of a company called Enlitic (named one of MIT Tech Review’s top 50 smartest companies of 2015).
Enlitic is developing machine learning algorithms capable of hugely enhancing our ability to diagnose illness – more quickly, more accurately, earlier and with fewer mistakes than human doctors.
It takes the same approach as the “C-Path” system, by “learning” from the vast amounts of data the medical industry churns out – the thousands of x-rays, MRIs, CT scans and other data. Howard believes the system could be used initially to filter through information and flag the patients a doctor should be taking a closer look at, saving vast amounts of time and money.
But that’s just the beginning.
In one study Howard told us about, the algorithm “competed” against a team of four world-leading doctors. The doctors had a 7% “false negative” rate.
The machine? Zero. It had no false negatives at all.
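The study’s raw counts aren’t given here, but the metric itself is simple arithmetic – the share of genuinely ill patients the screener fails to flag. A quick sketch with made-up illustrative numbers:

```python
def false_negative_rate(true_positives, false_negatives):
    """Share of genuinely ill patients the screener failed to flag."""
    return false_negatives / (false_negatives + true_positives)

# Made-up counts: out of 100 patients who really had the disease...
doctors = false_negative_rate(true_positives=93, false_negatives=7)
machine = false_negative_rate(true_positives=100, false_negatives=0)

print(f"doctors: {doctors:.0%}, machine: {machine:.0%}")
# doctors: 7%, machine: 0%
```

A false negative is the costliest kind of mistake in screening – a sick patient sent home – which is why that zero matters so much.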
That’s incredible. Especially when you consider that the machine was up against four world renowned experts in their field – the kind of people most patients would give anything to be treated by. Enlitic’s algorithm could, in theory, be entirely open to use by anyone in the world.
It’s not hard to see where this is going. This kind of highly intelligent, specialised narrow AI is going to radically change not just medicine but all sorts of different industries.
And one final thought to end on. This is still an emerging industry. It’s in its infancy. In the verbal reasoning example I showed you earlier, a machine beat the average human. And even Enlitic’s technology – as sophisticated as it is – is still learning.
But this is where Moore’s Law comes in.
We know that computer processing power is increasing at an exponential rate, doubling every 18 months or so. That allows us to extrapolate trends into the future and understand how they’ll develop. Remember those three words that are behind all change in the tech industry: faster, cheaper, smaller.
If our current computing power allows us to build this kind of intelligent learning machine, capable of competing with the best and brightest doctors on the planet…
What will the technology be capable of in five years’ time, three doublings away, when computing speeds are eight times what they are today? When the same technology is a fraction of the cost or size? When it’s cheap and portable enough to be accessed by everyone on the planet?
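Under that doubling assumption, the arithmetic is easy to sketch:

```python
def moores_law_factor(months, doubling_period_months=18):
    """How many times faster computing gets after `months`, assuming it
    doubles every `doubling_period_months` (the ~18-month figure above)."""
    return 2 ** (months / doubling_period_months)

print(moores_law_factor(54))         # three doublings -> 8.0
print(round(moores_law_factor(60)))  # a full five years -> roughly 10x
```

Three doublings at 18 months apiece is four and a half years for an eightfold gain – and the compounding only accelerates from there.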
I’ll be in touch with my thoughts on that soon. In the meantime, why don’t you tell me what you think? Yesterday’s piece prompted some fascinating responses. You can reach me at email@example.com. I’ll share the most relevant messages tomorrow.
PS Answers: 1) None. 2) Song. 3) Disc.
How’d you do?
Category: Artificial intelligence