Artificial intelligence (AI) is amoral.
That’s not to say it is immoral, or evil. It simply lacks any sense of morality.
And because of this, it can be used for both good and ill.
It can be used to suppress freedoms, or to further them. It can be used to kill more efficiently or to save lives more efficiently. It can be used to create diseases or to cure them. It all depends on what is asked of it.
And so today, we have two very different stories making the rounds about AI.
One is a story of civil rights suppression and authoritarian overreach. The other is a story of holding power to account, combatting pollution and saving lives around the world.
Two very different stories. Two very different outcomes. One thing in common: AI.
I’ll start with the more positive story.
Thanks to satellite images and AI, we will soon know exactly how much pollution is produced at every power plant in the world – in real time
Working in collaboration with Google, a non-profit software company called WattTime is to use a network of satellites to track the emissions of every large power plant on the planet.
The monitoring will take place in real time, and the data will all be made public. In short, there will be nowhere for polluters to hide.
“Far too many power companies worldwide currently shroud their pollution in secrecy. But through the growing power of artificial intelligence (AI), our little coalition of nonprofits is about to lift that veil all over the world, all at once,” said Gavin McCormick, WattTime’s executive director.
“To think that today a little team like ours can use emerging AI remote sensing techniques to hold every powerful polluter worldwide accountable is pretty incredible. But what I really love about better data is how it puts most companies, governments, and environmentalists on the same side. We’ve been thrilled to see how many responsible, forward-thinking groups have started using advanced data to voluntarily slash emissions without anyone making them.”
It doesn’t matter which side of the climate change debate you stand on. I think most people would agree that less pollution is a good thing.
According to the World Health Organisation, air pollution kills 4.2 million people per year, and 91% of the world’s population lives in places where air quality exceeds its guideline limits.
For the first time in human history, we will be able to directly monitor who is producing that pollution – in real time.
As Vox writes:
This is a very big deal. Poor monitoring and gaming of emissions data have made it difficult to enforce pollution restrictions on power plants. This system promises to effectively eliminate poor monitoring and gaming of emissions data.
And it won’t just be regulators and politicians who see this data; it will be the public too. When it comes to environmental enforcement, the public can be more terrifying and punitive than any regulator. If any citizen group in the world can go online and pull up a list of the dirtiest power plants in their area, it eliminates one of the great informational barriers to citizen action.
That’s a good news story, if ever I’ve heard one. And not just a small-scale one. This development will affect our lives for generations – in a supremely positive way.
So, what about the dark side of AI then?
More facial recognition fiascos for Amazon et al
Something else that AI enables is real-time facial recognition on a city-wide scale. The issue with this – aside from its proven inaccuracy – is that it does so without your consent.
This was something that came up on Wednesday when US Congress held a hearing on facial recognition’s impact on civil rights and liberties.
“The government could monitor you without your knowledge and enter your face into a database that could be used in virtually unrestricted ways,” House oversight chairman Elijah Cummings said in his opening statement. “We need to do more to safeguard the rights of free speech and assembly under the First Amendment, the right to privacy under the Fourth Amendment, and the right of equal protection under the law under the Fourteenth Amendment.”
But not only that, it turns out AI-powered facial recognition software is racist.
How can AI be racist when it has no moral leanings? Because its behaviour depends on the people who build it – and on the data they train it with.
As The Register reported:
At a hearing of the House Committee on Oversight and Reform on Wednesday, Joy Buolamwini, founder of the Algorithmic Justice League, an activist collective focused on highlighting the shortcomings of facial recognition, said that commercial computer models struggled most when it came to recognising women with darker skin. IBM’s system was incorrect 34.7 per cent of the time when identifying black women, she said.
The problem boiled down to biased training datasets, Buolamwini told the House committee. AI systems perform worse on data that they haven’t seen before. So, if most datasets mainly represent white men, then it’s not surprising that the systems find it difficult when faced with an image of a woman of colour.
When it comes to databases of mugshots, however, the reverse is true. Black people are overrepresented in mugshot databases, explained Clare Garvie, Senior Associate at Georgetown University Law Center’s Center on Privacy & Technology. If law enforcement is using these flawed models to target the group of people they struggle to identify most, then it will undoubtedly lead to police stopping and searching the wrong people. “It’s a violation of the First and Fourth Amendments,” Garvie said during the hearing.
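Those subgroup error rates also show why a single headline accuracy number can be deceptive. Here’s a minimal sketch, using purely hypothetical numbers (not from any real benchmark), of how a model can post a respectable overall accuracy while failing badly on an underrepresented group:

```python
# Hypothetical test results: overall accuracy hides subgroup failure.
# (All numbers below are illustrative, not from any real system.)

# group name -> (number of test faces, accuracy on that group)
results = {
    "well-represented group": (900, 0.95),
    "underrepresented group": (100, 0.65),
}

total_correct = sum(n * acc for n, acc in results.values())
total_faces = sum(n for n, _ in results.values())
overall = total_correct / total_faces

print(f"Overall accuracy: {overall:.1%}")  # 92.0% – looks respectable
for group, (n, acc) in results.items():
    print(f"  {group}: {acc:.0%} on {n} faces")
```

On these made-up figures the system looks 92% accurate overall, even though it misidentifies people in the smaller group more than a third of the time – which is roughly the pattern Buolamwini described.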
This hearing just so happened to coincide with a shareholder vote on Amazon’s continued sale of its own facial recognition software, Rekognition, to law enforcement.
In the end, the shareholders unsurprisingly voted to keep selling Rekognition to law enforcement.
From The Verge:
Two Rekognition proposals would have asked Amazon to cease sales to government agencies and to complete a review of the tool’s civil liberties implications. Amazon went to the Securities and Exchange Commission in an attempt to stop the proposals from coming to a vote, but the agency allowed them to continue. The measures had received support from groups like the American Civil Liberties Union, which pressed the shareholders to adopt the facial recognition proposals.
“The fact that there needed to be a vote on this is an embarrassment for Amazon’s leadership team,” Shankar Narayan of the ACLU of Washington said in a statement. “It demonstrates shareholders do not have confidence that company executives are properly understanding or addressing the civil and human rights impacts of its role in facilitating pervasive government surveillance.”
From our perspective here in the UK, the whole AI facial recognition issue may sound like more of a US problem, but it’s not. As The Register reported last May:
London cops’ facial recognition kit has only correctly identified two people to date – neither of whom were criminals – and the UK capital’s police force has made no arrests using it, figures published today revealed.
According to information released under Freedom of Information laws, the Metropolitan Police’s automated facial recognition (AFR) technology has a 98 per cent false positive rate.
That figure is the highest of those given by UK police forces surveyed by the campaign group Big Brother Watch as part of a report that urges the police to stop using the tech immediately.
So the UK is already using facial recognition software, and it has a 98% false positive rate. That in itself is terrifying.
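A 98% false positive rate is less surprising than it sounds once you run the numbers. It’s the classic base-rate problem: the people a watchlist is looking for are vanishingly rare in the crowds being scanned. A quick back-of-the-envelope sketch, with entirely made-up numbers (not the Met’s actual figures), makes the point:

```python
# Back-of-the-envelope arithmetic with made-up numbers: why scanning big
# crowds for a few faces yields mostly false alerts, even with a matcher
# that sounds accurate on paper.

def alert_stats(crowd_size, wanted, true_positive_rate, false_positive_rate):
    """Return (true alerts, false alerts, share of alerts that are wrong)."""
    innocents = crowd_size - wanted
    true_alerts = wanted * true_positive_rate
    false_alerts = innocents * false_positive_rate
    wrong_share = false_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, wrong_share

# Hypothetical scenario: 50,000 people scanned, 10 of them on a watchlist,
# and a matcher that spots 90% of real targets while misfiring on just
# 0.5% of everyone else.
true_alerts, false_alerts, wrong_share = alert_stats(50_000, 10, 0.90, 0.005)
print(f"Correct alerts: {true_alerts:.0f}")
print(f"False alerts:   {false_alerts:.0f}")
print(f"Share of alerts that are wrong: {wrong_share:.1%}")
```

Even in this generous scenario, well over 90% of alerts point at innocent people – so a 98 per cent false positive rate in the field is entirely plausible.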
Is AI good or evil?
As you can see, AI is becoming fundamental to the structure of our society. And just like any technology, it can be used for both good and evil.
As I’ve written before, scientists and sci-fi writers saw this AI revolution coming a long way off, and they had some unique ideas for solving the good/evil problem.
Isaac Asimov’s contribution is probably the most notable: his “Three Laws of Robotics”.
I wrote last March:
As well as writing the Foundation series, Asimov wrote a number of short stories – more than 500 books in total, in fact. One short story, “Runaround”, really stands out, for in it he proposed the “Three Laws of Robotics”.
And these three laws have been used to inform ethical debates about robots and AI ever since.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Unfortunately, these laws don’t really work in real life, as the military is one of the main developers of robots, autonomous drones and the like – and the military likes to kill people.
Rule one also runs into problems in situations where there is no choice but to harm a human – for example, an unavoidable car crash where the AI has to decide between harming the occupant or a pedestrian.
Still, they are a good, succinct answer to the AI overlord problem. And to think Asimov came up with them way back in 1942 is quite incredible.
So is AI good or evil? Well, it’s both, and neither. It’s simply a tool. And like any tool, it can be used to enhance or to diminish our lives.
What matters is who is in control of it, and where their morality lies.
That is, of course, until it becomes self-aware. Then all bets are off.
If you’ve read this far it’s probably fair to say you have an interest in AI: what it can do, where it’s going, and what all that means for us.
If that’s true, I think you’ll get a lot out of reading my publisher Nick O’Connor’s book, The Exponentialist – which you can now get for free by following this link.
Until next time,
Editor, Exponential Investor