The threat from AI is real, but everyone has it wrong

By Robert Munro, CEO of Idibon

I’m one of the few (perhaps the only) CEOs of an Artificial Intelligence company who has also worked in post-conflict development. I’ve been watching the “Artificial Intelligence Apocalypse” debate over recent months with interest, as Elon Musk, Bill Gates and even Stephen Hawking have weighed in on the dangers of machine intelligence surpassing our own, and of the machines taking over as a consequence. Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford, goes into detail about the threat in Superintelligence, arguing that machines could surpass human intelligence and suddenly decide to eradicate humans in any number of ways.

Violent conflict rarely breaks out suddenly. Power is ceded gradually, months, years and decades in advance, often making the conflict near inevitable. By contrast, the recent debate has largely portrayed the danger as coming suddenly from autonomous and sentient robots. Commentators outside of the artificial intelligence community have missed that machine intelligence already controls much of our day-to-day lives: we are ceding our decision making to machines we don’t even know are there. My company, Idibon, currently helps organizations like the United Nations process information globally using AI, often in conflict settings. My Stanford PhD focused on processing communications with Natural Language Processing (machine learning applied to language) in disaster and health contexts. Before that I worked in post-conflict development in Sierra Leone and Liberia, and more recently I helped in a number of countries during the Arab Spring uprisings and on global epidemic tracking. The conflicts and outbreaks might have made the news quickly, but they were a long time coming.

The danger from AI is incremental. By focusing on a sudden apocalypse that relies on robots passing some threshold of autonomy or sentience, the debate has been misdirected. I first warned about this in 2012, in a piece posted as a blog on our company site a year later:

“While our predictions about robots may have over-emphasized their human form, their ability to speak, and their seemingly autonomous nature, history did get one part right: we need to be aware of how the balance of control is shifting.”

– Robert Munro, “Where’s my talking robot?”

In the 2012 article, I talked about how AI is already making decisions for humans in education, friendships, news, health, mobility, and conflict. I’m revisiting the situation now, three years later, adding my risk analysis of each context. The risk is two-fold: how much power have we already given over to AI, and how well is the AI aligned with human wellbeing?

Education

Current control: low

Future control: high

Danger: low

In the US, the Graduate Record Examination (GRE) is taken by most students applying to graduate school. One of the graders for the written task is an automated process, grading the quality of your writing by evaluating a complex combination of the words that you use and the structure of your sentences. There is still a human grader for every exam, so it is not AI alone that has determined a million or more educational futures.

The current control is low, but in the future we can expect automated grading to be a standard feature of education. The danger is low: the objective of the AI is aligned with that of the human graders. Wrapping AI into education can also enable personalized and remote education in ways that wouldn’t otherwise scale where teacher-student ratios are poor, which is most of the world.
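
To make the idea concrete, here is a minimal sketch of the kind of scoring an automated grader might perform, combining word-level and sentence-structure features. The feature names, weights and 0–6 scale mapping are illustrative assumptions, not the actual GRE e-rater model:

```python
# A minimal, hypothetical sketch of automated essay scoring.
# Real systems use far richer features and trained models; the
# features and weights here are illustrative assumptions only.

import re

def essay_features(text: str) -> dict:
    """Extract simple word- and sentence-level features from an essay."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "num_words": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_richness": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

def score_essay(text: str) -> float:
    """Combine features into a single score with hand-picked, illustrative weights."""
    f = essay_features(text)
    score = (
        0.002 * f["num_words"]
        + 0.5 * f["avg_word_length"]
        + 0.05 * f["avg_sentence_length"]
        + 2.0 * f["vocabulary_richness"]
    )
    return min(6.0, score)  # cap at a GRE-style 0-6 scale

print(score_essay("Automated grading evaluates the words you use and how you structure sentences."))
```

A real grader is trained against human scores rather than hand-weighted, which is exactly why its objective stays aligned with the human graders it learned from.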

Friendships

Current control: medium

Future control: high

Danger: high

Every social network employs some form of AI to decide what you end up seeing from your friends and which people are recommended as future connections. For some people, this means that the majority of their interactions with friends and family are already being mediated by AI.

The dangers here are high. You aren’t meant to see pictures of your ex having fun with new people: your brain is wired to remember only the good memories, and social networks are bypassing this. The big danger is that social networks aren’t optimizing for your well-being: they are optimizing for the time you spend on the site and (relatedly) the number of ads you click on.
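
As a toy illustration of that misalignment (not any network’s actual ranking code; the post fields and weights are hypothetical), consider a feed ranker whose objective contains only engagement signals. Nothing in the objective represents the viewer’s well-being, so the painful-but-engaging post wins:

```python
# A toy feed ranker that optimizes for engagement, not well-being.
# Real ranking systems are learned models over thousands of signals,
# but the shape of the objective is analogous.

posts = [
    {"id": "vacation_photos_of_ex", "predicted_dwell_seconds": 45, "predicted_ad_clicks": 0.02},
    {"id": "close_friend_life_update", "predicted_dwell_seconds": 12, "predicted_ad_clicks": 0.001},
]

def engagement_score(post: dict) -> float:
    # Time on site and ad clicks drive revenue, so they drive the ranking.
    return post["predicted_dwell_seconds"] + 500 * post["predicted_ad_clicks"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], round(engagement_score(post), 1))
```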

Filtering your news

This is now the same as controlling your friendships. When I wrote the original article three years ago, there were news personalization services that were distinct from social networks, but most no longer exist, and news personalization has largely become something we get via our social networks.

Protecting your health

Current control: low

Future control: high

Danger: low

Machine intelligence will come to help every aspect of healthcare, from global epidemic tracking to point-of-care diagnostics. Allowing doctors to scale their capacity by automating many processes through AI is the only viable way to extend medical services to the majority of the world that does not currently have adequate health care, and for which we are unlikely to improve doctor-patient ratios anytime in the coming decades.

The risk comes from invasions of privacy. This includes allowing AI to profile your risk for diseases early on – overwhelmingly a good thing if done correctly – but then having that information used to deny insurance and care.

Moving you around

Current control: low

Future control: very high

Danger: low

Your GPS and mapping service already uses extensive AI, but you’re still the one in the driver’s seat. At most, AI’s current control is determining how hard to slow or stop when you hit the brake pedal. This will change: soon almost all motorized transport will be mostly or wholly controlled by machine intelligence.

This will be the first ‘talking robot’ that most people will own: you’ll strap yourself inside of it, the doors will lock, and you’ll move down the highway at 100 kilometers an hour. The risk mostly comes from the security of computerized systems, but the connectivity and computerization of transport will happen regardless, and whether or not AI is involved won’t change much.

The accidents that do occur with autonomous vehicles will not follow the same patterns as accidents caused by humans. This is something that we will have to come to terms with ethically: if we reduce road fatalities by 99%, but the remaining 1% are not errors that a human would make, how liable are those machines relative to their human counterparts?

Killing your enemies

Current control: medium

Future control: very high

Danger: very high

In 2011 I was part of a humanitarian response exercise where companies, not-for-profits and military emergency response experts conducted exercises together to explore ways to collaborate in their response to international disasters. I was the only AI person there, but I had an interesting conversation with a high-ranking US military official about drones that used AI. In our case, we were looking at how to use drones to deliver supplies and capture information following disasters, but the talk quickly turned to military uses. This was, of course, where the funding came from. While the military had the ability to set drones to ‘auto’ when firing missiles at locations, this officer had decided that the actual ‘fire’ button must still be manual. In other words, the drone or similar unmanned aerial vehicle (UAV) can fly by itself to the target, but it was not permitted to fire automatically once at that target.

This is where the talk of future militaristic robots also falls short: we already have robots that can kill, and the decision to restrict this was already made 4+ years ago. Contrast this with the recent open letter to prevent an AI “arms race”, which speaks about it as a possibility in the future, and it is clear that even some of the most openly concerned leaders are unaware that they are late to the party. It’s right to continue the debate and make it public, but thinking that the problems will wait until we have sentient robots means waiting too long.

In 2012, I gave the example of how speaker identification was used to capture one of the masterminds of 9/11, who gave information that led to Osama Bin Laden. In the years since, it has come to light that the torture did not produce useful information, so AI might not have been part of how Bin Laden was found. But in many other ways, AI is being used for military intelligence that infringes human rights. The negatives outweigh the positives, and so this is one area where we need to guide the use of AI in the right direction. Looking across borders, the protection of one country’s security is optimized when every other country’s citizens are monitored and manipulated as closely as possible. The most sophisticated belligerent hackers are already using AI to semi-automate their attacks, and many have fallen along nationalistic lines in recent times.

It is difficult to imagine any situation in which machine intelligence will optimize the human rights of all countries’ citizens, rather than short-term national objectives. Idibon will not help a country spy on private citizens, even though several countries have offered us veritable blank checks to help them. I hope that CEOs of other emerging AI companies will take the same stand.

It’s the little things

Technology advances are not always a path to an apocalypse; sometimes they just lead to inequity. For example, every time a small company processes language inputs by assuming that spaces mark word boundaries (as in English), they are not on a path to world destruction, but they are disadvantaging the majority of the world’s languages, as the sketch below shows. Even a company as big as Google has not yet been able to fully untangle itself from early architectural decisions that were made in ignorance of the world’s full linguistic diversity.
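
A minimal illustration of the problem (hypothetical examples, not Idibon’s code): a whitespace tokenizer works for English but silently fails for languages such as Japanese or Thai, which do not separate words with spaces:

```python
# Splitting on whitespace works for English but fails for many languages.
# The Japanese and Thai sentences below (roughly, "the cat sat on the mat")
# contain several words but no spaces, so a whitespace tokenizer treats
# each sentence as a single "word".

def whitespace_tokenize(text: str) -> list:
    return text.split()

print(whitespace_tokenize("the cat sat on the mat"))  # 6 tokens
print(whitespace_tokenize("猫がマットの上に座った"))        # 1 token: Japanese needs a word segmenter
print(whitespace_tokenize("แมวนั่งอยู่บนเสื่อ"))            # 1 token: Thai has no word-boundary spaces
```

Every downstream system built on that assumption – search, spam filtering, translation – quietly degrades for speakers of those languages.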

People are doing the same with AI right now, creating machine intelligence that optimizes for something other than a person’s best interests: the number of times an end-user clicks on an advertisement, the amount of time spent on a particular website, the total amount of money spent on products. This broadly summarizes Google’s, Facebook’s and Amazon’s business models, and the AI used is making decisions that are too complicated for humans to fully understand because of the sheer number of factors that they take into consideration.

Looking forward

We should worry about any technology that we rely on in our daily lives. Our company is the exception in having explicit terms of use that prohibit the infringement of human rights with AI. Silicon Valley is rich in global diversity, but poor in people who have worked in or come from disadvantaged environments, meaning that these conversations don’t happen often enough.
