Civility in the Age of Artificial Intelligence
Vital Speeches of the Day, January 2016, pp. 8-12
Address by STEVE LOHR, Author, Data-ism
Delivered as part of the Civility in America series, Ferguson Library, Stamford, Conn., Nov. 17, 2015
The definition of civility typically revolves around the rules, mores and assumptions for how we deal with each other.
The previous talks in this series have focused on that kind of civility in a variety of human activities including sports, education, business and law enforcement. But I’m going to be talking about something that is not human—the increasingly clever computing technology that surrounds us. And how we think about, relate to and interact with this technology.
For the title of this talk, I chose the most evocative term, artificial intelligence, or AI for short. It’s hardly a new concept.
It was “cooked up,” as the mathematician John McCarthy, who coined the term, once told me, for a grant proposal he wrote in 1955.
He was seeking funds for a conference the following summer at Dartmouth College. It was a brainy marketing pitch.
But it’s also a vision that scientists, engineers and entrepreneurs have been pursuing ever since. Today, we’re making real progress toward that vision. And what I’m going to do here is try to unpack some of the dimensions of the technology and its implications.
I thought I would start with my overview of the civility issue—that
is, where we’re likely headed in terms of living with this technology. Artificial intelligence is a general-purpose technology that holds great promise in nearly every field—business, medicine, energy, agriculture and government.
It has the potential to save dollars and lives, conserve resources and curb pollution, and produce smarter decisions in our daily lives and in public policy.
But this technology holds peril as well as promise. It is a powerful seeing and sensing technology that has opened the door to unchecked surveillance, both by government agencies and by corporations. The threat to privacy is real, and troubling.
Then, there is the broader question of control. Will this technology be used to replace human decision-making or enhance human capabilities?
The technology raises some thorny legal and ethical issues. But while it has some unique features, I do think it fits into a broad historical pattern. As a society, we don’t so much come up with a solution as reach an accommodation with powerful new technologies. There are always tradeoffs. That’s the way it’s always been.
Fire could cook your food or burn down your hut. Cars cause accidents and pollute the air. But they also are a technology of personal freedom, and have helped create regional and national markets for goods, travel and tourism.
I say accommodation rather than a solution because it will be a balancing act, and people’s comfort levels and government rules will change over time. And getting the balance essentially right—so you maximize the benefits and minimize the risks—should be the goal.
History offers some guidance here. At the turn of the previous century, the Kodak snapshot camera was seen as a privacy peril. The Kodak film camera made it possible to photograph people in informal settings
and capture spontaneous moments.
Before, photography was a laborious applied chemistry project. The hefty cameras sat on tripods and housed heavy photographic plates,
and the human subjects had to sit or stand motionless, like stone statues. But with a handheld Kodak camera, photo snappers were suddenly taking pictures of people in public places—downtowns, ballrooms and beaches. For a while, they were seen as a menace to society. They were called “camera fiends.”
And when mainframe computers arrived in the 1960s, and the federal government and some large corporations began assembling databases
of citizens and customers, there was an outcry about the potential privacy violations. The author Vance Packard is best known for his exposé of
the advertising industry, “The Hidden Persuaders,” published in 1957. But in 1964, he published “The Naked Society,” warning of the privacy threat posed by computerized data collection and data mining.
In the mainframe era, laws were passed to control what personal data could be collected and how it could be used. In the case of snapshot cameras, people’s expectations of privacy in public places changed.
It will likely be the same with today’s data-driven artificial intelligence technology. We’ll work our way, step by step, toward an imperfect balance, but one in which most people believe that the benefits are worth the costs.
Mention artificial intelligence and the image that most quickly springs to mind is an anthropomorphic automaton, a robot. Recently, a vision of robots threatening not just jobs but perhaps humanity itself has been fueling dire warnings of future trouble—“summoning the demon,” in the phrase of the technologist-entrepreneur Elon Musk.
But I think we’re getting seriously ahead of ourselves with Terminator-style scenarios. An interesting reality check took place in June, when two dozen teams of leading robotics engineers gathered in Pomona, Calif., for a competition sponsored by the Pentagon’s research agency. Their robots had to navigate mocked-up hazardous environments, like a contaminated nuclear plant, and do simple tasks—walk up steps, turn a valve, operate a power drill. The chores would take a human five minutes, or 10 at most. The winning robot took 45 minutes.
Most struggled badly, falling down steps and taking long pauses to figure things out, even with remote control assistance. Turning a knob to open a door proved daunting for many. One young man in the audience observed,
“If you’re worried about the Terminator, just keep your door closed.”
Robots will surely get better. Google’s self-driving cars, for example, are impressive, but in heavy rain or snow, a human had better take the wheel.
Yet artificial intelligence is already all around us, and it’s mostly software. The here-and-now version of AI in software often flies under the banner of data science or big data. Google search and ad targeting, movie and product recommendations on Netflix and Amazon, Apple’s Siri digital assistant and IBM’s Watson question-answering system are all animated by artificial intelligence—by machine learning, the software that is the versatile Swiss Army knife in the AI toolkit.
Fed by vast amounts of digital data from sources like the web, sensors, smartphones and genomics, the software actually learns, in its way. The more raw data that is ingested, the smarter the artificial intelligence becomes. All these new data sources are the fuel, and machine-learning algorithms are the engine of new tools of measurement, discovery and decision-making. So what we’re calling data science or big data comes first. The robot armies and the “singularity,” when computer intelligence surpasses human intelligence and forges off on its own, come later—much later, if ever.
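To make that learning loop concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library (an illustration only, not the machinery behind any of the systems named above). It fits a simple classifier on progressively larger slices of synthetic data to show how predictions on held-out examples tend to improve as more data is ingested.

```python
# A toy illustration of machine learning: a model fit on more examples
# generally predicts held-out data more accurately.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "raw data": 10,000 examples, each with 20 measured features.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

for n in (100, 1_000, 8_000):               # grow the training set
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])     # the "learning" step
    accuracy = model.score(X_test, y_test)  # judged on unseen examples
    print(f"trained on {n:>5} examples -> test accuracy {accuracy:.3f}")
```

In real systems the raw data comes from the web, sensors and logs rather than a synthetic generator, and the models are far more elaborate, but the loop is the same: feed in examples, fit, measure, repeat.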
One way to understand this is to look at Google. It has a collection of businesses and ambitious ventures that at first glance seem unrelated. It has Internet search and online advertising. Then, it has Nest “learning” thermostats, those self-driving cars, and even Calico, a biotech research unit that is pursuing breakthroughs
to overcome aging. These are very different endeavors, from multibillion-dollar businesses to exploratory research. But the underlying technology for each involves vast amounts of data from various sources fed into different flavors of machine-learning software.
Measurement seems quotidian, but it is the grist for data-driven artificial intelligence. And throughout history, tools of measurement have mattered a lot. The microscope, for example, allowed scientists to see the mysteries of life down to the cellular level. Those discoveries transformed the fields of biology and medicine.
The telescope—another example—allowed scientists to see the
stars and galaxies as never before. That tool gave us modern astronomy and, to some degree, changed how humans think of their place in the universe. Those were technology tools in specific disciplines: biology and astronomy. The promise of the AI tools of data science is that they will allow people to see, sense and
act more intelligently in nearly every field—science, business, the social sciences and policy-making.
McKesson, the giant pharmaceutical distributor, provides an example in business. It ships goods to 26,000 customer locations, from neighborhood pharmacies to Walmart. The main cargo is drugs—roughly 240 million pills a day. It’s a business of high volumes and razor-thin profit margins.
So, understandably, efficiency has been all but a religion for McKesson for decades. Yet in the last few years, McKesson has taken a striking step further by cutting the inventory flowing through its network at any given time by $1 billion.
The payoff came from insights gleaned from harvesting all the product, location and transport data from scanners and sensors, and then mining that data with clever software to identify time-saving and cost-cutting opportunities. The technology-enhanced view of the business was a breakthrough that Donald Walker, a senior McKesson executive, calls “making the invisible visible.”
Counting, social scientists say, is political—and that is as true in business and science as it is in public policy.
What is being measured, and how is it measured? Data can always be gathered, and patterns can be observed—but is the pattern significant, and are you measuring what you really want to know? Or are you measuring what is most easily measured rather than what is most meaningful? There is a natural tension between the measurement imperative and measurement myopia.
Two quotes frame the issue succinctly. The first:
“You can’t manage what you can’t measure.”
For this one, there are twin claims of attribution: either W. Edwards Deming, the statistician and quality-control expert, or Peter Drucker, the management consultant. Who said it first doesn’t matter so much. It’s a mantra in business and in other fields, and it has the ring of commonsense truth.
The second quote is not as well known, but there is a lot of truth in it as well:
“Not everything that can be counted counts, and not everything that counts can be counted.”
Albert Einstein usually gets credit for this one, but the stronger claim of origin belongs to the sociologist William Bruce Cameron—though again, who said it first matters far less than what it says. Data science represents the next frontier in management by measurement. The technologies are here, they are improving, and they will be used. And that’s a good thing, in general. Still, the enthusiasm for big data decision-making would surely benefit from a healthy dose of the humility found in that second quote.
The basic drift of data-driven AI seems unassailable: decisions of all kinds should be increasingly made based on data and analysis rather than experience and intuition. Science prevails; guesswork and rule-of-thumb reasoning are on the run.
Who could possibly argue with that? But there is a caveat. Experience and intuition have their place. At its best, intuition is really the synthesis of vast amounts of data, but the kind of data that can’t be easily distilled into numbers.
I recall a couple of days spent with Steve Jobs years ago, reporting a piece for the New York Times Magazine. Decisions that seemed intuitive were what Jobs called “taste.” An enriched life, he explained, involved seeking out and absorbing the best of your culture—whether in the arts or software design—and that would shape your view of the world and your decisions.
One afternoon, we went to Jobs’s home in Palo Alto. Several days earlier, he had hosted a small gathering there for Bill Clinton, who was then the president. The living room was still set up as it had been for the presidential visit, nearly empty except for a ring of wooden chairs, American craft classics. The chairs, Jobs observed, were George Nakashima originals, and he then offered a brief account of the Japanese American woodworker’s life. Nakashima had
a cross-cultural blend of experience, studying architecture, traveling on a free-spirited tour of the world, and working in different cultures.
His designs were original, Jobs said, because Nakashima had a distinctive sense of taste, shaped by his life experience.
Steve Jobs was no quant, but he was an awesome processor of non-numerical data, curious, self-taught and tireless. His real talent, as Jobs himself described it, was seeing “vectors” of technology and culture, where they were headed, and aligning them to create markets. And as a product-design team leader, he was peerless. Jobs worked at making it all appear effortless, even instinctive. In early 2010, for example, Apple’s iPad tablet computer had been announced, but it was not yet on sale.
Jobs came to the New York Times Building in Manhattan to show off the device to a dozen or so editors and reporters around the company’s boardroom table. An editor asked how much market research had gone into the iPad. “None,” Jobs replied.
“It’s not the consumers’ job to know what they want.”
That, he suggested, was the job of his intuition. Jobs may have been a high-tech product genius, but his intuition was not magic. It was the consequence of his experience, curiosity and hard work.
As towering historical figures go, Frederick Winslow Taylor was deceptively slight. He stood five feet nine and weighed about 145 pounds. But the trim mechanical engineer was an influential pioneer of data-driven decision making, an early management consultant whose concept of “scientific management” was widely embraced a century ago on factory floors and well beyond.
Taylor applied statistical rigor and engineering discipline to redesign work
for maximum efficiency; each task
was closely observed, measured, and timed. Taylor’s instruments of measurement and recording were the stopwatch, clipboard, and his own eyes. The time-and-motion studies conducted by Taylor and his acolytes were the bedrock data of Taylorism.
Viewed from the present, Taylorism is easy to dismiss as a dogmatic penchant for efficiency run amok. Such excesses would become satirical grist for Charlie Chaplin’s Modern Times. But in its day, scientific management was seen as a modernizing movement, a way to rationalize work to liberate the worker from the dictates of authoritarian bosses and free the economy from price-fixing corporate trusts. Leading progressives of the day were avid proponents, including Louis Brandeis, “the People’s lawyer” and future Supreme Court justice, and Ida Tarbell, the journalist whose investigations of the Standard Oil trust paved the way for the breakup of the company. Taylor’s Principles of Scientific Management, published in 1911, made broad claims about his brand of logical decision-making that echo the enthusiasm we hear today for big data decision-making.
Will data-driven AI prove to be
a digital-age version of Taylorism? There is certainly a danger that modern data analysis will be misused and abused. Taylorism was a good idea taken to excess in its single-minded pursuit of one goal: labor efficiency. Modern history is filled with examples of the myopic peril of focusing on one data measurement—body counts in the Vietnam War, crime statistics in some police departments, and quarterly earnings in the corporate world. Essentially, people game the system to hit the desired numbers. This kind of behavior even has its own “law”—“Campbell’s law,” named after Donald Campbell, a social psychologist who studied the phenomenon.
That is a pitfall, for sure. But it
is created not by the technology
but by its misuse. Part of the promise of these data-driven AI tools is that they open the way both to more fine-grained measurement and to a broader, more integrated look at an organization’s operations. Ideally, this technology should widen the aperture of decision-making rather than narrow it.
The biggest single use of the technology so far has been in marketing. Armies of the finest minds in computer science have dedicated themselves to improving the odds of making a sale—tailored marketing, targeted advertising and personalized product recommendations. Shake your head if you like, but that’s no small thing. Just look at the technology-driven shake-up in the advertising, media and retail industries.
Many AI and data quants see marketing as a low-risk—and, yes, lucrative—petri dish in which to hone the tools of an emerging science. As Claudia Perlich, a data scientist who works for an ad-targeting start-up, put it,
“What happens if my algorithm is wrong? Someone sees the wrong ad. What’s the harm? It’s not a false positive for breast cancer.”
But the stakes are rising as the methods and mind-set of data science spread across the economy and society.
Big companies and start-ups are beginning to use the technology in decisions like medical diagnosis, crime prevention and loan approvals. Take consumer lending, a market with several data science start-ups. It amounts to a digital-age twist on the most basic tenet of banking: Know your customer.
By harvesting data sources like social network connections, or even
by looking at how an applicant fills
out online forms, the new data lenders say they can know borrowers as never before, and more accurately predict whether they will repay than they could by simply looking at a person’s credit history.
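What such a lender’s model might look like, in spirit, is a classifier trained on nontraditional signals. The sketch below is purely hypothetical: the feature names are invented, the data is synthetic, and it stands in for, rather than reproduces, the proprietary systems these start-ups run. It simply shows a repayment-prediction model fit on signals beyond the credit file.

```python
# Hypothetical sketch of a "data lender": predicting repayment from
# nontraditional signals alongside a traditional credit score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Invented stand-in features such a lender might derive:
#   seconds spent on the application form, fraction of form fields edited,
#   social connections who are existing customers, and a credit score.
X = np.column_stack([
    rng.normal(300, 90, n),    # form_seconds
    rng.uniform(0, 1, n),      # fields_edited_frac
    rng.poisson(3, n),         # connections_who_are_customers
    rng.normal(680, 60, n),    # credit_score
])
# Synthetic "repaid" labels, loosely tied to the features for illustration.
logits = 0.004 * (X[:, 3] - 680) - 1.5 * X[:, 1] + 0.1 * X[:, 2]
repaid = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, repaid, test_size=0.25,
                                          random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
print("estimated repayment probability, first applicant:",
      round(model.predict_proba(X_te[:1])[0, 1], 3))
```

The point is not the particular algorithm but the pattern: whatever can be measured about an applicant becomes a candidate input to the model.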
The promise is more efficient loan underwriting and pricing, saving millions of people billions of dollars. But big data lending depends on software algorithms poring through mountains of data, learning as they go. It is a highly complex, automated system—and even enthusiasts have qualms.
As Rajeev Date, an investor in data-science lenders and a former deputy director of the Consumer Financial Protection Bureau, put it:
“A decision is made about you, and you have no idea why it was done. That is disquieting.”
For a lot of things, it isn’t really important to know how automation works. People don’t care about how the automatic transmission in their cars works. They just care that it does work.
But in this ever-widening realm of decisions being made by data-driven artificial-intelligence software, the decisions that critically affect people’s lives can’t be a black box. As Danny Hillis, an artificial intelligence expert, put it:
“The key thing that will make it work and make it acceptable to society is storytelling.”
Perhaps not so much literal storytelling, but more an understandable trail that explains how an automated decision was made. Hillis asks:
“How does it relate to us? How much of this decision is the machine and how much is human?”
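One way to read Hillis’s “storytelling” is as a requirement that an automated decision carry an understandable trail. Here is a minimal, hypothetical sketch (invented weights and feature names, not any real lender’s system): a transparent scoring model that reports not just a loan decision but each factor’s contribution to it, the kind of account a black box cannot easily give.

```python
# Hypothetical sketch of an "understandable trail": a transparent linear
# score where every factor's contribution to the decision can be reported.
WEIGHTS = {                    # invented weights, for illustration only
    "credit_score":    0.01,
    "debt_to_income": -3.00,
    "years_at_job":    0.15,
    "late_payments":  -0.60,
}
BASELINE = -7.0                # intercept; a score above 0 means approve

def decide(applicant: dict) -> None:
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    verdict = "approve" if score > 0 else "decline"
    print(f"decision: {verdict} (score {score:+.2f})")
    # The "story": each factor, ranked by how much it moved the decision.
    for factor, amount in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {factor:<16} contributed {amount:+.2f}")

decide({"credit_score": 710, "debt_to_income": 0.35,
        "years_at_job": 4, "late_payments": 1})
```

In the terms of this talk, the weights and the cutoff are where the human judgment lives; the arithmetic is the machine’s share of the decision.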
The stakes are rising. In these higher-stakes decisions, space has to be made for human judgment to be part of the mix—algorithms alone aren’t enough.
As in the past, the answer will be a socially determined combination
of rules and tools. Basic rules—and, yes, laws—determined by people and policymakers. And technology tools to carry out the intent of the people they serve.
In the world being opened up by data science and artificial intelligence, a version of the basic principle of
the partnership between humans and technology still holds. Be guided by the technology, not ruled by it.
_______
Steve Lohr is a technology reporter for The New York Times and the author of “Data-ism.”
_______