In 1955, John McCarthy coined the term “artificial intelligence” (AI) in a grant proposal that he co-wrote with his colleague Marvin Minsky and a group of other computer scientists seeking funding for a workshop they hoped to hold at Dartmouth College the following summer. Their choice of words set in motion decades of semantic squabbles (“Can machines think?”) and fueled anxieties over malicious robots such as HAL 9000, the sentient computer in the film 2001: A Space Odyssey, and the cyborg assassin played by Arnold Schwarzenegger in The Terminator. If McCarthy and Minsky had chosen a blander phrase—say, “automaton studies”—the concept might not have appealed as much to Hollywood producers and journalists, even as the technology developed apace.

But McCarthy and Minsky weren’t thinking about the long term. They had a much narrower motive for coming up with a new phrase: they were reluctant to invite Norbert Wiener to the program. Wiener was one of the founders of the nascent field, a child prodigy who had graduated from college at age 14 and received a Ph.D. in philosophy from Harvard four years later. To describe his work on how animals and machines rely on feedback mechanisms for control and communication, Wiener had chosen to use the word “cybernetics,” a term that derives from the ancient Greek word for “helmsman.” He titled his 1948 book Cybernetics, and after it became a surprise bestseller, other researchers began applying the term to their attempts to get computers to process information much in the way that a human brain does.

There was no question that Wiener was brilliant. The trouble was that he also happened to be a pugnacious know-it-all who would have made the summer at Dartmouth miserable. So McCarthy and Minsky avoided Wiener’s term, in part to make it easier to justify shutting him out. They weren’t studying cybernetics; they were studying artificial intelligence.

It wasn’t only Wiener’s personality that was a problem. The Dartmouth program was aimed at practitioners, and Wiener’s work had in recent years taken a more philosophical bent. Since the publication of Cybernetics, Wiener had begun to consider the social, political, and ethical aspects of the technology, and he had reached some dark conclusions. He worried about Frankenstein monsters, composed of vacuum tubes but endowed with sophisticated logic, who might one day turn on their creators. “The hour is very late, and the choice of good and evil knocks at our door,” he wrote in 1950. “We must cease to kiss the whip that lashes us.”

Wiener later backed away from his most apocalyptic warnings. But today, as AI has begun to invade almost every aspect of life in developed societies, many thinkers have returned to the big questions Wiener started asking more than half a century ago. In Possible Minds, 25 contributors, including a number of the most prominent names in the field, explore some of the eye-opening possibilities and profound dilemmas that AI presents. The book provides a fascinating map of AI’s likely future and an overview of the difficult choices that will shape it. How societies decide to weigh caution against the speed of innovation, accuracy against explainability, and privacy against performance will determine what kind of relationships human beings develop with intelligent machines. The stakes are high, and there will be no way forward in AI without confronting those tradeoffs.

A MIND OF ITS OWN?

Ironically, even though McCarthy and Minsky’s term entered the lexicon, the most promising AI technique today, called “deep learning,” is based on a statistical approach that was anathema to them. From the 1950s to the 1990s, most of AI was about programming computers with hand-coded rules. The statistical approach, by contrast, uses data to make inferences based on probabilities. In other words, AI went from trying to describe all the features of a cat so that a computer could recognize one in an image to feeding tens of thousands of cat images to an algorithm so the computer can figure out the relevant patterns for itself. This “machine learning” technique dates back to the 1950s but worked only in limited cases then. Today’s much more elaborate version—deep learning—works exceptionally well, owing to staggering advances in computer processing and an explosion of data.
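
To make the contrast concrete, consider the following toy sketch, which is not drawn from the book: it assumes Python with NumPy and scikit-learn, and the "cat" features and training data are invented for illustration. The first function is a hand-coded rule of the kind the early AI researchers wrote; the model below it infers its own rule from labeled examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Symbolic-era approach: a human writes down the rule for "is this a cat?"
def rule_based_is_cat(has_whiskers, has_pointy_ears, barks):
    return has_whiskers and has_pointy_ears and not barks

# Statistical approach: the machine infers a rule from labeled examples.
# Each row is [has_whiskers, has_pointy_ears, barks]; 1 = cat, 0 = not a cat.
# (The data here are made up purely for illustration.)
X = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [0, 0, 1],
              [1, 1, 1],
              [0, 1, 0]])
y = np.array([1, 1, 0, 0, 0, 1])

model = LogisticRegression().fit(X, y)

print(rule_based_is_cat(True, False, True))   # the rule gives a hard yes/no
print(model.predict_proba([[1, 0, 1]]))       # the model gives a probability, an inference
```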

The success of deep learning has revived Wiener’s fears of computer monsters running amok, and the biggest debates in AI today revolve around safety. The Microsoft co-founder Bill Gates and the late cosmologist Stephen Hawking famously fretted about the dangers. At a conference in 2014, the technology entrepreneur Elon Musk described AI as “summoning the demon.” Others, such as the AI researchers Stuart Russell and Max Tegmark, along with the engineer Jaan Tallinn, believe that AI represents a serious threat to humanity that requires immediate attention.


Broadly speaking, there are two types of AI. The first is artificial general intelligence, known as AGI: systems that can think, plan, and respond like a human and also possess “superintelligence.” An AGI system would know much of the information that exists, be able to process it at lightning speed, and never forget any of it. Imagine Google with a mind (and maybe a will) of its own. The second form of AI is narrow AI: systems that do discrete tasks very well, such as self-driving cars, voice recognition technology, and software that can make medical diagnoses using advanced imaging. The fear about AGI is that it may evolve on its own, outside of human control. The worry about narrow AI is that its human designers will fail to perfectly specify their intent, with catastrophic consequences.

No consensus exists among experts about whether AGI is even possible. But those who believe it is possible worry that if an AGI system did not share human values (and there is no inherent reason why it would), it might cause trouble. “Humans might be seen as minor annoyances, like ants at a picnic,” writes W. Daniel Hillis, a computer scientist, in his contribution to Possible Minds. “Our most complex machines, like the Internet, have already grown beyond the detailed understanding of a single human, and their emergent behaviors may be well beyond our ken.”

The trouble comes in specifying such a system’s goal, a challenge engineers call “value alignment.” The fear is not necessarily that AI will become conscious and want to destroy people but that the system might misinterpret its instructions.

Russell has dubbed this “the King Midas problem,” after the ancient Greek myth about the king who received his wish to turn everything he touched into gold—only to realize that he couldn’t eat or drink gold. The canonical illustration in the literature is an AGI system that is able to perform almost any task that is asked of it. If a human asks it to make paper clips and fails to specify how many, the system—not understanding that humans value nearly anything more than paper clips—will turn all of earth into a paper clip factory, before colonizing other planets to mine ore for still more paper clips. (This is different from the threat of narrow AI run amok; unlike AGI, a narrow AI system programmed to produce paper clips would not be capable of doing anything more than that, so intergalactic stationery production is out.) It’s a ludicrous example, but one that’s bandied about seriously.


MAKING AI SAFE FOR HUMANS

On the other side of the debate are critics who dismiss such fears and argue that the dangers are minimal, at least for now. Despite all the optimism and attention surrounding them, current AI systems are still rudimentary; they’ve only just begun to recognize faces and decipher speech. So as Andrew Ng, an AI researcher at Stanford, puts it, worrying about AGI is similar to worrying about “overpopulation on Mars”: it presupposes a whole lot that would need to happen first. Researchers should be trying to make AI work, he contends, rather than devising ways to stunt it.

The psychologist Steven Pinker goes a step further, dismissing the dire concerns over AGI as “self-refuting.” The bleak scenarios, he argues,

depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so idiotic that they would give it control of the universe without testing how it works; and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.

The idea that an out-of-control AGI system would harm humanity depends on speculation as much as science; committing substantial resources to preventing that outcome would be misguided. As Pinker notes, dystopian prophecies ignore the role that norms, laws, and institutions play in regulating technology. More convincing arguments take those factors into account and call for basic safeguards, implemented rigorously. Here, the history of cybersecurity offers a useful parallel. When engineers created the Internet, they overlooked the need to build strong security into the software protocol. Today, this poses a major vulnerability. AI designers should learn from that mistake and bake safety into AI at the outset, rather than try to sprinkle it on top later.

Russell calls for “provably beneficial AI,” a concept that can be applied to both AGI and narrow AI. Engineers, he writes, should provide AI systems with a clear main purpose—for example, managing a city’s power grid—and also explicitly program them to be uncertain about people’s objectives and to possess the ability to learn more about them by observing human behavior. In so doing, the systems would aim to “maximize human future-life preferences.” That is, a power-grid AI should find ways to lower power consumption instead of, say, wiping out humans to save on electricity bills. Thinking in these terms “isn’t scaremongering,” writes Tegmark. “It’s safety engineering.”
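
The intuition can be sketched in a few lines of code. What follows is a toy illustration, not Russell’s formal model: it assumes Python with NumPy, and the candidate objectives, observed actions, and probabilities are all invented. The point is simply that the agent treats the human’s objective as something to be inferred from behavior rather than as a fixed instruction.

```python
import numpy as np

# Hypothetical candidate objectives for a power-grid assistant.
objectives = ["minimize_cost", "minimize_outages", "minimize_emissions"]
belief = np.array([1/3, 1/3, 1/3])   # the agent starts out genuinely unsure

# Made-up likelihoods: how probable each observed human action would be
# under each candidate objective (same order as the list above).
likelihood = {
    "approved_battery_purchase":    np.array([0.2, 0.6, 0.7]),
    "rejected_cheap_coal_contract": np.array([0.3, 0.4, 0.9]),
}

# Watching human behavior narrows the agent's uncertainty via Bayes' rule.
for action in ["approved_battery_purchase", "rejected_cheap_coal_contract"]:
    belief = belief * likelihood[action]
    belief = belief / belief.sum()

print(dict(zip(objectives, belief.round(2))))

# The agent acts on its best current estimate of the human's objective,
# and defers to the human while that estimate remains too uncertain.
if belief.max() < 0.8:
    print("Objective still unclear: ask the human before acting.")
```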

The cognitive scientist Daniel Dennett proposes a more creative solution to the safety conundrum. Why not require AI operators to be licensed, just as pharmacists and civil engineers are? “With pressure from insurance companies and other underwriters,” he writes, regulators could “oblige creators of AI systems to go to extraordinary lengths to search for and reveal weaknesses and gaps in their products, and to train those entitled to operate them.” He cleverly suggests an “inverted” version of the Turing test. Instead of evaluating a machine’s ability to imitate human behavior, as the test normally does, Dennett’s version would put the human judge on trial: until a person who is highly trained in AI can spot the flaws in a system, it can’t be put into production. The idea is a thought experiment, but a clarifying one.

The benefit of such standards is that systems would undergo inspections to prevent mistakes. It would be hard to know, however, when to make these extra safety steps obligatory. Surely, the algorithms that guide a self-driving car should be regulated in this way. But what about the ones that determine which videos a website such as YouTube will recommend to users? Yes, regulations could offer societal benefits—such as the downgrading of Flat Earth Society videos on YouTube—but if an algorithm commissar had to approve every line of a company’s code, it could start to feel like overreach.

Missing almost entirely from Possible Minds is any discussion of another dilemma relating to the regulation of AI: how to weigh privacy against efficiency and accuracy. The more data an AI system has access to, the better it performs. But privacy regulations often discourage the collection and use of personal data. Minimizing the quantity and type of data that can be used in AI systems may seem wise in an era when companies and countries are vacuuming up all the personal data they can and paying little attention to the risks of misuse. But if regulations winnowed the amount of data that was processed, leading to less accurate performance for products such as medical diagnostics, society might want to reconsider the tradeoff.

INTO THE UNKNOWN

Another tension in AI, and one that runs through Possible Minds, concerns the transparency and explainability of how AI systems reach their conclusions. This is an epistemological concern, not merely a technical or normative one. That is to say, the question is not whether people are clever enough to understand how a system works; it is whether the system’s operation is knowable at all. As Judea Pearl, a major figure in computer science and statistics, writes in his contribution: “Deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed.”

Nontransparent systems can reach correct answers: human minds occasionally do get things right, after all. But with AI, if the system fails, it might do so in unexpected, mysterious, and catastrophic ways. If we cannot understand how it works, can we fully trust it? This is different from AI’s “black box” problem, in which bias in the data may lead to unfair outcomes, such as discriminatory loan, hiring, or sentencing decisions. That problem can be addressed, as a first step, by requiring that such systems be open to inspection by a competent authority. But the fundamental unknowability of AI systems presents a deeper, more unsettling problem. The scientific project emerged in the seventeenth century when empirical evidence was placed above knowledge based on faith, which at the time was usually sanctioned by the Catholic Church. Does the advent of AI mean we need to place our trust once again in a higher power that we cannot interrogate for answers?


The trouble is that the mathematics behind deep learning is inherently obscure. Deep-learning systems (also known as “neural networks,” since they are loosely modeled on the neurons and connections in the brain) consist of many nodes arranged in interconnected layers. Such a system starts by modeling simple features and then builds toward more detailed ones: it might begin analyzing an image by identifying an edge, then a shape, then spots on the surface of that shape, until it can eventually detect the contents of the image. After matching patterns across an enormous batch of previously supplied images (whose contents are usually identified and labeled), the system can predict the contents of a new image with a high probability of success. Hence, a deep-learning system can identify a cat without having to be told which specific features to look for, such as whiskers or pointy ears. Those features are captured by the system itself, through a series of discrete statistical functions. The system is trained by the data, not programmed. Its answers are inferences.
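
For readers who want to see the skeleton of such a system, here is a deliberately tiny sketch in Python with NumPy. The four-“pixel” images and their labels are invented, and a real deep-learning system would have millions of weights rather than a few dozen, but the principle is the same: the relevant features end up encoded in learned numerical parameters, not in rules anyone wrote down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-"pixel" images; label 1 if the bright pixels sit on the left half.
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
y = np.array([[1], [1], [0], [0]], dtype=float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Two layers of interconnected nodes, initialized with random weights.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # first layer: low-level features
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # second layer: combines them into a prediction

for _ in range(5000):                  # "training": nudge the weights to fit the labeled data
    h = sigmoid(X @ W1 + b1)           # hidden layer: intermediate features the system finds itself
    p = sigmoid(h @ W2 + b2)           # output: probability that an image matches the pattern
    grad_out = (p - y) / len(X)        # gradient of the cross-entropy loss at the output
    grad_hid = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid;  b1 -= 0.5 * grad_hid.sum(axis=0)

# The learned weights, not hand-written rules, now capture the pattern.
print(p.round(2))
```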

And it works. That’s the good news. The bad news is that the mathematical functions are so complex that it is impossible to say how a deep-learning machine obtained its result. There is such a jumble of different paths that can lead to a decision that retracing the machine’s steps is basically infeasible. Moreover, the system can be designed to improve based on feedback, so unless one freezes its performance and prevents such changes, it is impossible to review how it reached its output. As George Dyson, a historian of computing, writes in his essay, “Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.” Although a lot of research is going into “explainable AI,” so far the math bears out what might be named “Dyson’s Law.”

The implications are significant. Society faces a tradeoff between performance and explainability. The dilemma is that the most obscure systems also offer the best performance. Sadly, this matter is poorly treated in Possible Minds. Many of its contributors vaunt transparency as a value in itself. But none delves into the complexity of the issue or grapples with the notion that transparency might create inefficiency. Consider a hypothetical AI system that improves the accuracy of a diagnostic test for a fatal medical condition by one percentage point. Without the technology, there is a 90 percent chance of making an accurate diagnosis; with it, there is a 91 percent chance. Are we really willing to condemn one out of 100 people to death just because, although we might have saved him or her, we wouldn’t have been able to explain exactly how we did so? On the other hand, if we use the system, nine out of 100 people might feel they’ve been misdiagnosed by an inscrutable golem.
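
The arithmetic behind that dilemma is worth spelling out, using only the hypothetical numbers from the example above.

```python
# Restating the hypothetical diagnostic example as arithmetic (numbers from the text).
patients = 100

correct_without_ai = round(0.90 * patients)   # 90 accurate diagnoses without the system
correct_with_ai = round(0.91 * patients)      # 91 accurate diagnoses with it

additional_correct = correct_with_ai - correct_without_ai   # 1 person helped by the opaque system
unexplained_misses = patients - correct_with_ai             # 9 people misdiagnosed by a system no one can explain

print(additional_correct, unexplained_misses)   # 1 and 9
```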

This raises deeper questions about the relationship between humans and technology. The reliance on ever more complex technological tools reduces our autonomy, since no one, not even the people who design these tools, really understands how they work. It is almost axiomatic that as computing has advanced, humans have become increasingly divorced from “ground truth,” the reality of the world that data try to represent but can do so only imperfectly. The new challenge is qualitatively different, however, since AI technologies, at their most advanced levels, do not merely assist human knowledge; they surpass it.

BRAVE NEW WORLD

A sense of respect for the human mind and humility about its limitations runs through the essays in Possible Minds. “As platforms for intelligence, human brains are far from optimal,” notes Frank Wilczek, a Nobel laureate in physics. At the same time, the book is filled with a healthy deprecation of the glistening new tool. “Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force,” writes the computer scientist Alex Pentland.

So AI is good, but bad, too. It is clever but dim, the savior of civilization and the destroyer of worlds. The mark of genius, as ever, is to carry two contradictory thoughts in one’s mind at the same time.

KENNETH NEIL CUKIER is Senior Editor at The Economist and a co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think.