Attendees are seen on a black and white FLIR high definition camera monitor displayed at the 7th annual Border Security Expo in Phoenix, Arizona March 12, 2013.
Joshua Lott / Reuters

In December 2010, I attended a training session for an intelligence analytics software program called Palantir. Co-founded by Peter Thiel, a techno-libertarian Silicon Valley billionaire, Palantir is a slick toolkit of data visualization and analytics capabilities marketed to and widely used by the NSA, the FBI, the CIA, and other U.S. national security and policing institutions.

The training session took place in Tysons Corner, Virginia, just outside Washington, D.C., in a Google-esque office space complete with scooters, a foosball table, and a kitchen stocked with energy drinks. I was taking the course to explore the potential uses of the tool for academic research.

The dashboard for the New York Police Department's 'Domain Awareness System' (DAS) is seen in New York May 29, 2013.
Shannon Stapleton / Reuters
We spent the day conducting a demonstration investigation. We were first given a range of data sets and, one by one, we uploaded them into Palantir. Each data set showed us a new analytic capability of the program: thousands of daily intelligence reports were disaggregated into their core pieces of information and correlated with historical data; satellite images were overlaid with socio-economic, air strike, and IED data. And in this process, the promise of Palantir was revealed: with more data comes greater clarity. For analysts who spend their days struggling to interpret vast streams of data, the Palantir demo was an easy sell.
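
To make that first step concrete, here is a minimal, hypothetical sketch in Python of the kind of link analysis the demo performed. The reports, entities, and weighting scheme are invented for illustration and say nothing about Palantir's actual internals; the point is only how disaggregated reports can be correlated into a link chart.

```python
from collections import Counter
from itertools import combinations

# Toy "intelligence reports," each already reduced to its tagged entities
# (the disaggregation step described above). All data here is invented.
reports = [
    {"person:A", "place:market", "phone:555-0101"},
    {"person:B", "place:market", "phone:555-0101"},
    {"person:A", "person:B", "place:depot"},
]

# Correlate reports by tallying entity co-occurrences; frequently
# co-occurring pairs become the weighted edges of a link chart.
links = Counter()
for entities in reports:
    for pair in combinations(sorted(entities), 2):
        links[pair] += 1

for (a, b), weight in links.most_common(3):
    print(f"{a} <-> {b}: appear together in {weight} report(s)")
```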

In our final exercise, we added surveillance data detailing the planned movements of a suspected insurgent. Palantir correlated the location and time of these movements with the planned movements of a known bomb maker. And there the training ended. It was quite obvious that the next step, in “real life,” would be violent. The United States would send in a drone or Special Forces team. We in the demo, on the other hand, just went home.
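
Stripped of scale and polish, the correlation at the heart of that final exercise is a spatiotemporal join. The sketch below, with invented data types and thresholds, shows the basic logic of flagging two movement tracks that coincide in space and time; it illustrates the general technique, not any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

@dataclass
class Sighting:
    person: str
    time: datetime
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def correlate(track_a, track_b, max_km=1.0, max_gap=timedelta(hours=1)):
    """Flag pairs of sightings from two tracks that coincide in space and time."""
    return [
        (a, b)
        for a in track_a
        for b in track_b
        if abs(a.time - b.time) <= max_gap
        and haversine_km(a.lat, a.lon, b.lat, b.lon) <= max_km
    ]
```

Real systems add indexing, uncertainty estimates, and many more data sources, but the underlying inference, that proximity in space and time implies association, is the same, and so are its failure modes.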

This program raises many challenging questions. Much of the data used was inputted and tagged by humans, meaning that it was chock-full of human bias and error. The algorithms on which the system is built are themselves coded by humans, so they too are subjective. Perhaps most consequential, however, is that although the program being demonstrated was intended to inform human decision-making, it need not be used that way. Increasingly, such tools, and the algorithms that power them, are being used to automate violence.

Palantir, which takes its name from the legendary “seeing stone” in J.R.R. Tolkien’s Lord of the Rings, is just the latest iteration of the age-old myth of an all-knowing crystal ball. That myth underlies both the rapid expansion of state surveillance and the increasing use of algorithms and artificial intelligence to fight and to govern. For governments, the promise is control over a digital space that is increasingly decentralized and complex. But it may come at the cost of the very legitimacy from which the state draws its power.

THE AUTOMATED STATE

Palantir is a window into the state’s thinking about technology. Threatened by the increasing power of perceived nefarious digital actors, Western states have sought to control the network itself: as they put it in documents leaked by Edward Snowden, to “Collect it All; Process it All; Exploit it All; Partner it All; Sniff it All; Know it All.”

Various types of directional antennas are pictured on the roof of a skyscraper in Berlin, November 5, 2013.
Fabrizio Bensch / Reuters
The problem, of course, is that digital omniscience is incredibly difficult to accomplish. To even aspire to it, one needs two things: a huge amount of data and the tools to give these data meaning. 

First, the massive amount of data. From the Snowden leaks, we know that the U.S. government is tapping into the backbones of our communications systems, servers, and transatlantic cables. It is sniffing wireless signals in cities and running broad data mining operations across online and telecommunications traffic. But this is only the tip of the iceberg.

Wide-area surveillance tools are capable of recording high-resolution imagery of vast areas below them. Since 2004, the United States has deployed 65 Lockheed Martin blimps in Afghanistan that provide real-time video and audio surveillance across 100 square kilometers (just over 38 square miles) at a time. These Persistent Threat Detection Systems can record activity below them for periods of up to 30 days. Meanwhile, on the ground, the vast arrays of cameras in our cities are being linked together in police databases and control centers, such as the NYPD Real Time Crime Center, which processes data from over 6,000 surveillance cameras, as well as license plate readers that provide real-time tracking of vehicle movement.

And, of course, Silicon Valley is in the mix. A company called Planet Labs has recently deployed a network of 100 toaster-sized satellites that will take daily high-resolution images of everywhere on Earth. The goal is to launch thousands, creating a persistent, near-real-time surveillance tool available to anyone online. The company calls these satellites Doves. A driverless Google car collects nearly 1 GB of data per second about the world around it, and the Internet of Things is bringing data collection into our homes. A recent Samsung smart TV shipped with a warning against discussing “personal or other sensitive information” in its vicinity, as the audio could be captured and transmitted to a third party.

What we are in the process of building is a vast real-time, 3-D representation of the world. A permanent record of us.

But where does the meaning in all this data come from? For this, one needs ever more complex algorithms, automation, machine learning, and artificial intelligence. Such technologies are powering a wide range of new governance tools that can trace and record the movements of people, detect patterns, ascribe risk to behaviors outside of programmed norms, and even predict future events.
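
At its simplest, ascribing risk to behaviors outside of programmed norms is anomaly detection against a statistical baseline. The toy Python sketch below, with all names and numbers invented, shows the core move, and why a flawed baseline silently produces flawed “risk.”

```python
from statistics import mean, stdev

def risk_scores(baseline, observations):
    """Score observations by how many standard deviations they sit
    from the historical baseline; high scores flag behavior outside
    the "norm."

    Note the embedded judgment: whoever chose the baseline data has
    already decided what counts as normal."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observations]

# Invented example: daily counts of some monitored behavior.
history = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
print(risk_scores(history, [5, 14, 22]))  # -> approximately [0.0, 11.0, 20.8]
```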

And increasingly, such algorithms are used to kill. Russia guards five ballistic missile installations with armed one-ton robots, able to travel at speeds of 45 kilometers (about 28 miles) per hour, using radar and a laser range-finder to navigate, analyze potential targets, and fire machine guns without a human pulling the trigger. The Super Aegis 2 automated gun tower can lock onto a human target up to three kilometers (almost two miles) away in complete darkness and automatically fire a machine gun, rocket launcher, or surface-to-air missile. Unmanned aerial vehicles, ranging from autonomous bombers to insect-sized swarm drones, are increasingly able to collect and process data and kill on their own.

The pretense is that these capabilities are reserved for war zones. But the pervasive nature of these tools, combined with the expanding legal mandates of the war on terrorism, means that battlefield capabilities are creeping into domestic policing and governance, often in the legal gray areas of borders. For example, the U.S. Department of Homeland Security tethered a wide-area surveillance blimp 2,000 feet above the desert in Nogales, Arizona. On its first night in use, the system identified 30 suspects, who were brought in for questioning. There are now calls to redeploy the 65 surveillance blimps used in Iraq and Afghanistan to U.S. Customs and Border Protection to patrol the U.S.-Mexico border.

The consequences of the growing capabilities for algorithmic governance and violence are significant. First, acts of war have become spatially and conceptually boundless. The once legally and normatively established lines between war and peace and between domestic and international engagement are disappearing.

A flock of pigeons flies with a prototype "parcelcopter" of German postal and logistics group Deutsche Post DHL in Bonn December 9, 2013.
Wolfgang Rattay / Reuters
Second, digital representations, and the biases, values, and ambiguities built into them, are becoming acts of governance and violence in themselves, rather than simply contributors to them. This is leading us toward predictive governance, based on unaccountable and often unknowable algorithms. Although the United States currently has a directive that humans must be a part of any lethal decision in war, this ignores all of the algorithm-based decisions that lead up to that ultimate point. If those algorithms are biased, flawed, or based on incorrect data, then the human will be just as wrong as the machine.

Third, spaces of dissent in society are being eroded. Those pushing the bounds of what is deemed acceptable behavior are increasingly caught within the grasp of algorithms meant to identify deviancy. We are already seeing changes in behavior among investigative journalists and activists. At a recent Columbia School of Journalism event in a series called “Journalism After Snowden,” the editors of The New York Times, The Washington Post, and Politico detailed the challenges of protecting sources in an environment of increasing state surveillance and the effect this has on their ability to do accountability reporting. Acts of digital civil disobedience are increasingly being targeted and prosecuted not as protest but as terrorism. When punishments are vastly disproportionate to crimes, an important democratic function is lost.

Finally, as the author Daniel Suarez argues, a combination of automated remote force deployment and artificial intelligence could allow the state to kill preemptively and anonymously. This is a path to automated war and a harbinger of a recentralization of power, one that requires us to have a serious conversation about the power and accountability of the algorithms deployed by both state and corporate actors.

THE PERILS OF ALGORITHMIC GOVERNANCE 

The modern state system is built on a bargain between governments and citizens. States provide collective social goods, and in return citizens grant states legitimacy, holding state power accountable through a system of norms, institutions, regulations, and ethics. This bargain created order and stability out of what had been an increasingly chaotic global system.

But in light of the newer challenges created by the rise of digital technologies, the state is pushing the bounds of this social contract. In the emerging system, moral codes, social norms, and human judgment are being augmented or replaced by hidden algorithms, placing tremendous power in the hands of the people and the public and private institutions that oversee them.

If algorithms represent a new ungoverned space, a hidden and potentially ever-evolving, unknowable public good, then they are an affront to our democratic system, which requires transparency and accountability in order to function. A node of power that exists outside of these bounds is a threat to the notion of collective governance itself. This is, at its core, a profoundly undemocratic arrangement, and one that states will have to engage with seriously if they are to remain relevant and legitimate to the digital citizenry who give them their power.

TAYLOR OWEN is Assistant Professor of Digital Media and Global Affairs at the University of British Columbia and a Senior Fellow at the Columbia Journalism School. He is the author, most recently, of Disruptive Power: The Crisis of the State in the Digital Age.