A hundred humanoid communication robots called Robi perform a synchronized dance during a promotional event called 100 Robi, for the Weekly Robi magazine, in Tokyo, January 20, 2015.
Yuya Shino / Reuters

Robots have the potential to greatly improve the quality of our lives at home, at work, and at play. Customized robots working alongside people will create new jobs, improve the quality of existing jobs, and give people more time to focus on what they find interesting, important, and exciting. Commuting to work in driverless cars will allow people to read, reply to e-mails, watch videos, and even nap. After dropping off one passenger, a driverless car will pick up its next rider, coordinating with the other self-driving cars in a system designed to minimize traffic and wait times—and all the while driving more safely and efficiently than humans.

Yet the objective of robotics is not to replace humans by mechanizing and automating tasks; it is to find ways for machines to assist and collaborate with humans more effectively. Robots are better than humans at crunching numbers, lifting heavy objects, and, in certain contexts, moving with precision. Humans are better than robots at abstraction, generalization, and creative thinking, thanks to their ability to reason, draw from prior experience, and imagine. By working together, robots and humans can augment and complement each other’s skills.

A robot in the Robotic Kitchen prototype created by Moley Robotics cooks a crab soup at the company's booth at the world's largest industrial technology fair, the Hannover Messe, in Hanover, April 13, 2015.
Wolfgang Rattay / Reuters

Still, there are significant gaps between where robots are today and the promise of a future era of “pervasive robotics,” when robots will be integrated into the fabric of daily life, becoming as common as computers and smartphones are today, performing many specialized tasks, and often operating side by side with humans. Current research aims to improve the way robots are made, how they move themselves and manipulate objects, how they reason, how they perceive their environments, and how they cooperate with one another and with humans.

Creating a world of pervasive, customized robots is a major challenge, but its scope is not unlike that of the problem computer scientists faced nearly three decades ago, when they dreamed of a world where computers would become integral parts of human societies. In the words of Mark Weiser, a chief scientist at Xerox’s Palo Alto Research Center in the 1990s, who is considered the father of so-called ubiquitous computing: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computers have already achieved that kind of ubiquity. In the future, robots will, too.

YOUR OWN PERSONAL ROBOT

A robot’s capabilities are defined by what its body can do and what its brain can compute and control. Today’s robots can perform basic locomotion on the ground, in the air, and in the water. They can recognize objects, map new environments, perform “pick-and-place” operations on an assembly line, imitate simple human motions, acquire simple skills, and even act in coordination with other robots and human partners. One place where these skills are on display is at the annual RoboCup, a robot soccer World Cup, during which teams of robots coordinate to dribble, pass, shoot, and score goals.

This range of functionality has been made possible by innovations in robot design and advances in the algorithms that guide robot perception, reasoning, control, and coordination. Robotics has benefited enormously from progress in many areas: computation, data storage, the scale and performance of the Internet, wireless communication, electronics, and design and manufacturing tools. The costs of hardware have dropped even as the electromechanical components used in robotic devices have become more reliable and the knowledge base available to intelligent machines has grown thanks to the Internet. It has become possible to imagine the leap from the personal computer to the personal robot.

In recent years, the promise of robotics has been particularly visible in the transportation sector. Many major car manufacturers have announced plans to build self-driving cars and predict that they will be able to sell them to consumers by 2020. Google’s self-driving cars have now driven close to two million miles with only 11 minor accidents, most of them caused by human error; the company will begin testing the cars on public roads this summer. Several universities around the world have also launched self-driving-car projects. Meanwhile, California, Florida, Michigan, and Nevada have all passed legislation to allow autonomous cars on their roads, and many other state legislatures in the United States are considering such measures. Recently, an annual report by Singapore’s Land Transport Authority predicted that “shared autonomous driving”—fleets of self-driving cars providing customized transportation—could reduce the number of cars on the road by around 80 percent, decreasing travel times and pollution.

Robot, you can drive my car: Google’s self-driving cars, May 2014.
Eric Risberg / Courtesy AP
Self-driving cars would not merely represent a private luxury: as the cost of producing and maintaining them falls, their spread could greatly improve public transportation. Imagine a mass transit system with two layers: a network of large vehicles, such as trains and buses, that would handle long-distance trips and complementary fleets of small self-driving cars that would offer short, customized rides, picking up passengers at major hubs and also responding to individual requests for rides from almost anywhere. In 2014, the Future Urban Mobility project, which is part of the Singapore-MIT Alliance for Research and Technology, invited the public to ride on self-driving buggies that resembled golf carts at the Chinese Garden in Singapore, a park with winding alleys surrounded by trees, benches, and strolling people. More than 500 people took part. The robotic vehicles stayed on the paths, avoided pedestrians, and brought their passengers to their selected destinations.

So far, that level of autonomous-driving performance has been possible only in low-speed, low-complexity environments. Robotic vehicles cannot yet handle all the complexities of driving “in the wild,” such as inclement weather and complex traffic situations. These issues are the focus of ongoing research.

AS YOU LIKE IT

The broad adoption of robots will require a natural integration of intelligent machines into the human world rather than an integration of humans into the machines’ world. Despite significant recent progress toward that goal, problems remain in three important areas. It still takes too long to make new robots; today’s robots remain limited in their ability to perceive and reason about their surroundings; and robotic communication is still brittle.

Many different types of robots are available today, but they all take a great deal of time to produce. Today’s robot bodies are difficult to adapt or extend, and thus robots still have limited capabilities and limited applications. Rapidly fabricating new robots, add-on modules, fixtures, and specialized tools is not a real option, as the process of design, assembly, and programming is long and cumbersome. What’s needed are design and fabrication tools that will speed up the customized manufacturing of robots. I belong to a team of researchers from Harvard, MIT, and the University of Pennsylvania currently working to create a “robot compiler” that could take a particular specification—for example, “I want a robot to tidy up the room”—and compute a robot design, a fabrication plan, and a custom programming environment for using the robot.
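To give a flavor of what such a pipeline might involve, here is a deliberately simplified Python sketch. The capability library, the keyword matching, and the outputs are invented for illustration and do not describe the actual robot compiler under development; they only show the shape of the idea, turning a plain-language task specification into a design, a fabrication plan, and a starting point for a program.

```python
# Purely illustrative sketch of a "robot compiler" pipeline. Nothing here
# comes from the research project described in the text, which is only
# characterized by its inputs (a task specification) and its outputs
# (a robot design, a fabrication plan, and a programming environment).

# Hypothetical mapping from task keywords to hardware modules.
CAPABILITY_LIBRARY = {
    "tidy": ["wheeled base", "gripper arm", "camera"],
    "deliver": ["wheeled base", "cargo tray", "camera"],
    "inspect": ["flying base", "camera"],
}

def compile_robot(specification):
    """Turn a plain-language task spec into a design, a plan, and a code stub."""
    modules = []
    for keyword, parts in CAPABILITY_LIBRARY.items():
        if keyword in specification.lower():
            modules.extend(p for p in parts if p not in modules)
    design = {"task": specification, "modules": modules}
    fabrication_plan = [f"fabricate and mount: {m}" for m in modules]
    program_stub = f"# control loop for: {specification}\n"
    return design, fabrication_plan, program_stub

design, plan, code = compile_robot("I want a robot to tidy up the room")
print(design)   # which modules the compiled robot would need
print(plan)     # the steps to fabricate and assemble them
```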

Bipedal humanoid robot "Atlas", primarily developed by the American robotics company Boston Dynamics, is presented to the media in Hong Kong, October 2013.
Tyrone Siu / Courtesy Reuters
Better-customized robots would help automate a wide range of tasks. Consider manufacturing. Currently, the use of automation in factories is not uniform across all industries. The car industry automates approximately 80 percent of its assembly processes, which consist of many repeatable actions. In contrast, only around ten percent of the assembly processes for electronics, such as cell phones, are automated, because such products change frequently and are highly customized. Tailor-made robots could help close this gap by reducing setup times for automation in industries that rely on customization and whose products have short life cycles. Specialized robots would know where things are stored, how to put things together, how to interact with people, how to transport parts from one place to another, how to pack things, and how to reconfigure an assembly line. In a factory equipped with such robots, human workers would still be in control, and robots would assist them.

DOES NOT COMPUTE

A second challenge involved in integrating robots into everyday life is the need to increase their reasoning abilities. Today’s robots can perform only limited reasoning because their computations are carefully specified. Everything a robot does is spelled out with simple instructions, and the scope of the robot’s reasoning is entirely contained in its program. Furthermore, a robot’s perception of its environment through its sensors is quite limited. Tasks that humans take for granted—for example, answering the question, “Have I been here before?”—are extremely difficult for robots. Robots use sensors such as cameras and scanners to record the features of the places they visit. But it is hard for a machine to differentiate between features that belong to a scene it has already observed and features of a new scene that happens to contain some of the same objects. In general, robots collect too much low-level data. Current research on machine learning focuses on developing algorithms that can extract the information useful to a robot from large data sets. Such algorithms will help a robot summarize its history and thus significantly reduce, for example, the number of images it requires to answer that question, “Have I been here before?”
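To make the idea concrete, here is a rough Python sketch, purely illustrative rather than drawn from any deployed system, of how a robot might compress its visual history into compact place summaries and compare a new view against them. The feature names, the similarity threshold, and the PlaceMemory class are all invented for the example; a real system would use visual descriptors rather than labeled objects.

```python
# Minimal sketch of "Have I been here before?" via compact place summaries
# instead of storing every raw camera frame.
from collections import Counter
from math import sqrt

def summarize(features):
    """Collapse a long list of low-level features into a normalized histogram."""
    counts = Counter(features)
    norm = sqrt(sum(c * c for c in counts.values()))
    return {f: c / norm for f, c in counts.items()}

def similarity(summary_a, summary_b):
    """Cosine similarity between two place summaries (1.0 = identical)."""
    return sum(w * summary_b.get(f, 0.0) for f, w in summary_a.items())

class PlaceMemory:
    def __init__(self, threshold=0.8):
        self.places = []          # one compact summary per visited place
        self.threshold = threshold

    def have_i_been_here(self, features):
        """Return True if the current view matches a stored place summary."""
        current = summarize(features)
        if any(similarity(current, p) >= self.threshold for p in self.places):
            return True
        self.places.append(current)   # remember the new place
        return False

# Toy usage: two visits to a "kitchen" scene and one to a "hallway" scene.
memory = PlaceMemory()
kitchen = ["sink", "table", "window", "table", "chair"]
hallway = ["door", "window", "door", "lamp"]
print(memory.have_i_been_here(kitchen))   # False: first visit
print(memory.have_i_been_here(hallway))   # False: new place
print(memory.have_i_been_here(kitchen))   # True: matches stored summary
```

Even this toy version shows both the payoff and the difficulty described above: the summary is far smaller than the raw images, but two different rooms containing similar objects can still be confused.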

Robots also cannot cope with unexpected situations. If a robot encounters circumstances that it has not been programmed to handle or that fall outside the scope of its capabilities, it enters an “error” state and stops operating. Often, the robot cannot communicate the cause of the error. Robots need to learn how to adjust their programs so as to adapt to their surroundings and interact more easily with people, their environments, and other machines.

Today, everyone with Internet access—including robots—can easily obtain incredible amounts of information. Robots could take advantage of this information to make better decisions. For example, a dog-walking robot could find weather reports online and then consult its own stored data to determine the ideal length of a walk and the optimal route: perhaps a short walk if it’s hot or raining, or a long walk to a nearby park where other dog walkers tend to congregate if it’s pleasant out.
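The dog-walking example can be reduced to a few lines of hypothetical Python. The fetch_forecast function, the dog profile, and the route choices are invented stand-ins for the online data and stored preferences such a robot would actually use; the point is only that an Internet-connected robot can combine the two to make a better decision.

```python
# Hedged sketch of the dog-walking example: combine online weather data
# with stored information about this particular dog to plan a walk.

def fetch_forecast():
    """Placeholder for an online weather query; returns assumed fields."""
    return {"temperature_c": 31, "raining": False}

def plan_walk(forecast, dog_profile):
    """Choose walk length and route from the forecast and stored data."""
    too_hot = forecast["temperature_c"] >= dog_profile["max_comfortable_c"]
    if forecast["raining"] or too_hot:
        return {"minutes": 15, "route": "short loop around the block"}
    return {"minutes": 45, "route": "long walk to the park with other dogs"}

dog_profile = {"max_comfortable_c": 28}   # stored data about this dog
print(plan_walk(fetch_forecast(), dog_profile))
# -> {'minutes': 15, 'route': 'short loop around the block'}
```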

ROBOT’S LITTLE HELPER

The integration of robots into everyday life will also require more reliable communication between robots and between robots and humans. Despite advances in wireless technology, impediments still hamper robot-to-robot communication. It remains difficult to model or predict how well robots will be able to communicate in any given environment. Moreover, methods of controlling robots that rely on current communications technologies are hindered by noise—extraneous signals and data that make it hard to send and receive commands. Robots need more reliable approaches to communication that would guarantee the bandwidth they need, when they need it. One promising new approach to this problem involves measuring the quality of communication around a robot locally instead of trying to predict it using models.
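As a rough illustration of that approach, consider the following Python sketch, in which a robot estimates link quality from its own recent transmissions rather than from a propagation model. The send_packet placeholder, the 50-packet window, and the quality threshold are assumptions made for the example.

```python
# Minimal sketch of measuring link quality locally instead of predicting it:
# keep a moving window of recent send attempts and treat the observed
# success rate as the current estimate of the channel.
import random
from collections import deque

def send_packet():
    """Placeholder transmission; returns True if an acknowledgment arrived."""
    return random.random() > 0.3          # assumed 70% delivery for the demo

class LinkMonitor:
    def __init__(self, window=50):
        self.history = deque(maxlen=window)   # recent successes/failures

    def record(self, delivered):
        self.history.append(delivered)

    def quality(self):
        """Measured fraction of recent packets that got through."""
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

monitor = LinkMonitor()
for _ in range(100):
    monitor.record(send_packet())

# A controller could slow down, reroute, or wait when the measured quality
# drops below what the task needs, rather than trusting a model's prediction.
if monitor.quality() < 0.5:
    print("Link too weak for streaming video; fall back to status messages.")
else:
    print(f"Measured link quality: {monitor.quality():.2f}")
```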

Humanoid communication robot Kirobo shakes hands with Tomotaka Takahashi, CEO of Robo Garage Co, June 26, 2013.
Toru Hanai / Reuters
Communication between robots and people is also currently quite limited. Although audio sensors and speech-recognition software allow robots to understand and respond to basic spoken commands (“Move to the door”), such interactions remain narrow in scope and shallow in vocabulary. More extensive human-robot communication would enable robots to ask humans for help. It turns out that when a robot is performing a task, even a tiny amount of human intervention changes the way the robot deals with a problem and greatly expands what the machine can do. My research group at MIT’s Computer Science and Artificial Intelligence Laboratory recently developed a system that allowed groups of robots to assemble IKEA furniture. The robots worked together as long as the parts needed for the assembly were within reach. When a part, such as a table leg, was out of reach, a robot could recognize the problem and ask humans to hand it the part using English-language sentences. After receiving the part, the robots resumed the assembly task. A robot’s ability to recognize its own errors and enlist human help represents a step toward more synergistic collaborations between humans and robots.
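That recover-and-resume pattern can be caricatured in a few lines of Python. This is not the MIT system itself, and the parts, distances, and reach limit are invented, but it captures the behavior described above: detect that a part is out of reach, ask a person for it, and continue.

```python
# Simplified sketch of assembly with human help. Reach checking and motion
# are reduced to stubs; the numbers below are assumptions for illustration.
REACH_LIMIT = 1.0   # assumed arm reach in meters

parts_needed = ["tabletop", "leg 1", "leg 2", "leg 3", "leg 4"]
part_distances = {"tabletop": 0.4, "leg 1": 0.6, "leg 2": 1.8,   # meters away
                  "leg 3": 0.7, "leg 4": 0.5}

def within_reach(part):
    return part_distances[part] <= REACH_LIMIT

def ask_human_for(part):
    # In the demonstration described above the request was spoken in English;
    # here we simply print it.
    print(f"Robot: Could you please hand me the {part}?")
    part_distances[part] = 0.3            # assume the person places it nearby

def attach(part):
    print(f"Robot: attaching the {part}.")

for part in parts_needed:
    if not within_reach(part):
        ask_human_for(part)               # recover from the error, then resume
    attach(part)
```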

DOMO ARIGATO, MR. ROBOTO

Current research in robotics is pushing the boundaries of what robots can do and aiming for better solutions for making them, controlling them, and increasing their ability to reason, coordinate, and collaborate. Meeting these challenges will bring the vision of pervasive robotics closer to reality.

In a robot-rich world, people may wake up in the morning and send personal-shopping robots to the supermarket to bring back fruit and milk for breakfast. Once there, the robots may encounter people who are there to do their own shopping but who traveled to the store in self-driving cars and who are using self-driving shopping carts that take them directly to the items they want and then provide information on the freshness, provenance, and nutritional value of the goods—and that can also help visually impaired shoppers navigate the store safely. In a retail environment shaped by pervasive robotics, people will supervise and support robots while offering customers advice and service with a human touch. In turn, robots will support people by automating some physically difficult or tedious jobs: stocking shelves, cleaning windows, sweeping sidewalks, delivering orders to customers.

Personal computers, wireless technology, smartphones, and easy-to-download apps have already democratized access to information and computation and transformed the way people live and work. In the years to come, robots will extend this digital revolution further into the physical realm and deeper into everyday life, with consequences that will be equally profound.

DANIELA RUS is Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.