Last October, the U.S. government suspended funding for a type of scientific research on avian flu known as “gain-of-function” studies. This kind of research involves creating more contagious forms of highly pathogenic avian flu in the laboratory to learn what genetic changes are required for a flu strain to cause a pandemic. The funding moratorium coincides with a formal “deliberative process,” including a risk-benefit assessment. This is the first time, to our knowledge, that Washington has paused an ongoing research program to debate the wisdom of engaging in such work.

The main question is whether the scientific knowledge provided by making flu strains more contagious outweighs the risk of a lab accident or deliberate misuse that could lead to an outbreak of these enhanced viruses. Assessing the risks and benefits is not easy, and scientists and public health experts are sharply divided. In this case, we have argued that the foreseeable benefits are small and that the risks cannot be justified when there are many safer, alternative ways to prepare for pandemics; others vehemently disagree.

Yet such questions are going to arise again and again thanks to technological breakthroughs in genetic engineering. The government’s response to the “gain-of-function” experiments is therefore all the more significant. The flu controversy represents an opportunity for the government and the scientific community to establish an effective precedent on an issue that is only going to get more controversial and complicated.

RISKS AND BENEFITS

Scientists’ ability to manipulate the genomes of living organisms is expanding at an impressive rate. An important recent development is the CRISPR/Cas9 technique, a system for editing genes in any living organism that is cheap, efficient, and easy to use. It offers the prospect of curing human genetic diseases, making mosquitoes incapable of transmitting malaria, and developing hardier crops, among other applications.

The potential benefits are enormous. For some applications, so too are the potential risks. In human genetics, some of the greatest concerns are ethical, arising from the possibility of editing human genes in ways that will be passed on to future generations. New methods that use this technology, packaged in a virus, to induce gene rearrangements in mammals raise other concerns. A respiratory-transmissible agent engineered to produce lung cancer, designed as a means of evaluating new treatments, could become a weapon in the wrong hands. Another application, CRISPR-based “gene drives,” which allow particular genetic material to spread deliberately through plant and animal populations, could lead to the accidental spread of unwanted traits, or could be misused by malevolent actors to harm crops and livestock.

Gene drives and the development of microbes that are both more virulent and more contagious raise the biosafety stakes. No longer are the risks of an accident limited to scientists themselves or their close contacts. Instead, an accidental release of an enhanced pathogen could lead to ongoing transmission that affects whole human populations. 

Given these stakes, laboratory safety is crucial. But accidents happen, even in top laboratories. One reason for the government’s decision to suspend research on enhanced flu was the spate of laboratory mishaps in federal labs last summer. Forgotten vials of smallpox and other dangerous microbes were discovered in an unused storage room at the National Institutes of Health. Dozens of employees at the Centers for Disease Control and Prevention (CDC) were potentially exposed to anthrax after samples were not properly destroyed. And highly virulent bird flu was sent from the CDC to another lab by mistake. Despite the publicity surrounding these incidents, mishaps in high-containment labs have continued, involving Ebola at the CDC, tularemia at Tulane University, and most recently, the unintentional shipment of live anthrax by a U.S. Army lab to hundreds of other labs around the world.


Fortunately, none of these accidents has led to a detectable human infection. But they highlight the fallibility of even the most secure laboratories—and not only in the United States. The United Kingdom has had more than 100 mishaps in its high-containment labs in recent years. And details of accidents are commonly suppressed, usually justified by the questionable claim that releasing even minimal details would constitute a security risk. The Dutch National Institute for Public Health and the Environment has noted that laboratory accidents are underreported throughout the world. Given this safety record, the government was wise to suspend funding for gain-of-function research and to announce the deliberative process.

SETTING A PRECEDENT 

So what should scientists do? Three features of the present debate make it a comparatively easy starting point to establish an effective precedent.

First, the debate over experiments to enhance the transmissibility of flu in mammals concerns a well-known pathogen, influenza, with recognized biological properties. The experiments deliberately attempted to increase just one of those properties, transmissibility, in order to understand it better. Yet even when researchers think they know the property for which they are selecting, the results may be surprising, as genetic changes sometimes introduce unanticipated additional properties. The risks in settings like these are therefore difficult to quantify. And they will only get more difficult: Future work in genome engineering and synthetic biology will create organisms with genetic components from many different sources, or indeed with components that are completely unknown in nature. Scientists do not know exactly how overall risks will evolve as the engineering toolbox expands.

Second, the current debate involves expensive, government-funded research that takes place in universities or government laboratories. Funding decisions by government bodies routinely involve balancing safety concerns against potential benefits. For high-risk experiments performed in universities or national labs, funding policies are an effective and relatively straightforward means of oversight. But there is a growing “do-it-yourself” movement in synthetic biology and genome engineering, taking advantage of the falling costs of biological research and motivated by the possibility of valuable contributions from actors outside traditional institutions. If potentially risky research proceeds without government funding or institutional oversight, it will not be so easily regulated.


Third, the flu controversy involves disagreements about public risks from accidents and about public benefits from improved pandemic response. There has been relatively little interest from pharmaceutical or other companies in the research. Other areas of synthetic biology and genome engineering promise private benefit, through patents and through the ability to generate new, commercially valuable life forms. How should the freedom of profit-driven scientists and companies to pursue their research be balanced against the risk to the general public from accidental release? The military, too, has shown increasing interest in this area, raising the possibility of dangerous yet classified work whose risks and benefits cannot be publicly evaluated.

As researchers continue to develop novel technologies with significant potential benefits to science and society, a minority of these will inevitably pose risks beyond the lab to human, animal, or plant populations. Scientists, funders, and governments must develop processes for flagging such potentially risky technologies and assessing their risks at the earliest possible stage—a more prudent and less disruptive approach than stopping research programs already in progress. Assessing the ethical, epidemiological, and ecological risks of new experimental techniques will require cross-training of biologists in these fields. Scientists must also acknowledge that some experiments should not be performed: those whose risks outweigh the additional benefits they offer over safer alternative approaches.

The flu controversy—complex as it is—is a relatively easy test case, and it represents an opportunity to develop an international norm of safety-conscious, responsible deliberation among scientists working in these fields, and others who have a stake. In this case, those stakeholders include bioethicists, public health experts, and physicians; when the risks are to crops or natural populations they may include population biologists and agricultural experts. In a hopeful sign, the developers of CRISPR/Cas9 have been leading the efforts to raise awareness of the risks of such technology, with the goal of achieving responsible regulation. Scientists are also taking the lead in providing factual and accessible information to counteract fears of technologies, such as genetically modified crops, for which the risks have been exaggerated. Regulation must be sensitive to scientific estimates of the risks and benefits.

The future of biological research demands a new approach to governance, greater awareness of risk, and a suite of tools for identifying and regulating certain classes of dangerous experiments—those that pose risks to entire human, animal, or plant populations. However, broad areas of science should not be shut down. On the contrary, effective governance will strengthen the scientific enterprise. Labs should be required to report accidents in a comprehensive and transparent manner, so that the risks of future work can be properly assessed and so that laboratories can learn from the past mistakes of others. Funders of scientific research should support work on risk mitigation. Governments should develop an anticipatory process for reviewing advances in science and technology, with provision for periodically reevaluating decisions as the risk–benefit profile of certain types of research changes over time.

The scientific community and society as a whole have a great deal at stake in finding a solution to the flu controversy now—a solution that creates effective precedents for the harder questions that lie ahead.

MARC LIPSITCH is Professor of Epidemiology and Director of the Center for Communicable Disease Dynamics at the Harvard T.H. Chan School of Public Health.
DAVID A. RELMAN is Professor of Medicine and of Microbiology & Immunology and Co-Director of the Center for International Security and Cooperation at Stanford University.