Our discussion started by focusing on research studies on a strain of avian influenza (H5N1) that is highly pathogenic in humans but apparently lacks the ability to transmit efficiently between humans. A concern is that increased human-to-human transmissibility could evolve in a natural setting, potentially resulting in a global pandemic. To investigate this concern, researchers evolved a new version of this strain in the lab that acquired the ability to readily transmit between ferrets, a model organism for this type of work. In our discussion, the overall tone towards these studies was one of concern, with some people questioning whether they should have been done at all. Given the devastating consequences if this strain were accidentally or intentionally released from a research lab (potentially seeding a global pandemic), the benefits of the research would need to outweigh the risks to justify conducting it. But can we accurately evaluate the risks, given that the potential for human error or malfeasance is difficult to predict and the potential consequences are so large? And aside from release from labs where this research is approved, what about the risk of other scientists or non-scientists synthesizing the genome of the strain themselves?

The potential benefits were also called into question. Though the scientists who conducted the studies claim their findings will help inform surveillance efforts, many countries of interest apparently do not have good enough surveillance infrastructure to make this effective. And although these studies identified one way transmission could evolve in ferrets in the lab, there was doubt whether it would be likely to evolve the same way in humans in a natural setting.
Synthesizing a morphine precursor in yeast
We also discussed a recent study in which the authors engineered a strain of yeast capable of synthesizing a precursor to morphine (http://science.sciencemag.org/content/349/6252/1095). Further development of this system could dramatically cut down the time and cost of producing the morphine precursor, making painkillers more available worldwide, but some have raised the concern that it could be used in an unregulated way to produce illegal drugs (https://www.theguardian.com/science/2015/aug/13/yeast-cells-genetically-modified-to-create-morphine-like-painkiller).
What are actual examples where dual-use research has gone wrong?
The group brought up a few examples, including the 1977 influenza epidemic, the origin of which is unclear but is thought to have been either a misguided vaccine trial or a lab accident (http://mbio.asm.org/content/6/4/e01013-15.full). Another example was an outbreak of foot and mouth disease in England that may have resulted from accidental release from a lab (https://en.wikipedia.org/wiki/Foot-and-mouth_disease#United_Kingdom_2007).
Our discussion also focused on the publication of a newly discovered botulinum toxin that had no known antidote at the time of publication. We were surprised at the apparent lack of concern from government regulatory agencies about the potential for this toxin to be used in a harmful way. The lab that discovered the new toxin was more concerned and was reluctant to share the bacteria expressing the toxin with other labs, which ultimately identified an effective antidote. This topic led us to the question…
Who should decide whether dual-use research should be conducted?
We agreed that we need to make this decision as a society, and that regulations should probably be enforced at multiple levels (e.g., awarding of grants, publication of articles), since no single level is perfect. Certain areas of dual-use research (e.g., fracking) are probably easier to control through standard governance than others (e.g., influenza transmissibility, geoengineering), the latter of which could have immediate and irreversible impacts.