Aldous Huxley would recognize our brave new world. Just as software engineers write code to create programs, genetic engineers write code to create synthetic DNA. These genetic designs start as files on a computer, and that computer can be hacked. The end result could be anything and everything: a super virus, new parasites, new germs, you name it. Worse, the practice is available and unregulated in parts of the world, and security and law enforcement are behind the curve.
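To see why a DNA design is "just data," consider a toy sketch. This is purely illustrative and not a real synthesis file format: a sequence over the alphabet A/C/G/T needs only two bits per base, so a gene design can be stored as ordinary binary, and altered by anything that can alter a file.

```python
# Toy sketch (illustrative only, not a real synthesis format):
# a DNA design is text over A/C/G/T, and two bits per base
# suffice to store it as binary -- ordinary, hackable data.

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    """Pack a DNA string into bytes, 4 bases per byte (final chunk padded with A)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4].ljust(4, "A")  # pad with A (0b00)
        byte = 0
        for base in chunk:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

print(pack("ACGT").hex())  # '1b' : bits 00 01 10 11
```

The point is not the encoding itself but the consequence: once a gene is bytes, every classic attack on data, interception, corruption, silent modification, applies to it.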
“The cyber-physical nature of biotechnology raises unprecedented security concerns,” warn researchers at Colorado State University, Virginia Tech, and the University of Nebraska-Lincoln, who recently published research exposing the current risks in cyberbiosecurity, the intersection of computer and biomedical security.
I’ve been in a medical drug factory before, and there are almost no people on the factory floor. The process runs through giant tanks, tubes, and valves, all operated by a few people in a control room using computers to regulate everything. In the same way that the Stuxnet worm caused gas centrifuges to spin out of control while reporting to the control room that everything was fine, these control systems can be hacked and subverted. These factories are critical to our nation’s biodefense infrastructure.
As with current best practice across cybersecurity, employee training is seen as key: more than 80% of all hacks start with an employee mistake of some kind. People working in these fields need to be aware of cyberbiological risks, and this awareness and these safe practices need to be applied throughout the supply chain, from raw materials to patient outcomes. However, there is a growing sense that employee training is necessary but insufficient, because all it takes is one hole, one mistake, to compromise everything. This line of thinking, championed by long-time cybersecurity expert Bruce Schneier, is leading toward systems that stay secure despite employee mistakes. It’s summarized by the phrase “Stop trying to fix the users.” https://www.schneier.com/blog/archives/2016/10/security_design.html
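One example of a safeguard that does not depend on user vigilance, sketched here as a hypothetical (the sequence, names, and workflow are invented for illustration, not taken from the research): treat an approved DNA design like any other critical file and verify its cryptographic hash before it is acted on, so a single tampered base is caught mechanically rather than by a careful employee.

```python
import hashlib

# Illustrative sketch: a synthetic-gene design is plain data, so
# standard data-integrity tools apply. A hash recorded when the
# design is approved detects any later tampering automatically.

design = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"  # made-up sequence
approved_digest = hashlib.sha256(design.encode()).hexdigest()

# An attacker (or silent file corruption) flips a single base:
tampered = design[:10] + "A" + design[11:]

def verify(seq: str, digest: str) -> bool:
    """Return True only if the sequence matches the approved digest."""
    return hashlib.sha256(seq.encode()).hexdigest() == digest

print(verify(design, approved_digest))    # True
print(verify(tampered, approved_digest))  # False
```

The design choice mirrors Schneier's point: the check fails closed regardless of whether any human notices the one-character change.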
Digital representations of genes could be used to create biological weapons. The CDC used archived DNA sequences to reconstruct the virus responsible for the Spanish flu pandemic a century ago, which killed an estimated 5% of the world population. Striking at the end of World War I, it killed more people than the war itself.
Security has always played catch-up to the new practices that create risk, in every field, and the life sciences are no exception. Breach data from the federal CMS/OCR breach portal show the trend: reported breaches of medical practices more than doubled from 2016 to 2017. https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf
“The life sciences community has traditionally operated under an insecure system that expects participants to self-regulate and often does not monitor for security threats,” explain Peccoud, Gallegos, Murch, Buchholz, and Raman. “Now that DNA sequencing, synthesis, manipulation, and storage are increasingly digitized, there are more ways than ever for nefarious agents both inside and outside of the community to compromise security.”
The authors ended their publication on a somber note:
“The ability to manipulate DNA was once the privilege of the select few and very limited in scope and application. Today, life scientists rely on a global supply chain and a network of computers that manipulate DNA in unprecedented ways. The time to start thinking about the security of the digital/DNA interface is now, not after a new Stuxnet-like cyberbiosecurity breach.”
The researchers’ work is published at Cell.com, which is run by Elsevier, and is behind a paywall.
Brian Allison – INCS May 2018
This blog article used material from TechRepublic: Cyberbiosecurity risks include encoding digitized DNA with malware and compromising computers used for biomanufacturing processes.
By Michael Kassner | January 24, 2018, 12:31 PM PST