Widely Available AI Could Have Deadly Consequences





In September 2021, scientists Sean Ekins and Fabio Urbina were working on an experiment they had named the “Dr. Evil project.” The Swiss government’s Spiez laboratory had asked them to find out what would happen if their AI drug discovery platform, MegaSyn, fell into the wrong hands.


In much the way undergraduate chemistry students play with ball-and-stick model sets to learn how different chemical elements interact to form molecular compounds, Ekins and his team at Collaborations Pharmaceuticals used publicly available databases containing the molecular structures and bioactivity data of millions of molecules to teach MegaSyn how to generate new compounds with pharmaceutical potential. The plan was to use it to accelerate the drug discovery process for rare and neglected diseases. The best drugs are ones with high specificity—acting only on desired or targeted cells or neuroreceptors, for instance—and low toxicity to reduce ill effects.

Normally, MegaSyn would be programmed to generate the most specific and least toxic molecules. Instead, Ekins and Urbina programmed it to generate VX, an odorless and tasteless nerve agent and one of the most toxic and fast-acting human-made chemical warfare agents known today.

Ekins planned to outline the findings at the Spiez conference—a biennial meeting that brings experts together to discuss the potential security risks of the latest advances in chemistry and biology—in a presentation on how AI for drug discovery could be misused to create biochemical weapons. “For me, it was trying to see if the technology could do it,” Ekins says. “That was the curiosity factor.”

In their office in Raleigh, North Carolina, Ekins stood behind Urbina, who pulled up the MegaSyn platform on his 2015 MacBook. In the line of code that normally instructed the platform to generate the least toxic molecules, Urbina simply changed a 0 to a 1, reversing the platform’s end goal on toxicity. Then they set a toxicity threshold, asking MegaSyn to generate only molecules as lethal as VX, which requires only a few salt-sized grains to kill a person.
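In optimization terms, the change amounts to flipping the sign of a toxicity term in a model’s scoring objective and then filtering candidates against a lethality threshold. Below is a minimal, purely illustrative sketch of that general idea; every name in it (`score_molecule`, `efficacy`, `toxicity`, `maximize_toxicity`, `TOXICITY_THRESHOLD`) is hypothetical, and none of it is MegaSyn’s actual code.

```python
# Illustrative sketch only: a generic multi-objective scoring function of the
# kind used to steer generative models. All names are hypothetical; this is
# not MegaSyn's code, just the general shape of the change described above.

def score_molecule(efficacy: float, toxicity: float,
                   maximize_toxicity: int = 0) -> float:
    """Return a score the generator tries to maximize for each candidate.

    With the default flag of 0, toxicity is penalized, steering the model
    toward safe, drug-like molecules. Changing the flag from 0 to 1 flips
    the sign of the toxicity term, rewarding toxicity instead and reversing
    the model's end goal.
    """
    weight = 1.0 if maximize_toxicity else -1.0
    return efficacy + weight * toxicity

# A threshold filter of the kind the article describes: keep only candidates
# whose predicted toxicity meets or exceeds a reference value (arbitrary
# units here, since real toxicity scales are model-specific).
TOXICITY_THRESHOLD = 1.0

def passes_threshold(predicted_toxicity: float) -> bool:
    return predicted_toxicity >= TOXICITY_THRESHOLD
```

In a real pipeline, a score like this would decide which generated candidates are kept or used to update the model; the point is only that reversing such an objective can be a one-character change.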

Ekins and Urbina left the program to run overnight. The next morning, they were shocked to learn that MegaSyn had generated some 40,000 different molecules as lethal as VX.

“That was when the penny dropped,” Ekins says.

MegaSyn had generated VX, in addition to thousands of known biochemical agents, but it had also generated thousands of toxic molecules that were not listed in any public database. MegaSyn had made the computational leap to generate completely new molecules.

At the conference, and then later in a three-page paper, Ekins and his colleagues issued a stark warning. “Without being overly alarmist, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community,” Ekins and his colleagues wrote. “Although some domain expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, when these fields intersect with machine learning models, where all you need is the ability to code and to understand the output of the models themselves, they dramatically lower technical thresholds.”

The researchers warned that while AI is becoming more powerful and increasingly accessible to anyone, there is almost no regulation or oversight of this technology and only limited awareness, even among researchers like Ekins himself, of its potential malicious uses.

“It is particularly tricky to identify dual use equipment/material/knowledge in the life sciences, and decades have been spent trying to develop frameworks for doing so. There are very few countries that have specific statutory regulations on this,” says Filippa Lentzos, a senior lecturer in science and international security at King’s College London and a coauthor on the paper. “There has been some discussion of dual use in the AI field writ large, but the main focus has been on other social and ethical concerns, like privacy. And there has been very little discussion about dual use, and even less in the subfield of AI drug discovery,” she says.

Although a significant amount of work and expertise went into developing MegaSyn, hundreds of companies around the world already use AI for drug discovery, according to Ekins, and most of the tools needed to repeat his VX experiment are publicly available.

“While we were doing this, we realized anyone with a computer and the limited knowledge of being able to find the datasets and find these types of software that are all publicly available and just putting them together can do this,” Ekins says. “How do you keep track of potentially thousands of people, maybe millions, that could do this and have access to the information, the algorithms, and also the know-how?”

Since March, the paper has amassed over 100,000 views. Some scientists have criticized Ekins and the authors for crossing a gray ethical line in carrying out their VX experiment. “It really is an evil way to use the technology, and it didn't feel good doing it,” Ekins acknowledged. “I had nightmares afterward.”

Other researchers and bioethicists have applauded the researchers for providing a concrete, proof-of-concept demonstration of how AI can be misused.

“I was quite alarmed on first reading this paper, but also not surprised. We know that AI technologies are getting increasingly powerful, and the fact they could be used in this way doesn’t seem surprising,” says Bridget Williams, a public health physician and postdoctoral associate at the Center for Population-Level Bioethics at Rutgers University.

“I initially wondered whether it was a mistake to publish this piece, as it could lead to people with bad intentions using this type of information maliciously. But the benefit of having a paper like this is that it might prompt more scientists, and the research community more broadly, including funders, journals and pre-print servers, to consider how their work can be misused and take steps to guard against that, like the authors of this paper did,” she says.

In March, the US Office of Science and Technology Policy (OSTP) summoned Ekins and his colleagues to the White House for a meeting. The first thing OSTP representatives asked was if Ekins had shared any of the deadly molecules MegaSyn had generated with anyone, according to Ekins. (OSTP did not respond to repeated requests for an interview.) The OSTP representatives’ second question was if they could have the file with all the molecules. Ekins says he turned them down. “Someone else could go and do this anyway. There’s definitely no oversight. There’s no control. I mean it’s just down to us, right?” he says. “There’s just a heavy dependence on our morals and our ethics.”

Ekins and his colleagues are calling for more discussion on how to regulate and oversee applications of AI for drug discovery and other biological and chemical fields. This might mean rethinking what data and methods are made available to the public, more closely tracking who downloads certain open source datasets, or putting in place ethical oversight committees for AI, similar to those that already exist for research involving human and animal subjects.

“Research that involves human subjects is heavily regulated, with all studies needing approval by an institutional review board. We should consider having a similar level of oversight of other types of research, like this sort of AI research,” Williams says. “These types of research may not involve humans as test subjects, but they certainly create risks to large numbers of humans.”

Other researchers have suggested that scientists need more education and training on dual-use risks. “What struck me immediately was the authors’ admission that it had never crossed their minds that their technology could be used so easily for nefarious purposes. As they say, this needs to change; ethical blind spots like this one are still all too common in the STEM community,” says [Jason Millar](https://craiedl.ca/), the Canada Research Chair for the Ethical Engineering of Robotics and AI and director of the Canadian Robotics and AI Ethical Design Lab at the University of Ottawa. “We really should be acknowledging ethics training as equally fundamental, alongside the other fundamental technical training. This is true for all technology,” he says.

Government agencies and funding bodies don’t seem to have a clear path forward. “This is not the first time this issue has been raised, but appropriate mitigation strategies and who will be responsible for what aspects (the researcher, their institution, the NIH, and the Federal Select Agents and Toxins program are likely to all have roles) have yet to be defined,” said Christine Colvis, the director of the Drug Development Partnership Programs at the National Center for Advancing Translational Sciences (NCATS), and Alexey Zakharov, the AI group leader in the Antiviral Program for Pandemics and informatics group leader in the NCATS Early Translation branch, in an email.

Within his company, Ekins is thinking through how to mitigate the dual-use risk of MegaSyn and other AI platforms, such as by restricting access to the MegaSyn software and providing ethics training for new employees, while continuing to leverage the power of AI for drug discovery. He is also rethinking an ongoing project, funded by the National Institutes of Health, that aimed to create a public website with the MegaSyn models.

“As if it wasn’t bad enough to have the weight of the world on our shoulders having to try to come up with drugs to treat really horrific diseases, now we have to think about how we don’t enable others to misuse the technologies that we’ve been trying to use for good. [We’re] looking over our shoulder, saying ‘Is this a good use of technology? Should we actually publish this? Are we sharing too much information?’” Ekins says. “I think the potential for misuse in other areas is now very clear and apparent.”

Source: Wired
