What's the Most Dangerous Emerging Technology?

Those inclined to think apocalyptically know that tech, in its purest form, spells civilizational disaster. It is true that we might never see a world filled with violent hypertrophic CRISPR babies, and uncontrollable self-driving cars, and AI intent on twisting humans into paperclips. Our tech-hastened end, if and when it does arrive, will probably look a bit different and will probably suck in ways we cannot yet imagine. In the meantime, though, it’s worth wondering: what’s the most dangerous emerging technology? For this week’s Giz Asks, we reached out to a number of experts to find out.

Zephyr Teachout

Associate Professor, Law, Fordham University

Private workplace surveillance. It upends the already awful employer-employee power dynamics by allowing employers to treat employees like guinea pigs, with vast asymmetries of information: employers learn what pushes people to work in unhealthy ways and how to extract more value for less pay. It allows them to weed out dissidents with early-warning systems, and to destroy solidarity through differential treatment. Gambling research taught casinos how to build profiles of individual gamblers and customize appeals that exploit each gambler’s weaknesses for maximum profit. That technology, now entering the workplace, is on the verge of ubiquity, unless we stop it.

“Private workplace surveillance. It upends the already awful employer-employee power dynamics by allowing employers to treat employees like guinea pigs...”

Michael Littman

Professor, Computer Science, Brown University

The 2021 AI100 report, released last month, included a section on the most pressing dangers of artificial intelligence (AI). The 17-expert panel observed that as AI systems prove increasingly beneficial in real-world applications, they broaden their reach, and the risks of misuse, overuse, and explicit abuse proliferate with them.

One of the panel’s biggest concerns about AI is “techno-solutionism,” the attitude that technology like AI can be used to solve any problem. The aura of neutrality and impartiality that many people associate with AI decision-making leads to systems being accepted as objective and helpful even when they are applied inappropriately, or are built on the results of biased historical decisions or even blatant discrimination. Without transparency about either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made.

AI systems are being used in service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. Insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of AI-based systems and over-reliance on them. And AI algorithms are playing a role in decisions about distributing organs, vaccines, and other elements of healthcare, meaning these approaches have literal life-and-death stakes.

The dangers of AI automation are mitigated if, on matters of consequence, the people and organizations responsible for the outcomes play a central role in how AI systems are brought to bear. Engaging all relevant stakeholders can drastically slow the delivery of AI solutions to hard problems, but it’s necessary—the downsides of misapplied technology are too great. Technologists would be well served to adopt a version of the healthcare dictum: first, do no harm.

“AI systems are being used in service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism.”

David Shumway Jones

Professor, Epidemiology, Harvard University

There are clearly many contenders for the title of most dangerous emerging technology. CRISPR and other gene-editing technologies could wreak havoc, though they may prove to be less powerful than their proponents promise. Social media has already demonstrated its power to cause far-ranging harms. But the one that bothers me the most is actually the widespread deployment of facial recognition in surveillance technology.

In many ways this technology could be a huge asset to societies. Facial recognition could make many transactions more efficient: no need to show an ID or boarding pass at the airport, no need to present payment at a store (as long as your image is linked to an online payment platform). It could also make society safer by increasing the likelihood that criminal suspects are identified and apprehended.

So how could these technologies become dangerous? One fear is existential: our movements will no longer be private. Someone will always have the ability to know where we are and where we have been. Even if no one misuses this information, the loss of privacy and anonymity feels meaningful to me. Another fear is abuse: the risk of misuse of this information is real. Whoever has access to it could certainly use it for nefarious purposes. Stalkers, from jilted lovers to authoritarian governments, could have a field day with their new ability to monitor where we go and whom we meet, and even to predict what we might do next. And I suspect that I have only imagined a small share of the ways in which these surveillance technologies could be misappropriated.

“One fear is existential: our movements will no longer be private. Someone will always have the ability to know where we are and where we have been.”

Ryan Calo

Professor of Law; Chair of the President and Provost’s Task Force on Technology and Society; and Faculty Co-founder of the Tech Policy Lab and the Center for an Informed Public at the University of Washington

My candidate for the most dangerous emerging technology is quantum computing. With the possible exception of breaking encryption, the dangers of quantum computing are not new. Rather, quantum computing accelerates threats to privacy and autonomy that began in the era of supercomputing. With access to enough data and processing power, today’s computer systems are increasingly capable of deriving the intimate from the available. I’m worried quantum computing will usher in a world in which every government and company is Sherlock Holmes, guessing all our secrets based on information we don’t even think to hide.
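
A concrete, low-tech ancestor of that worry is the classic linkage attack: joining an “anonymized” dataset to a public one on quasi-identifiers. The Python sketch below is an editorial illustration, not part of Calo’s remarks; all names and records in it are invented.

```python
# Toy linkage attack: re-identify "anonymized" records by joining them with
# public data on quasi-identifiers (here, ZIP code and birth year).
# All records below are invented for illustration.

anonymized_health_records = [
    {"zip": "98101", "birth_year": 1984, "diagnosis": "hypertension"},
    {"zip": "98105", "birth_year": 1991, "diagnosis": "anxiety"},
]

public_voter_roll = [
    {"name": "A. Example", "zip": "98101", "birth_year": 1984},
    {"name": "B. Example", "zip": "98105", "birth_year": 1991},
]

for record in anonymized_health_records:
    # A record is re-identified if exactly one public entry shares its
    # quasi-identifiers.
    matches = [
        voter["name"]
        for voter in public_voter_roll
        if (voter["zip"], voter["birth_year"]) == (record["zip"], record["birth_year"])
    ]
    if len(matches) == 1:
        print(f"{matches[0]} -> {record['diagnosis']}")
```

More data and more processing power let the same kind of join run across far more dimensions, which is the trajectory Calo describes.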

Amy Webb

Author of The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology and CEO of the Future Today Institute, a foresight, trends and scenario planning firm that helps leaders and their organizations prepare for complex futures

The most dangerous emerging technology is biology. Or rather, synthetic biology, which has a singular goal: to gain access to cells in order to write new—and possibly better—biological code. Synthetic biology is a field of science that applies engineering, artificial intelligence, genetics, and chemistry to redesign biological parts and organisms with enhanced abilities and new purposes. A series of new biological technologies and techniques, which broadly fall under synthetic biology’s umbrella, will allow us not just to read and edit DNA code but to write it. Which means that soon, we will program living, biological structures as though they were tiny computers.

Synthetic biology allows us to load DNA sequences into software tools. Imagine Word, but for DNA code—with edits just as simple. After the DNA is written or edited to a researcher’s satisfaction, a new DNA molecule is printed from scratch using something like a 3D printer. The technology for DNA synthesis (transforming digital genetic code to molecular DNA) has been improving exponentially. Today’s technologies routinely print out DNA chains several thousand base pairs long that can be assembled to create new metabolic pathways for a cell, or even a cell’s complete genome.
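
To make the “Word, but for DNA” analogy concrete, here is a minimal Python sketch; the sequence and the find-and-replace edit are invented for illustration and are not drawn from any real design tool.

```python
# Once DNA is digitized, editing it is essentially text manipulation.
# The sequence and the edit below are invented for illustration.

COMPLEMENT_TABLE = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement, a routine operation in sequence tools."""
    return seq.translate(COMPLEMENT_TABLE)[::-1]

# A short, made-up stretch of DNA "code."
genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"

# Find and replace, exactly as in a word processor.
edited = genome.replace("GGGCCGC", "GGGACGC", 1)

print(edited)                      # the edited sequence, ready for synthesis
print(reverse_complement(edited))  # its complementary strand
```

Real sequence editors layer annotation, error checking, and ordering of synthesized DNA on top of exactly this kind of string manipulation.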

What could go wrong? Soon we will be able to write any virus genome from scratch. That may seem like a frightening prospect, given the havoc that SARS-CoV-2, the virus that causes covid-19, has wreaked. But viruses aren’t necessarily bad. In fact, a virus is just a container for biological code—in the future, we might write beneficial viruses as therapies for cancers or certain diseases.

Synthetic biology will play an important role in our climate crisis and our looming food and water shortage. It will reduce our reliance on animals for protein, and it will eventually personalize medicine. Imagine a future in which your body acts as its own pharmacy.

What makes synthetic biology the most dangerous emerging technology isn’t the science—it’s us humans. We will need to challenge our mental models, ask difficult questions, and have rational discussions about the origins of life, or we will create risk and miss opportunities. Within the next decade, we will need to make informed decisions despite a constant avalanche of misinformation and despite opportunistic politicians who are more interested in re-election than in the public good. We’ll need to use data and evidence—and to place our trust in science—to make key decisions about whether to program novel viruses to fight diseases, what genetic privacy will look like, and who should “own” living organisms. Regulators will need to figure out how companies should earn revenue from engineered cells and what processes should be used to contain a synthetic organism in a lab.

You play a critically important role in synthetic biology, too. What choices would you make if you could reprogram your body? Would you agonize over whether—or how—to edit your future children? Would you consent to eating GMOs (genetically modified organisms) if doing so helped slow climate change? The promise of synthetic biology is a future built on the most powerful, sustainable manufacturing platform humanity has ever had. We’re on the cusp of a breathtaking new industrial evolution.

“What makes synthetic biology the most dangerous emerging technology isn’t the science—it’s us humans.”

Jeroen van den Hoven

Professor, Ethics and Technology, Delft University of Technology, and the co-author of Evil Online

I think the most dangerous technologies are in a sense social or cognitive technologies that prevent people from having a clear view of the world and the needs of others. These technologies are often conducive to dehumanization and invite people to become self-obsessed and unthinking. They are like fog machines, creating conditions that make it easy to renounce, deny, or turn a blind eye to our common humanity and our human responsibility. They are technologies of bad faith. Those who work on them are often naïve and instrumentalized by others, culpably complicit, or masters at orchestrating plausible deniability for all the future misery and suffering that may ensue.

The way digital technologies—in their social and online applications—have helped to create epistemic chaos is, I think, one of our most serious threats. The risks and dangers of some technologies can easily be downplayed, obscured, or denied, so that most people think there is nothing wrong; the benefits and blessings of other technologies can be made to look bad. Despots and villains are glorified, heroes and saviors demonized. As Voltaire said: “Those who can make you believe absurdities can make you commit atrocities.”

Those who haven’t given up on making up their minds about the best way to fight climate change, how to handle the pandemic, how to keep lethal autonomous weapons at bay, how to prevent AI-based triage decisions in hospitals or targeted dream incubation in marketing, and how to expose deepfakes have a very hard time finding out what is true and morally acceptable. Many others, I fear, have given up trying to figure out what we ought to do and have become docile or complacent.

“I think the most dangerous technologies are in a sense social or cognitive technologies that prevent people from having a clear view of the world and the needs of others.”

L. Syd M Johnson

Associate Professor, Center for Bioethics and Humanities, SUNY Upstate Medical University

Xenotransplantation—transplanting organs and tissues from one animal species into another—has long been considered a potential solution to the chronic shortage of transplantable organs. Thousands of people in the US alone are on waiting lists to receive a lifesaving organ. Some will not survive the wait. From the 1960s to the 1990s, there were numerous attempts to transplant organs from nonhuman primates (mostly baboons and chimpanzees) into human recipients. To date, no patient has survived a solid organ xenograft. Some have died within hours, others within days or weeks. One reason is rejection, including potentially catastrophic hyperacute rejection, in which the body’s immune system mounts a violent attack on the organ.

The risk of rejection increases when species are discordant, as with humans and pigs, which are separated by roughly 80 million years of evolutionary divergence. But pigs are currently favored as organ sources because they are easily bred, they produce organs that are the right size for humans, and they are killed in the hundreds of millions annually for meat, which some interpret as a moral license to kill them for organs as well.

The evolutionary and genetic proximity of other nonhuman primates to humans increases the risk of zoonotic infection, in which a disease jumps from one species to another. The US Food and Drug Administration has effectively banned the use of nonhuman primates for xenotransplantation, citing the unacceptable risk of zoonosis. But pigs also harbor human-similar viruses and are sources of zoonotic infection. In 1998 and 1999, Nipah virus caused an outbreak of viral encephalitis among pig farmers in Malaysia after it spilled over from their pigs. More than 100 people died, and more than a million pigs were culled.

SARS-CoV-2 is a zoonotic disease, one that likely jumped multiple species before infecting a human in a market in China, sparking a global pandemic that has taken millions of lives, devastated health care systems, and caused global social and economic upheaval. SARS-CoV-2 has been found in domesticated dogs, cats, and ferrets; in chimpanzees, gorillas, otters, and big cats in zoos; in captive mink on fur farms (resulting in millions of animals being culled across Europe); and in free-living white-tailed deer in the US.

The danger of zoonosis through xenotransplantation is serious enough that numerous organizations have recommended lifelong surveillance of human recipients, their close contacts, and health care workers involved in xenotransplants. That surveillance is not intended to protect the organ recipient, but to protect public health. The risk of unleashing a new infectious disease on the world changes the stakes of xenotransplantation, and makes it an extremely dangerous emerging technology. In a worst-case scenario–another global pandemic–the consequences could be devastating, and cost millions of lives.

There are other solutions to the organ shortage, some available now—such as increasing the number of living and deceased human donors—and some in development (e.g., growing human organs in vitro, and using 3D bioprinting to repair and regenerate damaged organs in vivo). None of them carry the risk of sparking a global pandemic. Xenotransplantation remains speculative after decades of research and numerous failed transplants. As the third year of the SARS-CoV-2 pandemic approaches, the greatest risk comes into sharp focus, and it is difficult to overstate the dangers of pursuing xenotransplantation.

“The danger of zoonosis through xenotransplantation is serious enough that numerous organizations have recommended lifelong surveillance of human recipients, their close contacts, and health care workers involved in xenotransplants.”

Joanna Bryson

Professor, Ethics and Technology, Hertie School of Governance, Berlin

I think the most dangerous emerging technologies are actually forms of governance. We are learning so much about society and social control. Some nations use this knowledge predominantly for good, but others use it to repress and manipulate minorities, or even disempowered majorities, and either way this can lead to genocides and brutal atrocities. We need to recognize now that atrocities can include “cultural genocide”: the wiping out of all records and histories of peoples, which invalidates their ancestors’ lives and identities and severely compromises their capacity to flourish, without necessarily killing the people themselves. We are also seeing enormous acts of destruction leveled against the ecosystem, as if future generations mattered less than a present autocrat’s (or even a recent US president’s) quest for wealth or other transnational political leverage.

The only solution to these problems is to do better at innovating forms of governance that reward and reinforce cooperative behavior that respects fundamental rights. It is a complication that so many technologies are in fact “dual use”—that is, they can be turned to good ends or bad. We cannot lie down and be technodeterminists, believing that solutions are out of our hands. Political awareness and engagement at every level of society are essential. And interestingly, contrary to what many think, we do seem to be doing a great job of using social media and other tools to raise awareness of the real state of politics and of scientific evidence above what it has been historically. So I believe there is hope, but also a lot of work to do.

“We cannot lie down and be technodeterminists, believing that solutions are out of our hands.”

Elizabeth Hildt

Professor, Philosophy, Illinois Institute of Technology

The most dangerous emerging technology is a technology that escapes human control and regulation. Technologies do not somehow emerge miraculously on their own; they are not dangerous in themselves; they do not have dangerous intentions. It is humans who devise, shape, build, and deploy them.

While there has been quite a bit of speculation about the possible emergence of superintelligent AI that could outrival or dominate humans, I think there are far more realistic ways for emerging technologies to escape control.

One way for an emerging technology to escape control is a lack of transparency and understanding on the technical level: the technology functions like a black box, opaque with regard to how it reaches its decisions. Another is a lack of transparency in the sense that manufacturers or companies do not inform the public correctly or thoroughly about how a technology works and what its implications are.
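
As a minimal sketch of what “functions like a black box” means in practice, consider a system that emits consequential yes/no answers with no human-readable rationale. The model, data, and loan-approval framing below are invented for illustration and are not from Hildt’s remarks.

```python
# Minimal sketch of an opaque decision system. The data, model, and
# loan-approval framing are invented for illustration; requires scikit-learn.
from sklearn.neural_network import MLPClassifier

# Made-up applicant features: [income, debt, years_employed]
X = [[55, 10, 4], [23, 18, 1], [70, 5, 9], [30, 25, 2]]
y = [1, 0, 1, 0]  # past outcomes: 1 = approved, 0 = denied

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

applicant = [[40, 15, 3]]
print(model.predict(applicant))  # a decision, with no accompanying reason

# The learned parameters exist (model.coefs_), but inspecting thousands of
# weights is not an explanation an affected person can act on.
```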

Emotional involvement is another route by which humans lose control over technology. The more intuitively and emotionally humans interact with a technology, the easier it is to navigate. This is one of the motivations behind designing social and humanoid robots so that humans react to and interact with the embodied AI much as they react to and interact with other humans. However, attributing emotions, agency and other human-like characteristics to technology that lacks these capabilities can result in one-sided emotional involvement of humans, as well as human-technology interaction that is dominated not by rationality, but by emotional factors. In this type of interaction, humans are the more vulnerable partner.

“Attributing emotions, agency and other human-like characteristics to technology that lacks these capabilities can result in one-sided emotional involvement of humans, as well as human-technology interaction that is dominated not by rationality, but by emotional factors. In this type of interaction, humans are the more vulnerable partner.”

Do you have a burning question for Giz Asks? Email us at tipbox@gizmodo.com.
