Is Technology Artificially Engineering Panic?

In recent years, we’ve seen unprecedented levels of public panic in response to a variety of events, including natural disasters, pandemics, and even basic social concerns. When news breaks of some new threat or danger, people react very strongly—sometimes limited to aggressive posts on social media, and sometimes manifesting in grocery stores, with emptied shelves and agonizingly long lines.

To be clear, there are some serious threats out there, and we can’t afford to take them lightly. Concern is perfectly justified, but panic is rarely excusable, and almost never productive.

But even understanding this, we're forced to wonder whether our technology is engineering more panic than any given event warrants. Is our constant access to information and digital contact with others intensifying our reactions?

The Consequences of Panic

First, we should understand the consequences of excessive panic. You could argue that it's better to overreact than to underreact, especially in circumstances like a pandemic, where it's better for people to stay home longer than necessary than to go out and risk driving up the eventual death toll. This is a reasonable argument, but it conveniently overlooks the harmful consequences of overreaction.

Let's say, after news of a pandemic or an impending natural disaster, store shelves around the country stand empty or partially depleted. Is this because people are buying a reasonable amount of groceries and paper products for their families? Not usually. Instead, it's because people are panicking and hoarding far more than they need, either because they want to resell these products at a profit later or because they're afraid someone else will beat them to it. Excessive panic makes it harder for the people who are truly sick or suffering to get the supplies they need to survive and stay comfortable.

Additionally, excessive panic has a crippling effect on mental health. Hit up any social media channel, and I guarantee you'll find at least a handful of people posting almost constantly, expressing fear and anxiety about whatever the latest news is, even if they and most of the people they know are in a relatively low-risk category. While it's perfectly rational to worry about the future in these situations, if it's causing you to lose sleep and skip meals, the panic is probably doing more damage than the root cause of the fear.

That’s not even mentioning the effects of panic on the economy. Some outbreaks and natural events can impact supply chains all over the world, so lower economic projections for months, or even years, are justified. But the wave of panic selling and market volatility that often follows simply creates economic turmoil for its own sake.

Excessive panic is harmful, and our own technology bears much of the responsibility for it.

Access to (Mis)information

Let's start by exploring the positives. Social media, news websites, and other resources have made it easy for people of all backgrounds to find information quickly, easily, and cheaply. This is incredibly valuable, and it's a major reason we're able to respond quickly to most troubling events. Even better, high-authority sources tend to rank well in Google Search, which gives more reliable information a visibility boost.

However, the same channels that distribute information can also distribute misinformation. That's not necessarily because bad actors are deliberately spreading "fake news" about a given crisis; on the contrary, most misinformation originates with good intentions.

There are two big problems affecting information accuracy and availability. First, there's the scarcity of verified information, coupled with intense demand for it. In the early stages of a developing crisis, there is much we simply don't know. But with an abundance of anxious people desperate to know more, there is pressure to report something new. Outlets report whatever they can, which often includes incomplete or misleading information.

Second, there's the distortion effect that kicks in when many nodes of content development work together to produce and distribute information. Every time the information changes hands, it gets slightly more distorted.

For example, you might have a team of expert scientists publishing a review of the phenomenon as it currently stands. Then a pop-science outlet reports on that review. Then a mainstream news outlet reports on the pop-science article, and a Facebook page posts the mainstream story. From there, users who read only the headline, without any further due diligence, start sharing their own opinions, and their connections take their word as truth. Before you know it, you're eight links deep in the chain, with claims circulating that bear little to no resemblance to the original report.

Even with no bad actors, extreme rumors can gain traction and send the general population into a panic about the future.

Optimizing for Outrage

It doesn't help that most social media and online media channels are optimized for clickbait, outrage, and other strong emotional responses.

Here's how the effect works. Social media platforms are built to survive. Whether they're trying to make money through advertising or simply to attract and retain as many users as possible, these platforms need to keep people engaged. Every decision they make is designed to get people interacting with one another and staying on the platform as long as possible.

Most platforms build ranking algorithms that selectively prioritize content in the newsfeed based on its predicted ability to attract engagement. This makes sense from a business perspective, and it's innocuous in theory.

But in practice, these algorithms grant newsfeed visibility to content based on its ability to evoke a strong emotional response. In other words, content that induces panic will take priority over content that encourages moderation and reason, every time. Put people in a position where social media is their primary means of interaction, and soon everyone will be almost exclusively exposed to headlines and comment threads that induce outrage, fear, and anxiety.
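The dynamic described above can be sketched as a toy ranking function. To be clear, this is a hypothetical illustration, not any real platform's algorithm: the `emotional_intensity` signal, the field names, and the weights are all invented for the example. The point is simply how a multiplicative "emotion boost" lets a panic-inducing post outrank a calmer post with more raw engagement.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    # Hypothetical signal in [0, 1]: how emotionally charged the post is.
    emotional_intensity: float

def engagement_score(post: Post) -> float:
    """Toy ranking score: raw engagement, amplified by emotional intensity.

    The weights are invented for illustration; real platforms use many more
    signals (recency, relationships, predicted dwell time, and so on).
    """
    raw = post.likes + 2 * post.shares + 3 * post.comments
    # Emotionally charged posts get a multiplicative boost -- this is the
    # mechanism that lets panic-inducing content outrank calm analysis.
    return raw * (1.0 + post.emotional_intensity)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, sourced explainer", likes=120, shares=10, comments=5,
         emotional_intensity=0.1),
    Post("EVERYONE PANIC NOW!!!", likes=100, shares=15, comments=10,
         emotional_intensity=0.9),
])
print(feed[0].text)  # the panic post ranks first despite fewer likes
```

In this sketch the calm explainer scores 155 × 1.1 = 170.5 while the panic post scores 160 × 1.9 = 304, so the panic post wins the top slot even though it has fewer likes. Multiply that bias across billions of ranking decisions per day, and the feed-level effect is exactly the one described above.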

Echo Chambers

Then we need to talk about echo chambers. Echo chambers aren't a new phenomenon, but the internet's unlimited reach has made them worse; no matter what opinion you hold or what subculture you belong to, you can find people who agree with you.

When you do find people who agree with you, and reaffirm your existing beliefs, you’re naturally inclined to engage with them more often. You subscribe to subforums that already agree with your stance. You remove friends and followers who don’t agree with your stance. You perform searches phrased in a way that heightens your existing confirmation bias.

In other words, if you're panicking about a forthcoming natural disaster or pandemic, you're probably selectively engaging with other people who are panicking about it, whether you realize it or not. This creates a feedback loop in which people amplify each other's panic, regardless of whether it's justified. In extreme cases, you may permanently silence anyone who disagrees with you; Facebook, for example, makes it easy to "unfriend" or "unfollow" someone who comments on your panicked thread to suggest you might be overreacting. Once their voice vanishes from your feed, you can engage all the more freely with the ideas you've already settled on.

None of the statements made in this article are intended to undermine the severity of pandemics, natural disasters, or similar events, or imply that extensive measures aren’t necessary to respond to these threats. However, we need to be acutely aware of the role that social media and similar technologies play in our lives, especially as those technologies become more abundant and more powerful. Artificial panic could hurt us more than we’re inclined to realize—if not now, then in the future.

