Why we can’t blame social networks for our polarized politics


As sentiment about big tech companies has worsened, emerging conventional wisdom has held that social networks are primary causes — and accelerants — of polarization in the United States. The rise of social networks has been roughly correlated with the rise of authoritarians here and elsewhere. Surely social networks, with their algorithmic feeds pushing the most emotional posts to the top of our attention, are warping our politics?

New research suggests that this may not be the case. In a working paper published this year, Levi Boxell, Matthew Gentzkow, and Jesse Shapiro found that polarization had increased faster in the United States than in any of the other countries they studied — but that in several large, modernized nations with high internet usage, polarization was actually decreasing.

“One theory this lets us reject is that polarization is a byproduct of internet penetration or digital media usage,” wrote Ezra Klein, my Vox Media colleague, in a piece last month. “Internet usage has risen fastest in countries with falling polarization, and much of the run-up in US polarization predates digital media and is concentrated among older populations with more analogue news habits.”

Ezra expounds on these dynamics in his fascinating new book Why We’re Polarized, discussion of which has dominated both my Twitter and podcast feeds since it came out last month. (I particularly enjoyed his chats with Jill Lepore, Jamelle Bouie, and Ta-Nehisi Coates.) Recently I invited Ezra to answer a few questions I’ve had about the Boxell study and other issues raised by his book, and he was gracious enough to agree.

Recent research doesn’t quite let social networks off the hook. But Ezra’s analysis has challenged some of my own beliefs about Facebook, YouTube, Twitter, and their collective effect on our politics. I hope you enjoy the conversation — and I encourage you to pick up his book here.

Casey Newton: One of the more counterintuitive arguments that you make in your book, at least for me, is that social networks aren’t polarizing in the way that we often think they are. How did you reach that conclusion, and why do you find the research persuasive?

Ezra Klein: Hmmm. I wouldn’t go so far as to say they’re not polarizing. What I’d say is they’re not core to the broad story of polarization, much of which predates social media. There’s also not much evidence for the echo chamber effect, at least not in the way people tend to think of it. I cite, for instance, a well-designed experiment in which Democrats and Republicans on Twitter were paid to follow people from the other side. The exposure made the Republicans more conservative and the Democrats, if anything, more liberal, though the effect on Democrats wasn’t statistically significant. I also cite some experiments on cable news, which has similar dynamics. In those studies, people were forced to watch, and the big finding was that the only people whose minds were changed were those who didn’t want to watch in the first place; when the researchers altered the study design so participants could watch something non-political instead, those people did, and the persuasive effect melted away.

That’s all to say that the people who read politics Twitter or watch cable news tend to do so because they already know what they believe, and they’re following politics to track whether their candidates, party, or ideas are winning or losing. They’re not easily persuadable.

I found this all surprising because of a conversation I had last year with someone who worked on these issues for one of the big social networks. They told me that the people most likely to post about politics are the most partisan. As a result, if you look at Facebook or Twitter a lot, you tend to only see the most partisan opinions. Over time, this person said, that can’t help but have a polarizing effect.

Is it possible that the research on this subject to date simply hasn’t been done over a long enough time frame to have captured an effect?

I think it does have a polarizing effect, but it’s primarily polarizing because it further polarizes elites, who then act in more polarized ways, which create more polarized choices and situations that the mass public has to respond to. An example of this is impeachment. Donald Trump and Rudy Giuliani got very invested in a Joe Biden conspiracy theory promoted on Breitbart and Fox News. That led them to invest considerable administration resources in trying to prove the conspiracy true, or to tar Biden with it. That led to the whistleblower report, and then to the hyper-polarizing impeachment process. So there was a dynamic here in which the ultimate political elite — the president — responded to the polarized political media he consumed by doing something that then everyone else had to respond to. You don’t need a big audience to change American politics; you just need the right audience.

That, I suspect, is more how social media is polarizing: political elites are on Twitter every day, and for all the warnings that Twitter isn’t real life, it feels like real life to them. They’re stuck in a hyper-polarized informational system, and it influences the decisions they make, the candidates they support, the messages they emphasize, the stories they focus on. When people say Twitter isn’t real life, they mean it’s not representative of mass opinion. They’re right! But neither is American politics. Political elites have an outsized effect on the structure of politics, and if they become more polarized, and act in more polarized ways, that will ultimately polarize the public simply by presenting them with very polarizing choices to respond to.

One simple way to put this is that the 2020 election looks likely to be between Bernie Sanders and Donald Trump. Both of those candidates have been buoyed by intense social media fandom, in a way that helped them triumph over less controversial, more coalitional competitors. They present a far starker — and thus more polarizing — choice than, say, Al Gore and George W. Bush, who ran their campaigns in ways that muddied the differences between them in 2000. I think social media is part of why politics is selecting for sharper-edged candidates, and that’s leading to a different political reality that even those who aren’t on social media need to face.

Before I started reading your work on this, my instinct was that platforms should take steps to become less polarizing. But you make the point that the mid-century period when American politics was least polarized may have been a historical anomaly — one that emerged from a racist compromise between Democrats and Dixiecrats. In this view, it’s not at all clear that polarization itself is a problem — in a democracy, we’re meant to fight about things!

Let me break that into two pieces. First, the alternative to political polarization is often political suppression — disagreements get suppressed rather than resolved. That’s certainly how the political system treated civil rights for much of the 20th century, bottling up bills in the Rules Committee, filibustering them in the US Senate, or neutering them through agreement with the Southern Dixiecrats. But that doesn’t mean polarization is always and everywhere good, nor that the social media networks shouldn’t think about how to reform themselves. I have a longer discussion in the book about the kind of speech and voices that get selected for by algorithms sorting on the intensity of emotional response, but I don’t think that’s a great foundation atop which to structure political communication.

You explain how polarization makes governance much harder in the United States, and all the very serious problems that come along with that. But it’s now much less clear to me what platforms ought to do about it, if anything. If platforms found a way to promote agreement and consensus-building, should they? (Twitter has been studying this for almost two years now, with almost nothing to show for it so far.)

In general, I don’t think the platforms are going to fix the problems of American politics. But I do think they could fix the problems of the platforms. People have a million ideas here, but I’ll just state the most obvious: I think the move towards algorithmic feeds that select for content that triggers an intense emotional response is just a bad way to structure communication. I think supercharging our social instincts often brings out the worst, not the best, in us — few look back fondly on the social dynamics of high school cafeterias, and for good reason.

One thing I’ve been doing recently is reading past critics of television, like Neil Postman and Jerry Mander. And something that’s striking about their arguments is they were largely (though not entirely!) right. In many cases, the rise of televisual culture created problems much worse than anything they could’ve imagined, and an approach to politics-as-entertainment that would’ve read like parody if they had included it as a thought experiment in their books. I think there’s a dominant view that every new communications medium comes with critics but we’re still here and the medium prospered so the critics must’ve been wrong, right? No. Sometimes the critics were right, and what they feared came to pass, and we just learned to live with it.

I don’t know what I’d do with these platforms if I ran them. Being inside a system, as I argue in the book, warps your judgment. But to use Twitter’s emphasis on healthy conversations as an example, what if the way to have a healthy conversation is simply inimical to the nature of Twitter — that is to say, what if you’d never, starting from first principles, built healthy conversations around 280-character bursts that are judged by the social reaction they create among an audience that’s consuming them in an environment that is, to say the least, not conducive to reflection?

Finally, you wrote a good piece last week about how a Michael Bloomberg victory in the Democratic primary would set a bad precedent, in part because it would represent the triumph of raw spending power over any other candidate attribute. (As of last week Bloomberg had spent $417 million, much of it on Facebook, Google, and Twitter ads.)

So far, it seems that Democratic voters are rejecting Bloomberg’s candidacy in large numbers. Assuming you’re right, and that Sanders wins the nomination, will that suggest that these days it’s better for a candidate to be polarizing than to be rich?

I don’t think there’s any doubt of that, actually. We saw it in 2016, too: Money matters, but if it was the only thing that mattered, Jeb Bush would be president now.

The Ratio

Today in news that could affect public perception of the big tech platforms.

Trending up: YouTube might start letting creators sell ad space on their own videos. Tom Leung, YouTube’s director of product management, said the feature is already being tested in “a very small pilot.” (Tubefilter)

Trending down: A researcher is raising questions about how TikTok’s recommendation algorithm suggests new creators to users. Specifically, their observations suggest the algorithm could sort suggestions based on the race of the creator.

Trending down: Twitch suspended multiple channels for hosting their own Democratic presidential debate coverage. The move came after the company received copyright claims that later turned out to be false. “We regret that a false notice from a 3rd party disrupted any of our streamers,” the company said.

Governing

⭐ An appellate court ruled that YouTube can moderate content without violating the First Amendment because it is not a public forum. The decision was part of a 2017 case regarding radio talk show host Dennis Prager, who sued Google for allegedly not giving his conservative PragerU videos the same treatment as liberal ones. This is great! Here’s Ashley Cullins at The Hollywood Reporter:

“Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” writes McKeown, adding that both the First Amendment and Supreme Court precedent present “insurmountable barriers” to PragerU’s argument.

⭐ Tech giants teamed up to push back on draconian new internet regulations in Pakistan — and the Pakistani government actually backed off. Facebook, Google and Twitter could have faced severe penalties had the regulations gone into effect. (Vindu Goel and Salman Masood / New York Times)

The standoff over Pakistan’s digital censorship law, which would give regulators the power to demand the takedown of a wide range of content, is the latest skirmish in an escalating global battle. Facebook, Google and other big tech companies, which have long made their own rules about what is allowed on their services, are increasingly tangling with national governments seeking to curtail internet content that they consider harmful, distasteful or simply a threat to their power.

The US Justice Department rebuked Google for failing to promptly turn over critical information to government officials who are investigating the company for possible antitrust violations. In a letter, the department called the delays “unacceptable.” (Tony Romm / The Washington Post)

Clearview AI’s facial recognition app has been used by the Justice Department, ICE, Macy’s, Walmart, and the NBA. The company’s leaked client list reveals it is working with more than 2,200 law enforcement agencies, companies, and individuals around the world. This report contradicts company statements that it only made its technology available to government agencies. (Ryan Mac, Caroline Haskins and Logan McDonald / BuzzFeed)

The Bloomberg campaign is putting out more memes on private Instagram accounts — a tactic that helps them grow their follower base and avoid scrutiny from journalists. It’s one of the many social media strategies the campaign has employed that have tested Facebook’s policies in recent weeks. (Taylor Lorenz and Sheera Frenkel / The New York Times)

Teens are forming political coalitions on TikTok to campaign for candidates, post news updates, and fact-check opponents. The trend shows how difficult it is for any social media platform to stay apolitical. (Taylor Lorenz / The New York Times)

A pro-Trump super PAC is currently running a misleading TV ad targeting Joe Biden. The ad weaponizes audio of former president Barack Obama against Biden, in what is clearly an effort to turn African American voters against him. (Greg Sargent / The Washington Post)

Facebook filed a lawsuit against OneAudience, a data analytics company that improperly accessed and collected user data from the social media platform. The company said OneAudience paid app developers to install a malicious software development kit in their apps to collect data. (Facebook)

Facebook paused its rollout of election reminders in Europe, after privacy regulators raised concerns. They asked Facebook to provide information about what data it collects from users who get the notification and whether it uses the data to target them with ads. (Natasha Lomas / TechCrunch)

Industry

⭐ Robots aren’t taking our jobs: they’re becoming our bosses. In warehouses, call centers, and other sectors, intelligent machines are managing humans, and they’re making work more stressful, grueling, and dangerous. Fantastic, wide-ranging investigation from The Verge’s Josh Dzieza:

These automated systems can detect inefficiencies that a human manager never would — a moment’s downtime between calls, a habit of lingering at the coffee machine after finishing a task, a new route that, if all goes perfectly, could get a few more packages delivered in a day. But for workers, what look like inefficiencies to an algorithm were their last reserves of respite and autonomy, and as these little breaks and minor freedoms get optimized out, their jobs are becoming more intense, stressful, and dangerous. Over the last several months, I’ve spoken with more than 20 workers in six countries. For many of them, their greatest fear isn’t that robots might come for their jobs: it’s that robots have already become their boss.

Facebook canceled its F8 developer conference due to coronavirus concerns. The company said it will replace the main F8 conference with “locally hosted events, videos and live-streamed content.” And now no one has to drive to San Jose!

In a podcast interview with NBC, Facebook COO Sheryl Sandberg defended Facebook’s data collection practices. “There is growing concern, which is based on a lack of understanding, that we are using people’s information in a bad way. We are selling it. We are giving it away. We are violating it. None of that’s true. We do not sell data,” she said. Listen to this one — I’ll likely have more to say about it next week. (Dylan Byers / NBC)

Facebook hired the World Economic Forum’s former head of tech policy, Zvika Krieger, as its new director of responsible innovation. The move is part of the company’s attempts to address ethical issues earlier in the design and engineering processes. (Ina Fried / Axios)

On TikTok, advertisers are seeing views for US hashtag campaigns in the billions. It’s a surprising metric given that TikTok has been downloaded only about 145 million times in the United States. (Sarah Frier and Kurt Wagner / Bloomberg)

TikTok is turning teens into celebrities, almost overnight. But what happens when that fame starts to fade? How is this already happening?! (Rebecca Jennings / Vox)

The CEO of Reddit called TikTok “fundamentally parasitic.” The app is “always listening,” he added. “The fingerprinting technology they use is truly terrifying, and I could not bring myself to install an app like that on my phone.” (Lucas Matney / TechCrunch)

The royal Instagram account shared by Prince Harry and Meghan Markle is perpetually lagging just behind the follower count of Prince William and Kate Middleton’s shared account. The digital hierarchy has sparked a conspiracy theory about why Harry and Meghan can’t catch up. Incredible forensic data analysis here. (Caity Weaver / The New York Times)

YouTube appointed its first “creator liaison,” Matt Koval, to give creators more consistent information about changes that will affect their day-to-day lives. Koval is a former YouTube creator who joined the company full time as a lead content strategist in 2012. This honestly sounds like one of the most stressful jobs in the entire world! Imagine his email inbox and shudder! (Julia Alexander / The Verge)

A subreddit called r/wallstreetbets, with 900,000 users, is awash in tips and tactics that have an uncanny ability to push stock market prices, at least in the short term. Now, even veteran traders are paying attention. (Luke Kawa / Bloomberg)

LinkedIn is testing Snapchat-like stories. The company is calling the feature “a new conversational format” for business conversations. LinkedIn is just Facebook in slow motion, volume infinity. (Chaim Gartenberg / The Verge)

Jada Pinkett Smith has created a safe space for black celebs involved in scandals on her Facebook Watch show, Red Table Talk. But the conversations, which sometimes stay surface level, also let people off the hook. (Michael Blackmon / BuzzFeed)

And finally...

Imagine waking up one day and learning on Instagram that your ex-boyfriend of seven years is dating Lady Gaga. This happened to Lindsay Crouse!

Maybe a decade ago I would have subscribed to US Weekly. Today there’s no need: I have the parade of people in my phone. I mix “real” celebrities with people I know and I can curate it all however I want. Then I scrolled through Instagram and saw a post from Lady Gaga: she was sitting in her new boyfriend’s lap. Friends from college liked it — along with nearly three million others.

Crouse seems to be taking only the best, most inspirational lessons from this entire state of affairs. Read it!

Talk to us

Send us tips, questions, and your most polarizing comments: casey@theverge.com and zoe@theverge.com.
