What Does a Fact-Check-Free Facebook Mean for Trump’s America?
Priscilla Chan, Meta founder Mark Zuckerberg, Lauren Sanchez, Amazon founder Jeff Bezos, Google CEO Sundar Pichai, and Tesla CEO Elon Musk attend Trump's swearing-in ceremony on Monday. (Photo by Chip Somodevilla/Getty Images)

This year’s Inauguration Day seating chart crammed a whole lot of Silicon Valley into Washington: Meta CEO Mark Zuckerberg, Amazon founder Jeff Bezos, Google CEO Sundar Pichai, and X owner Elon Musk planted themselves in the Capitol’s Rotunda to watch President Donald Trump take his second oath of office.

Earlier this month, Zuckerberg announced that Meta, which owns social media giants Facebook and Instagram, will end its use of third-party fact-checkers to moderate content on its sites. Instead, Meta platforms will gradually shift to a “community notes” system inspired by one Musk implemented on X: Designated contributors can leave notes on posts to flag factual errors, and if enough contributors “from different points of view” deem the note helpful, it will be publicly attached to the post. The company’s stated reasons for the move (it said the relaxed restrictions would reduce political bias and promote free speech) paralleled the language Trump used to criticize Facebook’s fact-checking procedures during his first term.

What does any of this mean for the American scroller? Washingtonian spoke with Gabe Robins, a computer science professor at the University of Virginia who specializes in algorithms. (Last month, he walked us through the TikTok ban, which continues to hang in the balance.)

Washingtonian: More than half of US adults say they regularly get news from social media, according to a Pew Research Center poll. Facebook and Instagram are leading sources. What does Meta’s decision to stop fact-checking content on its platforms mean for our information landscape?

Gabe Robins: Most people easily believe what they see on these social media platforms, which is sometimes alarming, given all the conspiracy theories and all the misinformation that’s rampant—especially with AI tools. So the line between entertainment and news is very blurry these days, and it’s getting even more blurry.

Let’s talk about the underlying principle of free speech. Most people have misconceptions about what free speech means. The right to free speech protects you from persecution and imprisonment because of what you said. Free speech protects you from jail—not from consequences in general. The bottom line is that just because you have the right to say something doesn’t make it a good idea to actually say it. And even free speech itself has limits. Death threats are not free speech. Hate speech is not free speech. Incitement of violence is not free speech.

So here we can segue into fact-checking. Platforms such as Facebook and Instagram are full of information of all kinds, including legitimate news stories, but also conspiracy theories and outright lies and even hate speech. Should platforms such as Facebook and Instagram be fact-checked? It’s a nuanced question. The answer is not necessarily yes or no. It depends on how these platforms present themselves to the public, and how people perceive these platforms. For example, people who perceive social media as mostly entertainment—like they read a novel or see a movie—for them, fact-checking is not particularly critical, because they don’t necessarily believe everything they see. But people who go to these platforms for information and world news—they’re much more vulnerable. And there’s a lot of these people.

So if there was no fact-checking at all, that can easily cause the platform to devolve into so much misinformation that it’d be almost pointless to go there for any kind of news. But the company can save money on the fact-checkers, so part of what Facebook is probably trying to do is eliminate thousands of jobs. But on the other hand, even that’s not so simple. Will they really save money by limiting fact-checkers? Because if you eliminate all fact-checking, the platform can quickly devolve into chaos and you can have huge legal liabilities. Also, a lot of advertisers wouldn’t want to run their ads on platforms that include hate speech, death threats, and all sorts of other unpleasant discussions.

And so it’s not clear, if you eliminate all fact-checking, whether that will be financially beneficial to the company or financially hurtful to the company. Excessive fact-checking can also cross a line into censorship. It’s one of the axioms of democracy that you want free speech and you don’t want censorship. Bottom line is, Facebook is preparing to run a very interesting social experiment here.

If we’re looking at this as a social experiment, then Zuckerberg isn’t even the pioneer—he cited X as a model for the new fact-checking procedures that Meta will use. Have experts learned anything from how X’s community notes system works?

All this is being looked at very carefully by sociologists and psychologists and political analysts and cultural scientists. So the jury’s still out, I think, on how effective each particular system is, and you have to start comparing it to alternative ways of fact-checking or not fact-checking. So I think it could be a while before we have a generally clear understanding about what works best. In the social media universe, we’re still in the Wild West. We’re exploring a lot of things. In five or ten years, we’ll have a much better idea of what works and what doesn’t work. But right now, I think it’s too early to tell.

Zuckerberg cited political bias, which we touched on a little earlier, among the third-party fact-checkers that Meta was using as the chief reason for overhauling that system. Given the way that Meta’s algorithms work and how that information is presented to users based on what they engage with, do you expect a community notes procedure to limit political bias at all?

Generally, I would say a cautious yes. But the reason I’m not 100 percent optimistic is because generally, free speech is very important in cases where you don’t like what you hear. People who are in agreement with you, who just echo your own thoughts and beliefs—of course you’ll never accuse them of fake news.

This kind of fact-checking by consensus I think generally is good, but it may not always resolve it one way or the other. And I’ll give a relatively recent example: the January 6 events of 2021. About half the country believes it was a violent insurrection. And many people indeed were hurt, and a few people died. The other half of the country believes it was a “day of love.” We had four years to hash and rehash these events, and we still can’t come to an agreement.

I think the long-term solution—I’m talking about a generation or two from now—is we need to educate people better. If more people had a chance to have an education for relatively low cost or even free, we’d have a public that is a lot harder to fool, one that instinctively questions what it reads. Some people don’t have this instinct. And then they not only believe it, but immediately forward it to all their friends and family.

Are there things that we can do now?

Sure. We talked about the up-votes and down-votes of users’ particular posts—that can definitely help. But for certain things, it will not put a dent in it. And so it’s kind of a catch-22 for executives, like those at Facebook. What should they do? The dust won’t settle on that for a long time, I think, until sociologists and psychologists and political scientists study all these things. And educating the public takes even longer. You’re gonna have to educate your children so that they can educate their children better.

Tech titans like Zuckerberg, Musk, and Bezos are publicly expressing support for the Trump administration, and these changes at Meta were announced in the direct run-up to his inauguration. What do you think the implications of that shift in dynamic are for people who consume these tech products and use these social media platforms?

One of the fundamental issues with watching over others—whether it’s fact-checkers or the government watching over us, making sure we’re not breaking laws and so forth—is the question of who watches the watchers? And there’s no simple answer for that. It’s all a game of checks and balances. If one branch of government is overwhelmingly strong and the others are weak or nonexistent, you basically have a monarchy. Whenever any one group or one individual has had too much power, it has ended not just in abuse, but in tremendous, horrific abuse, to the point of wars and genocides.

And so it’s a bit alarming to see that certain billionaires or people with great influence begin to take more extreme positions and begin to impose themselves more on others. Social media platforms—they exert a continuous, subtle, and sometimes not so subtle, relentless influence on all of us, especially children and teenagers. And if it continues to be left to its own devices, unchecked, tied to corporate greed, it could lead to some negative consequences. To put it mildly.

So yeah, I share the concern of many about that, and I’m not sure how it’s all going to pan out. I’m hopeful that, given our system of checks and balances, everything will eventually equalize and stabilize on some reasonable outcomes. I sometimes remind myself that despite alarming news stories, our Constitution has been around for nearly 250 years, and it’s pretty much intact.

The post What Does a Fact-Check-Free Facebook Mean for Trump’s America? first appeared on Washingtonian.
