Who Is to Blame for Fake News?

In an interview with the Post, author and media scholar Marcus Gilroy-Ware talks about simultaneous crises in journalism and democracy.



Ever since the whole world went online, we’ve faced an onslaught of news-seeming websites peddling fiction. False claims and made-up stories spread quickly on social media, and the truth is more and more difficult to find. Can you even trust mainstream news sources?

There are plenty of theories for why fake news and political paranoia proliferate in the internet age, but writer and media scholar Marcus Gilroy-Ware says they often miss the mark. His recent book, After the Fact? The Truth About Fake News, looks for the source of the current crises in journalism and democracy.

After writing and lecturing about social media, Gilroy-Ware found that many common explanations for rampant disinformation — like a lack of social media oversight or natural polarization — aren’t taking the scope of the problem into account. In our interview, he described what his research shows about a suspicious and politically alienated population.

The Saturday Evening Post: If we’re going to talk about “fake news,” these words “misinformation” and “disinformation” tend to come up a lot. What is the difference?

Marcus Gilroy-Ware: The main difference between misinformation and disinformation is that disinformation is when you are being deliberately misled, for any reason, while misinformation is not deliberate. Crucially, misinformation is not something that can be as easily identified even if there’s much more of it. In fact, we can be misinformed in ways that actually make us feel very informed.

SEP: What are some examples of either?

MGW: As I say, misinformation is a far broader concept, so examples of misinformation are abundant. Take, for example, the number of people in the U.K. who are Muslim. When you ask the population, they estimate it’s somewhere around twenty-four percent, when actually it’s about five percent. So, this is an example of misinformation. It’s a commonly held misunderstanding of the world. These can be misunderstandings, erroneous beliefs, simplifications, false dichotomies, and so forth. I argue that these are all types of misinformation.

As for disinformation, Donald Trump’s term is about to come to an end, but when he was sworn in in January 2017, there was a kind of tug-of-war around how many people attended the inauguration. The incoming administration was claiming that it was the most-attended inauguration ever, which wasn’t true and they knew it. Of course, we should always be cautious about accusing anybody of deliberately misleading the population, but the pictures showed that it was not true and they kept saying it.

But the line between misinformation and disinformation is porous: if, having been lied to in some way (which is disinformation), you believe the lie and carry on passing the lie on because you believe it to be true, that same information now becomes misinformation. In the case of the inauguration, many people who voted for Trump really liked the idea that that was true, so they could believe it in ways that one could argue it was misinformation instead of disinformation. So, the two things have a rather complicated relationship with one another.

SEP: I remember, around 2016 or so, noticing the term “fake news” becoming mainstream. Then it started being used by the president to describe big news companies, and that sort of warped our understanding of it. Reading your book, you seem to be making the case that it’s hard to understand any of that without taking into account a longer view of media and government.

MGW: What is ironic to me is that there is misinformation in the ways that we’ve tried to deal with the phenomena of misinformation and disinformation. The words “fake news” correspond to different identifiable phenomena that can bleed into one another but are actually separable. The tendency of certain kinds of websites to spring up and start spouting the kind of content that can be spread around even if it’s untrue — and maybe was deliberately created as a hoax — that’s where the wording of “fake news” comes from. Then we have leaders across the world — not just in the United States — using the phrase in order to avoid accountability when the media do try to hold them to account.

And they’re only able to do that because of the known existence of these kinds of websites and the hysteria around them, which is another form of misinformation, and because journalism itself, as an industry and a set of practices, is something that has been experiencing a kind of crisis, both economically and as far as its credibility. There are a lot of really great journalists out there doing really great work, but it is easy to find examples of bad journalism.

So, you’ve got all of these problems in what I call the “information ecosystem” that don’t necessarily share their origins, but they do exacerbate one another. In the book, I write that “fake news is fake news.” The two words kind of obscure and confuse the problems of misinformation and disinformation more than they allow us to understand them properly.

SEP: You write about “illiteracy” in both a literal sense and more broadly relating to our understanding of systems like capitalism and government. How would you say this contributes to a fake news crisis?

MGW: I think the ideas of literacy and numeracy are really important for equipping people to deal with a complex world, but the main point I wanted to make was about a broader form of literacy, within which the literal problems of illiteracy and innumeracy make things worse. You can have a kind of static knowledge about the world, or you can have a systemic knowledge that really enables you to understand the world as it turns — an understanding of how systems such as power, media or the natural environment actually work. Democracy relies on us having an informed and literate population in order to make good decisions about who to elect, particularly when we face crises that affect all of our well-being, and what I wanted to write about was the breakdown of that system.

One of my great frustrations is seeing people who are very exercised about aspects of a system but don’t actually have a deep understanding of the workings of that system. As much as I would like to say otherwise, I see that that is relatively consistent across the political spectrum. It’s worse in some parts than others, but generally there isn’t a “side” that doesn’t have some element of that.

Chiefly, I think the issue is in people not being literate in understanding what power actually looks like. In the capitalist portions of the world where we believe we have these freedoms — freedom of speech, the right to bear arms, religion, and so on — power is all around us, and it’s exercised in ways we don’t understand, which is greatly disempowering. Our democracies have been sort of hollowed out — both in the U.S. and in Europe — and we’re very quick to fault our elected officials and hold them to account for what’s going wrong, but we’re very bad at realizing that their power is very limited because of the way the corporate world has taken over that power.

If you want to solve a problem, you have to understand what the cause is. The virus has shown us that we really don’t understand public health all that well. Before 2020, we didn’t really know much about airborne viruses and how they spread. Those who did had a major advantage. This is another example of this kind of knowledge about the world — how things work, not just what they are. Everyone is illiterate about something, myself included, but we all need to be as literate as possible in response to the problems we collectively face.

SEP: People in recent years, I think, have increasingly talked about this Dunning-Kruger effect, which you mention briefly in your book. Could you explain what this is and how it enters into the problems we’re talking about?

MGW: The Dunning-Kruger effect is the idea that we’re basically unaware of our own limitations, cognitively and intellectually speaking. The most intelligent people I know are usually the quickest to express humility and self-doubt about how much else they don’t know. Meanwhile, a lot of the loud, ignorant voices that we hear in the world are convinced that they do have all the answers; they don’t have the capability of recognizing their own limitations. This is a summary of how that unawareness of our own limitations affects political discourse. The more you know, the more you know you don’t know. People who don’t know very much tend to think they know quite a lot. It’s contrary to what we would expect, because we tend to think of knowledge as something that’s cumulative.

SEP: Conspiracy theories have taken up a lot of air this year in national news, and you write about “knowledge gaps” or “loud unknowns” that allow them to proliferate. Can you explain your understanding of these knowledge gaps and how they occur and how they’re exploited?

MGW: The concept of the “loud unknown” comes from the work of the sociologist Linsey McGoey. Loud unknowns are, I think, a concept that would be intuitively familiar to a lot of people. Unknowns can come from anywhere, but with the din of the mediascape and the information ecosystem, what makes a “loud” unknown is when a key piece of information — an important and highly relevant fact that the population needs in order to reach the correct conclusions — is conspicuously absent despite that overall abundance of information. Governments will make very calculated statements telling the population what they think they need to know. Corporations will do the same. So, a lot of things remain unsaid, and, to a certain extent, the public is sensitive to this.

But unknowns or “gaps” can also arise because of basic ignorance. When it came to the pandemic, there was no reason to doubt the idea of an airborne virus making people ill, because what we know about viruses entirely supports that. But if you haven’t got that knowledge, the idea that you are supposed to stay at home, forgo certain activities, maybe lose your job, and all of this coming from the government, would be highly suspicious. The problem there was that the scientific knowledge to fill that gap was frequently coming from the very same sources that were also communicating those limitations of which people were suspicious in the first place, so it had no effect, because the source is very important.

Unknowns can come from anywhere, but the “loud” part is that they are particularly conspicuous.

SEP: Maybe like the Jeffrey Epstein case?

MGW: That’s another good example. It appeared that there was going to be a trial that would have brought to light a lot of unsavory information in the form of testimony and evidence, and then Epstein died in a jail cell in an apparent suicide, making the impending trial and associated revelations unnecessary. So people are wondering, how did this happen? Was he silenced?

There’s a lot that we don’t know about that, and that’s where suspicions come in to fill in the gaps, and people create or adopt an alternative narrative that enables them to feel they have the “real” story. This is where our alienation from power comes in. We suspect that power is exercised by limiting and restricting information. And it is! But frequently we fill in the wrong gaps, or fill them in with the wrong thing, because we have a suspicion of power already.

One of the concepts I use in my recent book is “knowledge agency,” which is the idea that people fill in these “gaps” in some attempt to take back their power. Even if the narrative they adopt is totally unrealistic, the fact that they feel more empowered is what is key. It’s a complex but interesting phenomenon in the way that these unknowns combine with the power of imagination and suspicion. We know that the exercise of power is hypocritical. We know that it is corrupt, deceitful, and that’s not a left-right issue. Everybody is suspicious of somebody in power.

SEP: Would you say these loud unknowns are more pronounced or more prevalent in recent years?

MGW: No. I think it’s the suspicion that has really increased. Also, while I’m very careful not to blame technology for these things, I do think the dynamics of user-generated content in the last ten to fifteen years have enabled a kind of amplifying effect. If I was a conspiracy theorist before, I could maybe go to my local bar and talk to some people there about what’s really going on, and maybe I would convince three people if I’m good. But now, I can go online, to the back corners of Facebook, and I can expose maybe a hundred people to the same ideas. One hundred people who are also suspicious, who also have knowledge gaps about what’s going on and are looking for answers.

We’re very good at talking about the left and the right being polarized, which is true, but what we haven’t figured out is that there’s a real polarity in terms of how suspicious you are. I wrote a chapter on conspiracy theorists who I consider to be too suspicious, but the chapter following that is on those people who are too invested in the status quo, and who are basically not suspicious enough of the systems we have. In both cases these are examples of misinformation and of a sort of failure in literacy in terms of who to be suspicious of, why, and when.

All of this is really about the systems that we live by, and how much do you want to change them or keep them, and these divisions exist within parties as well as between them. For example, those Democrats who protested Trump by saying “If Hillary had won we’d be at brunch right now” are not the same Democrats who are campaigning to organize Amazon workers for better treatment and benefits and pay. The same if you look at free-market Republicans and nationalist Republicans.

SEP: What sort of a world do these conspiracy theories and fake news sites present to people, in terms of power, good and evil, and other big ideas like that?

MGW: Well, I think they present a very Manichean world that’s like, “the world is mostly evil, but here are some good people, and you can be part of the good side.” This understanding of the world is very common amongst conspiracy theorists, but also a general feature of an oversimplified media desperate for headlines. But beyond that, I don’t know that there’s one type of world that all conspiracy theories and fake news sites present. The point of us talking about conspiracy theories is more the idea that disinformation has its origin in the structures of power. I would say the governing feature of conspiracism is an overabundance of suspicion, so the world it presents is one in which basically nobody can be trusted.

That’s reflected culturally in the current moment. Conspiracy theories were mostly about aliens when politics was a lot less fractured. Now, distrust is inflected through different aspects of the political and cultural environment. Even if you want to imagine a sort of gigantic conspiracy of pedophiles and liberals, and also with some unsavory antisemitic elements, a lot of it comes back to the political grievances and alienation that people have from knowing deep down that the country is run by and for a relatively small elite in an economic sense, and the way that this is being held up as a democratic system that we should embrace, when it isn’t! It’s a reaction against that. So it isn’t really necessarily about any one type of world that is presented; it’s about the position that people are in and the alienation they feel.

SEP: “Echo chambers” and “filter bubbles” come up a lot in discussions of technology and fake news. Can you explain what you find them responsible for or not, and how this differs from other people who attempt to explain such things? Because you seem to have a somewhat divergent view of this.

MGW: Both concepts relate to the problem of exposure. What types of political ideas and debate are we exposed to, and how does this affect shifts in our own opinion? The idea of an echo chamber is essentially that the same views are aired continually in a social or discursive context. Particularly, we talk about them as what we experience online, using social media. So, the charge is that if you are on Facebook, let’s say, you are going to be in an echo chamber because all of your friends and contacts are apparently likely to have the same political views as you, and therefore sharing links that support the same political positions you already hold. The idea of a filter bubble is similar, but it’s more oriented around the idea that the technology of those platforms only selects and exposes you to specific viewpoints.

I have a divergent view of this in several ways. The view is not mine; it’s based on the research of people who study this. In particular, an academic called Axel Bruns. A number of positions and papers I’ve found — actually pretty much everybody who studies this — say these problems are overblown. One of the mistakes people make — and this is called a technologically determinist argument — is to say that this issue of exposure is substantially worse with social media than other forms of human social interaction. So, when you’re making your way through the world in general, you’re said to be more likely to encounter a variation of political viewpoints, but somehow when you go online, suddenly you’re not going to have that exposure to different viewpoints. Therefore, you can supposedly become more extreme in your own views from being exposed to the same things again and again.

The problem is, there is really no evidence to suggest this is the case. In fact, there’s some evidence to suggest you’re exposed to a greater degree of political variation when you are using social media platforms. Facebook is the most studied one. When you go to Facebook after you haven’t been on for a couple of days, people have posted links and your timeline reads a certain way, is anyone really saying that everything in there is Republican or that everything in there is left, democratic socialist, whatever? It’s not my experience, and it’s not the experience that most people have.

The other side of this is that if there is an echo chamber element to the way we talk about politics, that is something that is surely much bigger than our use of technology. Before the pandemic, you would do things like going to the bar to meet with friends. We tend not to have friends that have wildly divergent views beyond the point that we would find their views problematic. So, actually this is something that’s an ordinary part of human social life. All media are social, and when we call these platforms “social media,” we forget that reading books is a social process, gaming is a social process, watching the news is a social process. These things fulfill a role in our lives that is social even when they’re not called “social media,” and so the questions of exposure and discord are also much bigger than just using online platforms.

Also, we often don’t go online looking for political content or discussion, and next to all of the other types of things we’re likely to find on social media, why is a political story or disagreement the thing we’re most likely to focus on? Most people are more likely to talk to family members and close friends that they already trust about political issues. Then we have the influence of other things like satellite or cable TV which are much more likely to influence people’s views than social media, especially over time. So, the whole idea that we should suddenly wring our hands about the effects of filter bubbles online — and the polarization they cause — is a form of misinformation in itself. And it doesn’t help us solve the real problems we face.

SEP: Why is that idea so prevalent, that echo chambers and filter bubbles are really what’s at work here?

MGW: Because intuitively it makes sense, even if it’s empirically wrong. It’s an easy idea to understand, even if it doesn’t bear out. A lot of misinformation is very tempting and feels right, even if it isn’t. A more general version of this problem is also that we so often want to blame technology for problems that are not technological. It’s convenient and quick and saves us from having to contemplate difficult political issues. There’s this problem of fake news, so “let’s change the technology to make this go away.” The whole point I’ve been trying to make is that these are political problems that were not caused by technology so you cannot fix them with technology.

When I wrote about social media in general in my last book, and the patterns of compulsive social media behaviors — four in ten Americans admit to checking social media on a smartphone behind the wheel of a car — you have a big problem there that seems technological. So the apparent solution is to blame the technology. Technology companies are culpable for a lot, and Google and Facebook mercilessly exploit and exacerbate these problems, but blaming the technology itself is a way of not having to solve the problem properly. If fake news has its origins in political alienation and the hollowing-out of our democracy by corporations, but we insist that we can solve fake news by adding more features to the Facebook timeline and disinformation resources, then we have something that feels like a solution but isn’t one, and this is more dangerous than having no solution at all.

Featured image: jumpingsack on Shutterstock/ Reagan Freeman on Unsplash

