
Post-truth: how did we get here?

Did you know that Cleopatra lived closer in time to the construction of the Bass Pro Shops pyramid in Tennessee than to that of the Giza pyramids? That if you break the terms of service of your Samsung smart fridge, the company is within its rights to remotely and unilaterally terminate your appliance and, by extension, rot all the food inside? That the Dave Matthews Band dumped 800 pounds of human waste from a bridge onto a passing passenger boat? Actually, that second fact is completely made up. Maybe it’s true, but I’m not about to read the terms of use for a fridge I don’t own to find out. But doesn’t it feel like something a company would do? And isn’t that feeling what actually matters?

In the past few years, there has been a notable rise in the use of the phrase “Age of Disinformation” to describe the current state of the world. Indeed, though we now have more knowledge at our fingertips than at any point in the history of humanity, the truthfulness of that information is far from guaranteed. Our most exhaustive collaborative library, and arguably one of our species’ greatest achievements, the Internet, is rife with falsehoods and half-truths presented as facts – at first glance, a natural consequence of its openness and of the subjectiveness of ‘truth’ itself. As so-called digital natives, you would think that our generation had grown up ready to sift through the ocean of information and separate the truth from the drivel; yet there are countless examples of young people falling prey to mob behavior and mass misinformation campaigns online (remember the Wayfair conspiracy, or West Elm Caleb?). These events are just the most salient examples of phenomena which have existed for as long as humans have socialized with one another, but disinformation nowadays seems more intentional, more pernicious, and harder to fight against. In fact, I would argue that many factors in recent years, be they social, political, or technological, have contributed to the birth of the post-truth era we currently live in.

The term disinformation itself comes from the Russian дезинформация, transliterated as dezinformatsiya. The word comes to us from the early days of the Soviet Union, where the tactic was central to the government’s intelligence operations. By weaponizing misinformation (false information spread without the intent to deceive), one can forge one’s own narrative and give it non-negligible credence. But the Soviets were far from the first, or the only ones, to use lies or mistruths to jab at their enemies. From the interpersonal phenomenon of gossip to the larger phenomenon of intergroup prejudice, falsehoods have a strong presence in our social fabric, and so it is only natural that they would be used with more targeted intention in politics. Likewise, the increased use of disinformation comes as no surprise in a world where information is more readily available and the electorate is better equipped to inform itself.

In our lifetimes (though perhaps many of us are too young to remember), the claim that Iraq possessed Weapons of Mass Destruction remains one of the biggest political fabrications, created and propagated to serve the interests of the United States. In the wake of 9/11, which many Americans remember as a bloody attack on their soil without remembering the specific extremist ties of its perpetrators, invading Iraq seemed justified if it was framed as the removal of “a regime that […] harbored and supported terrorists, committed outrageous human rights abuses and defied the just demands of the United Nations and the world[1]”, as the government claimed it would do. In fact, support for the invasion stood at 72% in March 2003, as the invasion began. Today, despite the war accomplishing virtually nothing in the realm of violence de-escalation or the “spread of democracy”, about 45% of Americans still believe the initial invasion was justified[2]. This specific example illustrates what I believe to be the founding principle of the post-truth era: so long as it feeds into our cognitive biases and preconceived notions, we are willing to accept any mis- or disinformation that is presented to us. My personal belief that there are inherently evil refrigerators out in the world, motivated by the self-interested policies of many companies, is entirely trivial; the weaponized belief that another country halfway across the world presents an immediate threat to a nation’s domestic security can launch more than a decade of instability and atrocities.

As we’ve established, disinformation has been, and always will be, part of our society. The recent explosion of disinformation in the mid- to late 2010s can be explained, in my opinion, by the media through which we encounter it. Much ink has been spilled over the attention economy, which incentivizes online platforms to retain our attention for as long as possible to maximize profit. This naturally leads content-driven platforms to devise personalized recommendation algorithms which ensure (with varying degrees of success) that the content being displayed is interesting enough for you to want to keep consuming it. In this way, our online personas are guided towards so-called echo chambers, where we are exposed to content that we enjoy or morally agree with. Most people are somewhat aware of this, since they are the ones who have chosen which accounts to follow or subscribe to and which content to interact with. However, the work of recommender systems is more discreet than simply bombarding a person with content they might like; as soon as one ventures outside of their bubble, it is still in the platform’s interest to keep them engaged. Suggested accounts and posts, along with general search results, are still filtered and sorted such that a person is kept within their bubble for as long as possible. The combination of recommender systems and the sheer amount of online content means that this tailored “corner of the Internet” can often feel representative of the population at large, rather than the small, skewed sample it really is. This, in turn, makes it easy to trust the information we are presented with without thinking about it critically: either it was posted by a trusted peer, or it seems to align with a consensus within a specific content niche. Trust is a powerful communication tool, as it can make us accept information as is, or make us question information we would otherwise consider axiomatic.


The last point I would like to draw your attention to is the breakdown of the scientific process in public discourse over the past few years. Most of us were taught in school that science follows a rigorous process of hypothesizing, testing, and reproducing test results to confirm that they are indeed consequences of our hypothesis. Likewise, researching information means finding a source, ensuring it is a trustworthy one, and then finding more credible sources which arrive at the same conclusion. This can be a lengthy and tedious process, and it is much easier to simply defer judgement to the first source we feel we can trust. It is also easier (and sometimes more aligned with our interests) to accept the first source we can find that confirms our preconceived idea. This online confirmation bias is today’s most powerful tool of persuasion. Indeed, when our most accessible sources are designed to show us content we already agree with, and we feel that the information we find is agreed upon by a large group, it is easy to simply accept it and move on. A salient example exists in recent memory: at the height of the COVID-19 pandemic in North America, anti-restriction and anti-vaccine protesters were screeching about “independent” research which had led them to believe that the vaccines were dangerous, or that the pandemic was a hoax. Despite people pointing out the lack of scientific value of their material, protesters were convinced that their conclusions were correct. On one hand, their confirmation bias was telling them that the misinformation which matched their strongest feelings was credible; on the other hand, their community was so large that content was being generated and distributed at lightning speed, giving them a multiplicity of fake and untrustworthy sources.

Furthermore, platforms are themselves torn between combating mis- and disinformation and protecting the engagement metrics which benefit greatly from its spread. Though some platforms, like Twitter and Facebook, have in recent years rolled out features which allow them to flag or delete misinformation, others, like Spotify, have responded differently to controversy over spreading false information. The Joe Rogan Experience, one of the biggest podcasts in the world, has produced several episodes on “COVID-19 skepticism” and anti-vaccine ideas; when the backlash started, Spotify deferred the decision to remove episodes to Joe Rogan himself. Twitter also notably let Donald Trump publish a multitude of inflammatory posts and disinformation as a “public figure”, before he even entered the Oval Office. The delayed response from both media and platforms in strongly condemning false statements has let public figures spin the most dangerous narrative of them all: that there can be contradictory opinions on empirical facts, that all truth is entirely subjective, and that research should only serve to strengthen one’s opinion instead of informing it.

Our current era of post-truth has been brought about by many phenomena adapting to our online existences and harnessing the speed and vastness of the Internet to amplify themselves. The increasing polarization of online content, as it attempts to cater ever more specifically to ever more divergent groups, has served to reinforce most people’s pre-existing beliefs, and this has been hijacked by many groups to further their social or political goals. Will this era ever end? I’m not sure. As technology grows more sophisticated and fake content starts to seem more real, it will become much harder to fight against our cognitive biases to get to the truth of the matters at hand. In the absence of a concerted effort by platforms to fight misinformation, the most we can do is make sure we think critically about the information we are exposed to online.

Kepler Warrington-Arroyo

On the same theme, the editorial team recommends the following articles:

HOLD-UP : LES YOUTUBEURS CONTRE LA DÉSINFORMATION
– Armin Azarmehr –

LES POPULISTES D’EXTRÊME-DROITE, OU LE MONOPOLE DE L’INFORMATION
– Deborah Intelisano –

INFORMATION WARFARE I – CAMBRIDGE ANALYTICA
– Yasmine Starein –

Sources:

[1] http://2001-2009.state.gov/documents/organization/24172.pdf
[2] https://news.gallup.com/poll/1633/iraq.aspx