Information overload and paralysis
When there is too much to know, one ends up not deciding at all.
George Miller formulated it in 1956: the human brain can process roughly seven items of information at once. Seventy years later, the number of pieces of information we are exposed to each day runs into the thousands.
This imbalance has measurable cognitive consequences. Barry Schwartz calls it "choice paralysis": once the amount of information passes a threshold, deciding becomes difficult, then impossible. A study in the Journal of Consumer Research (2020) found that individuals exposed to an excess of information make poorer decisions than those given a limited but relevant body of information.
In schools, treating virality as a proxy for truth is a direct product of information overload. What the overwhelmed student lacks is a framework for evaluating reliability.
"Too much information is like too much light: in the end, one sees nothing."
In the same spirit: Information bubbles · Digital Anomie
Information bubbles
The algorithm does not show us the world. It shows us our world.
In 2011, Eli Pariser published The Filter Bubble and formulated a hypothesis that would run through the decade: personalisation algorithms build invisible bubbles around us in which only information aligned with our pre-existing beliefs circulates.
TikTok can identify a new user's interests in less than ninety minutes of browsing. The algorithm knows, before you do, which content will hold your attention.
A large-scale study in Science (González-Bailón et al., 2023) analysed 208 million Facebook users during the 2020 U.S. election. It confirmed massive ideological segregation while showing that algorithms are not solely responsible: users' own choices about whom to follow also play a major role.
"The algorithm does not lie. It chooses. And that is the problem."
In the same spirit: Information overload and paralysis · Algorithmic conspiracism
Fake news and disinformation
The lie travels six times faster than the truth. This is not a metaphor.
It is a measured fact. In 2018, an MIT team published a study in Science analysing 126,000 rumour cascades on Twitter between 2006 and 2017 (Vosoughi, Roy & Aral). The result: falsehood spreads farther, faster and deeper than truth. Not because of bots, but because humans are drawn to novelty and to the emotions of surprise and disgust.
Fake news does not spread despite being false. It spreads because of it. The improbable, the scandalous and the outrageous circulate better than the nuanced.
The answer is not censorship. It is education in the mechanisms of disinformation: understanding why we share before understanding what we share.
"Verifying before sharing is not a question of morality. It is a question of cognitive mechanics."
In the same spirit: Information bubbles · Algorithmic conspiracism
Algorithmic conspiracism
The algorithm does not radicalise. It recommends. But the line between the two is thin.
Guillaume Chaslot, a former YouTube engineer, was one of the first to sound the alarm, in 2018-2019, about how the recommendation algorithm works. His observation: to maximise watch time, the algorithm tends to propose increasingly extreme, emotionally charged content, because such content holds attention better.
A study by Ribeiro et al. (2020), published by the ACM under the title Auditing Radicalization Pathways on YouTube, analysed 72 million comments and showed how users gradually migrate from moderate towards extreme content through recommendations, a pattern the researchers call the "radicalisation pipeline".
"The algorithm has no opinion. But it knows how to keep your attention."
In the same spirit: Fake news and disinformation · Yes-man attitude
Deepfakes and trust
When the image stops being proof, what takes its place?
In 2017, the term deepfake did not exist in ordinary vocabulary. By 2025, it had entered every language. These synthetic videos were first a tool of entertainment, then a weapon.
The legal scholars Robert Chesney and Danielle Citron call the resulting phenomenon the "liar's dividend": once visual evidence can be fabricated, any genuine piece of evidence can be dismissed by claiming it was fabricated. An MIT Media Lab study (2024) established that people exposed to deepfakes develop greater mistrust towards authentic content.
"Must we still believe what we see?"
In the same spirit: Fake news and disinformation · Digital surveillance
To situate the approach: A Digital Ethic.