Digital surveillance
They do not read your messages. They do not need to. Your behaviours are enough.
In 2019, Shoshana Zuboff published The Age of Surveillance Capitalism and coined a concept: an economic regime founded no longer on the sale of goods, but on the capture of human behavioural data, sold for purposes of prediction.
Google, Meta and Amazon are not technological companies financed by advertising. They are surveillance companies financing their services through the sale of behavioural predictions. Every click, every pause on an image, every trip is recorded, aggregated, modelled.
"You are not being watched because you are suspect. You are being watched because you are profitable."
In the same spirit: Digital sovereignty · A Digital Ethic
Social scoring and rating
China built the laboratory. The instruments, however, exist everywhere.
The Chinese social credit system has become, in the Western imagination, the dystopian archetype of digital control. The reality is more nuanced, and the system's logics are also found elsewhere.
In our liberal democracies, digital rating mechanisms proliferate. Credit scores, Uber ratings, Airbnb reviews: all are forms of algorithmic rating that affect access to resources. Frank Pasquale speaks of "digital fiefdoms" where decisions with concrete consequences are taken by systems whose criteria escape all democratic control.
"A society that rates its citizens in real time does not punish them. It sorts them."
In the same spirit: Algorithmic discrimination · Digital surveillance
Algorithmic discrimination
An algorithm can be racist without anyone having programmed it to be so.
In 2016, ProPublica examined COMPAS, a tool used in several U.S. states to assess the risk of reoffending. Result: the system underestimated the risk of white defendants reoffending and overestimated it for Black defendants. No one had programmed racism. The algorithm had learned to discriminate because the data reflected decades of discrimination.
The researcher Joy Buolamwini showed that commercial facial-recognition systems misclassified dark-skinned women at error rates of up to 35%, against less than 1% for light-skinned men.
"An algorithm without conscience can do more damage than a malicious human being: it discriminates without fatigue."
In the same spirit: AI and justice · Social scoring and rating
AI and justice
To entrust a judicial decision to an algorithm is to condemn without having to explain why.
Justice rests on a principle: every decision must be reasoned, understandable and contestable. When a decision is strongly influenced by an algorithm, that principle enters into direct tension with the often opaque logic of AI systems.
In Europe, the AI Act (2024) classifies AI uses in the judicial field as "high-risk" and imposes requirements of transparency and human oversight. It is a step forward. But implementation remains gradual.
"An algorithm can calculate risk. But rendering justice is something other than calculation."
In the same spirit: Algorithmic discrimination · Digital surveillance
Digital sovereignty
Whoever controls the data, the servers and the models also controls part of collective thought.
In 2013, Edward Snowden's revelations brought to light what many people had only vaguely suspected: the global digital infrastructure is not neutral. It is situated, physically and politically.
Digital sovereignty names the capacity of a state, an organisation or an individual to master its data, its tools and its technological dependencies. For European states, most digital services still rest on American or Chinese infrastructures.
"It is not the one who speaks who decides. It is the one who controls the channel of distribution."
In the same spirit: Digital surveillance · AI and political propaganda
Military uses of AI
For the first time in history, weapons can decide to kill without human intervention.
On 27 March 2020, in Libya, a Turkish Kargu-2 drone, according to a report of the U.N. panel of experts, tracked and attacked enemy fighters autonomously, without real-time human instruction. If that report is correct, it is the first documented case of an autonomous lethal weapon taking a firing decision without a human pressing a button.
The central ethical question is that of human control. The Geneva Conventions presuppose an identifiable human responsibility for every act of war. When the decision is taken by an algorithm, that responsibility fragments and risks disappearing altogether.
"A weapon that decides on its own to kill does not pose a technical question. It poses a question of civilisation."
In the same spirit: Digital sovereignty · AI and justice
AI and political propaganda
Producing millions of personalised false messages now costs a few dollars and a few seconds.
In 2018, the Cambridge Analytica affair revealed how the data of 87 million Facebook users had been used to target American voters. In less than a decade, what then required major resources has become accessible to almost any organisation.
Generative AI has transformed propaganda in two ways: it has driven its marginal cost close to zero, and it has improved its credibility. Princeton researchers (2023) showed that political messages generated by GPT-4 were judged more persuasive than messages written by humans.
"When each voter receives a reality made to measure, there is no longer public opinion. There are electoral markets."
In the same spirit: Fake news and disinformation · Information bubbles
To situate the approach: A Digital Ethic.