Debates about artificial intelligence are often saturated with technique. We hear about performance, models, use cases and regulation. More rarely do we hear the first question: what is it doing to us, and what kind of social world is it helping to produce?
If this text insists on asking the question in those terms, it is because it is not enough to ask how to use AI or how to regulate it legally. We first have to ask what it is doing to our ways of thinking, learning, feeling and living together. The digital is not one tool among others. It has become an environment. It reshapes our rhythms, our relationships, our institutions, and even the image we form of ourselves. Major transformations are first visible in bodies before they are formulated in discourse. Before the words arrive, something is already visible among the young and the not-so-young alike: a more diffuse fatigue, a sharper impatience, a growing difficulty in sustaining thought over time, and that habit of looking for the screen before looking for the other person.
This text is neither a nostalgic refusal nor a hymn to progress. It begins from a simple observation: we have entered a mutation that touches the very conditions of thought, autonomy and common life. The question, then, is not whether we should love or hate AI and algorithms, but what we want to preserve in ourselves as we integrate them into our lives.
The diagnosis
The collision of temporalities
The main point of tension lies, it seems to me, in a collision of temporalities. On one side there is biological time: the time of sleep and waking, of attention, of learning through repetition, slowness and maturation. On the other side there is algorithmic time: instantaneity, permanent availability, continuous acceleration, optimisation without rest.
These two temporalities are not simply different. They collide. Yet school, education, the formation of judgement and even interior life all presuppose a relation to time that digital logic tends to weaken. The brain, especially the adolescent brain, does not develop at the rhythm of servers. It needs pauses, prolonged effort, sometimes forgetting, and moments of return to self. By contrast, digital environments tend to neutralise waiting, saturate attention and make silence look suspicious. The result is visible everywhere: sleep debt, fragmented attention, intolerance of boredom, and a growing difficulty in sustaining intellectual effort without assistance. The digital world has fractured our time.
Digital anomie
This collision of rhythms is not merely an individual problem. It belongs to a collective disorder. Durkheim used the term anomie to describe the state of a society whose norms disappear more quickly than new ones can be formed. That is precisely what the digital environment has accelerated: a world in which uses spread before common reference points have had time to take shape. In schools, this anomie appears in very concrete forms: cyberbullying, erosion of empathy, disinformation, a crisis in our relation to reality, the blurring of intimacy, chronic anxiety and a general sense of disorientation. None of this is a secondary accident. It is the structural effect of a massive integration of digital systems into a society that has not yet produced the symbolic, educational and political frameworks corresponding to that mutation.
The Covid crisis accelerated this movement further. It did not create these fragilities; it revealed and amplified them. We introduced powerful technologies, faster than ever, into already fragile lives, and without a compass.
The dispossession of the educational subject
One of the most worrying points is perhaps this: as systems grow in power, we risk losing the habit of exercising some of our own faculties. It becomes tempting not to memorise, not to formulate, not to search, not to judge for oneself, since a machine can propose, calculate, summarise and orient.
The paradox of automation, described long ago, is simple: the more efficient a system becomes, the less human beings maintain the skills required to do without it. What presents itself as an increase in our capacities can become, if we are not careful, a silent atrophy. This dispossession is not only cognitive; it is institutional as well. When algorithmic systems intervene in orientation, evaluation, candidate sorting or the setting of priorities, they weigh on real lives. The more opaque these mechanisms are, the more the autonomy of subjects is reduced without their even being able to name what is escaping them.
Neither technophobic nor technophilic
In the face of this diagnosis, two symmetrical attitudes seem insufficient to me. The first consists in refusing the world as it is, in the illusory hope of restoring a time before AI. The second consists in celebrating every innovation as progress in itself. The first flees reality; the second is no less naive.
We need to hold a more demanding line: to think with the tools of our own time without handing over our freedom to them.
Seven landmarks for a digital ethic
I propose here seven landmarks for a digital ethic capable of protecting us against this digital tidal wave.
1. Vulnerability
Our dignity is not first understood through power, but through vulnerability. We are beings who tire, who are impressionable, who depend on bonds, and who are exposed both to suffering and to influence. A technology that deliberately exploits that vulnerability by capturing attention, organising dependence or exhausting vital rhythms poses a major ethical problem. Dignity is not an abstraction. It is measured by what is concretely inflicted on our lives.
2. Autonomy
Autonomy does not mean the isolation of an individual with no ties. It designates the capacity to govern oneself, to answer for one's actions, and to remain the author of one's choices. An ethic of AI should therefore ask, each time, a simple question: does this technology strengthen or erode the capacity of persons to think for themselves, deliberate, truly consent, and assume responsibility for their decisions?
When a technology durably weakens that capacity, it must be revised.
3. Hybrid solidarity
Durkheim distinguished mechanical solidarity, unity through resemblance in traditional societies, from organic solidarity, unity through complementarity of functions in modern societies. I propose a third stage: hybrid solidarity, characteristic of the digital age. In this new configuration, interdependence transcends the human circle and comes to include technical systems endowed with autonomy. AI and its algorithms become actors in their own right with whom we establish complex relations of interdependence: we depend on them to orient our choices, classify our information, assess our competences and decide our directions. This reality is neither good nor bad in itself. It simply is. The ethical question is this: how can we organise that interdependence so that it serves human emancipation rather than a new form of submission?
4. Digital metacognition
I call digital metacognition the capacity to observe oneself in one's technological uses: to look at what tools do to our attention, our mood, our relationships, our sleep and our relation to knowledge, and then to adjust our practices accordingly.
This is not a moral posture reserved for a few experts. It is a competence to be cultivated. We will never develop a solid digital autonomy so long as we cannot lucidly describe the effects of our own uses.
5. Dialogic exploration
Conversational AI can be used in two poor ways: as a machine that produces in our place, or as a mere dispenser of answers. I believe a third path is possible, and more fruitful: to use it as a space of exploration.
The point is no longer to delegate thought, but to set it in motion. To clarify an intuition, bring a blind spot to light, ask for an objection, test a hypothesis, explore several formulations, look for a point of friction: dialogue with the machine becomes useful when it returns to human thought the responsibility to judge, choose and assume.
From that point of view, a conversation with the machine has value only insofar as it helps our own thinking move forward.
6. Deliberation
The norms that govern our relation to the digital world cannot come only from platforms, experts or states. They must also be built by the communities concerned. We need to reopen spaces of deliberation about our uses, our limits, our refusals and our priorities. In schools, associations, families and institutions, this requires living charters: argued, discussed, revised. Not lists of prohibitions imposed from above, but frameworks of meaning capable of explaining why we accept certain practices and why we refuse others. This principle is also a response to what I call technological liberalism: the ideology that leaves each individual alone in front of digital choices, as if technology were a private matter. It is not. It is a total social fact. It calls for collective responses.
7. Education
The decisive transformation will not be merely technical. It will be educational, or it will not happen at all. It is not enough to train competent users; we must form subjects capable of discernment, restraint and critical spirit.
Digital education should therefore not be limited to the technical learning of tools. It should aim at the capacity to inhabit a technical environment lucidly: to know when to use AI or not, why to do so, and under what conditions. It must become a national priority, understood not as education in the use of tools but as education in digital metacognition, critical thinking and enlightened autonomy. The first forms users; the second forms lucid citizens. The difference is decisive.
What this requires in practice
A digital ethic is worth nothing if it remains purely declarative. It calls for clear political translations.
First, transparency in algorithmic systems that have significant effects on people's lives. When an algorithm helps orient, select, capture attention, evaluate or exclude, its functioning must be explainable. In a democratic society, opacity is not a technical detail; it is a form of power.
Next, firm regulation of the attention economy. Systems designed to capture attention for as long as possible, especially that of children and adolescents, are not neutral. They exploit well-known psychic and temporal fragilities.
Finally, the protection of data must be understood as a question of dignity. What I read, what I seek, what I feel, what I try to understand should not spontaneously become the raw material of a market. Human interiority is not a deposit to be extracted.
Conclusion
We need a form of progress that knows what it must preserve. Not out of nostalgia, but out of responsibility. There are things a society has no right to sacrifice in the name of fluidity, performance or innovation: the long time of formation, the possibility of silence, the thickness of relationships, the capacity to judge, and the dignity of interior life.
Hannah Arendt reminded us that education is first of all an act of protection. We must hear that formula in all its radical force: protecting children from a world that solicits them without measure, but also protecting the world from what it becomes when it forgets to transmit anything other than tools.
A digital ethic is not a luxury. It is a political necessity. Faced with technologies that reconfigure the very conditions of thought, relation and decision, the most important question remains a human one: what do we want to become?
And that question cannot be delegated to an algorithm.