Ahmed Messaoudi · Essay

Dialogic Exploration

Using dialogue with AI to set one’s own thinking in motion.

Thinking with AI · 12 min read

Introduction

What are we doing when we converse with a machine that answers back?

The problem

The arrival of conversational artificial intelligence in educational practice raises a simple yet radical question: what are we doing when we enter into dialogue with a machine that answers?

The usual responses oscillate between two poles. On one side stands technological enthusiasm, which sees AI as an assistant capable of automating certain cognitive tasks. On the other stands pedagogical anxiety, which fears a delegation of thought and a weakening of intellectual autonomy.

Yet these two positions share the same presupposition: they both treat AI as a tool, something one adopts or refuses, something one masters or that escapes one’s control. But that category of tool may not adequately describe what really happens in interaction with a conversational system.

A hypothesis

This text proposes another reading. Conversational AI systems are not merely tools that execute. They are frameworks for exploration: exploration of one’s own ideas, the limits of one’s reasoning, unexpected perspectives, territories of knowledge, and possible lines of action.

We call dialogic exploration the practice of using dialogue with an algorithmic system not in order to obtain answers, but to set one’s own thinking in motion.

It names a deliberate use of dialogue as a method of discovery: discovery of what one thinks, of what one does not know, of what one might think differently.

Dialogic exploration differs clearly from three common uses: simple consultation, which seeks precise information and stops at the answer obtained; delegation, which hands over to the machine the production of a text, a calculation or a decision; and ordinary conversation, which exchanges for the pleasure of exchanging, without a structured intellectual intention.

An intellectual lineage

This proposal belongs to a tradition that understands thought as dialogue. From Socratic maieutics to Vygotsky’s work on mediated action, the idea that thinking is formed in exchange, with others, with tools, with signs, is not new.

What is new is the permanent availability of an algorithmic interlocutor capable of reformulating, questioning, contradicting and proposing. That availability alters the conditions of intellectual exploration. It does not replace thinking; it changes its ecology.

Vygotsky showed that thought first develops between individuals before becoming internalised. Conversational systems open an interesting prospect here: they constitute an intermediate space in which thought can be externalised, confronted and reformulated before returning to its author enriched by the exchange.

Bernard Stiegler reminded us that every technique is a pharmakon, both remedy and poison. Conversational AI is no exception: it can amplify thought just as it can atrophy it, depending on the posture from which one enters into it.

Part I

The conceptual frame: the dialogic rosette

To speak of dialogic exploration is not enough. We still need a way of distinguishing the various forms that this exploration can take. For dialogue with a conversational system is not a uniform practice: depending on what we ask of it, depending on the posture we adopt, the dialogue produces very different effects on thinking.

The model of the dialogic rosette offers a map of the main cognitive functions that dialogue can activate, by organising them around a centre: human thought. To place human thought at the centre means that the conversational system is never the author of the reasoning; it is its interlocutor, mirror, critic or amplifier. Intellectual responsibility remains on the side of the person who is conversing.

The rosette organises seven functions. These functions are not rigid categories; they overlap, combine and succeed one another within a single conversation. But naming them makes them recognisable, and therefore available for deliberate use.

The mirror · Reflexivity
Friction · Critique
The prism · Perspectives
The co-pilot · Action
The teacher · Knowledge
The co-author · Creation
The arbiter · Judgement

The dialogic rosette: seven functions around human thought

1. The mirror, reflexivity

The conversational system is asked to reformulate what has just been expressed, to make visible the implicit structure of a line of reasoning, to make explicit the presuppositions of an idea. Asking AI, “Rephrase this idea and explain what it presupposes,” is not delegation: it is an exercise in reflexivity. The reformulation becomes an object on which thought can work, correct and refine.

2. Friction, critique

The system is invited to adopt a critical posture: to challenge an argument, identify its weaknesses, propose an objection, play the devil’s advocate. This function may be the most precious, and the least intuitive. We spontaneously expect AI to validate, complete and approve. Asking it to contradict us requires a shift in posture. Dialogic friction is not an attack; it is a service rendered to thought.

3. The prism, exploration of perspectives

The system is invited to break down a question from several disciplinary, cultural, historical or theoretical angles. The prism does not dissolve thought into a plurality of equivalent points of view; it prepares thought to choose with greater awareness.

4. The co-pilot, exploration of possible actions

The system is invited to accompany a process of decision or planning: identifying options, evaluating scenarios, structuring an action plan. The co-pilot does not decide in one’s place; it explores the field of possibilities alongside the person who decides. Responsibility for the choice remains entirely human.

5. The teacher, exploration of knowledge

The system is invited to explain, contextualise, connect concepts and make unfamiliar territories of knowledge accessible. Exploring knowledge through dialogue differs from a search-engine query: it allows one to adjust the level, ask follow-up questions and connect what one is learning to what one already knows.

6. The co-author, creative exploration

The system is invited to take part in a creative process: proposing formulations, exploring forms, generating variations. What matters is this: the co-author proposes, suggests and explores, but it is the human author who chooses, validates and assumes responsibility. Creation remains an act of responsibility that the system cannot take over.

7. The arbiter, exploration of what is most just

The system is invited to help evaluate, weigh arguments, formulate a synthesis and distinguish the stronger from the weaker within a set of positions. The dialogic arbiter is an aid to judgement, not a substitute for it. It helps us see more clearly; the decision remains human.

· · ·

The dialogic rosette is not a programme to be followed linearly. It is a compass. It allows us to name what we are doing when we dialogue with AI, to choose deliberately one posture rather than another, and to vary the angles of exploration according to need. It makes the space of possibilities visible, so that each person can navigate it consciously.
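The rosette is a compass, not a protocol. Still, for a reader who converses with a chat model and wants to choose a posture deliberately, its seven functions can be kept at hand as reusable prompt stems. The sketch below is purely illustrative: the stems paraphrase prompts quoted in this essay where possible, and the names and structure are mine, not part of the model itself.

```python
# Illustrative only: the rosette as a set of reusable prompt stems.
# Two stems ("mirror", "friction") echo prompts quoted in this essay;
# the others are my own paraphrases of the functions described above.

ROSETTE = {
    "mirror":    "Rephrase this idea and explain what it presupposes: {idea}",
    "friction":  "Give me the five strongest arguments against this idea: {idea}",
    "prism":     "Examine this question from three different disciplinary angles: {idea}",
    "co-pilot":  "Lay out the options and trade-offs for this decision, without deciding for me: {idea}",
    "teacher":   "Explain this at my level, then let me ask follow-up questions: {idea}",
    "co-author": "Propose three variations on this draft; I will choose and rework one: {idea}",
    "arbiter":   "Weigh the arguments on each side and say which seem stronger, and why: {idea}",
}

def prompt_for(function: str, idea: str) -> str:
    """Fill the chosen rosette function's stem with one's own idea."""
    return ROSETTE[function].format(idea=idea)

print(prompt_for("friction", "homework should be optional"))
```

The point of such a list is not automation but deliberateness: naming the function before prompting is what turns an exchange into dialogic exploration rather than simple consultation.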

Part II

A taxonomy of practices: the seven families of use

Dialogic exploration cannot be reduced to a general posture. It unfolds in concrete, recognisable practices that can be named, taught and developed. This taxonomy does not dictate what must be done; it makes visible what can be done.

Family 01

Knowing oneself

AI as mirror

This is the most fundamental family, and paradoxically the least intuitive. We do not naturally think of using AI to understand ourselves better. In all these uses, AI is not the source of the answer: you are. AI is the framework that allows you to see yourself from the outside.

The recursive loop

You do not know how to formulate what you want. So you ask AI to help you formulate it. Through successive back-and-forths, your request takes shape. What is remarkable is that the work of formulation is already part of the thinking itself.

The principle folds back on itself: the tool is used in order to learn how to use the tool more intelligently.

The raw mirror

Your thought is confused, incomplete, perhaps contradictory. You ask AI: “What am I trying to say?” It reformulates. And often you reply: “No, not quite...”, and that very “no” brings precision. Disagreement with the mirror can be as valuable as agreement with it.

The revealer of values

You describe a situation, a choice, a discomfort. Then you ask AI: “What values or beliefs seem to guide what I am saying?” AI does not tell you what to decide; it tells you who you appear to be within the decision.

The translator across levels

You ask AI to explain something at different levels of complexity. The real use is to locate yourself: the point at which you lose the thread is the point at which your genuine zone of ignorance begins.

The interlocutor

You describe an emotion or a situation and, instead of asking for advice, you ask for questions you have not yet asked yourself. It is a form of inner clarification, without AI pretending to be a therapist.

Family 02

Testing one’s ideas

AI as friction

This is the family of sparring and resistance: the AI that does not tell us what we want to hear. An idea that has never been tested is a fragile idea. Dialogic friction is not an attack; it is a service rendered to thought.

The sparring partner

You ask AI to give you the strongest counterarguments to your idea. Not so that you abandon it, but so that you can strengthen it by identifying weak points before someone else does.

“Give me the five strongest arguments against this idea.” You do not give up; you arrive better prepared.

The targeted devil’s advocate

More surgical than ordinary sparring. You target a specific point that already feels weak in your reasoning. AI identifies precisely the crack you sensed without being able to name it.

Socratic questioning

AI asks you questions that put your reasoning to the test. It is gentler than sparring, but sometimes more unsettling, because a good question reveals an assumption that had never yet been made conscious.

“Ask me Socratic questions about this decision. Do not give me advice, only questions.”

The stress test

You ask AI to imagine all the scenarios in which your plan could fail, not to discourage you, but to identify fragile points before you begin.

The benevolent dissenter

You ask AI to point out inconsistencies, blind spots and untested assumptions in your reasoning. Like an intelligent friend who cares for you but does not lie to you.

Family 03

Seeing otherwise

AI as prism

This is the family of perspective. Where the first two families turn you back towards yourself, this one opens you outward, towards angles you might never have thought to explore alone. AI becomes a kind of intellectual kaleidoscope.

The alter ego

You ask AI to embody a precise role in order to obtain a point of view that would not come naturally to you. To read a text “as a cynical journalist” and then “as an enthusiastic beginner” gives you a fuller map of your own production.

Future scenarios

You ask AI to imagine several possible futures arising from one decision. The intermediate scenarios are often the most useful: neither dream nor nightmare, but the genuine complexity of the path.

The change of scale

You ask AI to zoom in or zoom out on your problem. Sometimes the answer is that the detail on which you are stuck simply does not matter as much as you thought.

The translator of worlds

You ask AI how your problem would be approached in a completely different domain.

How would an orchestra conductor solve this problem? And a sports coach? Each metaphor reveals a different lever for action.

Reversal

You take a hypothesis that everyone considers true and ask AI to reverse it completely. The exercise reveals spaces that had remained unexplored.

Family 04

Acting and deciding

AI as co-pilot

This is the most concrete family. We leave pure reflection and enter action. But the co-pilot does not decide in your place: it explores the field of possibilities alongside the person who decides.

The action plan

You ask AI to break down an objective into concrete, ordered steps. It plays the role of the experienced project lead you do not happen to have at hand.

Assisted decision-making

You ask AI to help structure a difficult decision, not to decide for you, but to make the choice more legible by building a grid of criteria weighted according to your values.

The rehearsal partner

You ask AI to play the role of your interlocutor so that you can rehearse before an interview, a negotiation or a difficult conversation.

“Play the role of a sceptical and hostile investor. What hard questions might you put to me?”

The first-draft writer

You freeze before the blank page. You ask AI for a rough first draft, not in order to copy it, but to have something to react to. The first draft becomes your starting point.

The silent project manager

You ask AI to identify forgotten risks, stakeholders who have not been consulted, and steps that were skipped. It spots blind spots that the enthusiasm of the outset often causes us to neglect.

Family 05

Learning

AI as personalised teacher

This is the family that turns AI into a made-to-measure learning tool. Unlike a book or a video, AI adapts to you: your level, your rhythm, your way of understanding. It is a kind of democratisation of one-to-one tutoring.

Maieutics

You ask AI not to explain, but to guide you towards understanding by yourself. You reach the definition through your own reasoning; it then becomes truly yours.

The adaptive teacher

You describe your level and your way of learning, and AI adjusts its teaching continuously. You understand better because the explanation is made for you, not for a generic audience.

The tailored analogy

When a concept resists, you ask AI to explain it through an analogy drawn from your own world.

“I am a baker. Explain an algorithm to me through a bakery analogy.” “An algorithm is your bread recipe.”

Pedagogical role-play

You ask AI to build a simulation around the concept you want to master. What you have experienced, even fictitiously, tends to stay with you more deeply than what you have only read.

The understanding checker

You explain to AI what you believe you have understood, and it identifies what is accurate, incomplete or misunderstood. It is often far more efficient than rereading everything from the start.

Family 06

Creating

AI as co-author

AI does not create in your place; it creates with you, like a creative partner available at any hour, without ego and without fatigue. The creation remains yours; AI is simply the workshop in which you are working.

Augmented brainstorming

You ask AI to generate ideas in quantity, and you choose, combine and transform. Out of twenty ideas, perhaps eighteen will not suit you. But the two that remain might have taken you hours to find alone.

The creative constraint

Creativity often flourishes more under constraint than under total freedom. You ask AI to impose unusual rules of the game, and those constraints force your mind to leave its habitual tracks.

Remix

You take something that already exists and ask AI to transform it into another register. Often the remix reveals what the original was really trying to say.

The co-writer

You write something and then stall. You ask AI to suggest a direction or a variation. Not to replace your voice, but to relaunch it.

The constructive critic

You ask AI to assess honestly what you have produced. AI reads like an attentive and demanding reader, something those close to us rarely manage, out of kindness.

Family 07

Choosing and arbitrating

AI as impartial judge

Sometimes we need a third party with no stake in the situation, no affect, no loyalty. AI has no friend whose feelings it must spare, no career to protect. It offers a form of intellectual justice that human beings often struggle to offer one another.

The arbiter between two options

You present both options to AI and ask what logical reasoning suggests, independently of your emotions. The answer does not decide for you, but it separates what belongs to reason from what belongs to feeling.

“If you remove fear and emotional attachment, what does pure logic say?”

The mediator

You present two conflicting points of view as faithfully as possible and ask AI to identify what is legitimate in each. AI often shows that both are right on different aspects, and that recognition itself can defuse the conflict.

The impartial evaluator

You want an honest assessment, neither the polite encouragement of those close to us nor the severity of a hostile judge. A fair evaluation, grounded in clear criteria. The grade matters little; it is the justification that helps one improve.

The trusted third party

In certain delicate situations, one needs to express something to someone who will never be involved, will not judge, and will repeat nothing. AI plays that role of neutral confidant, not to give advice, but to give thought a place to be set down.

Conclusion

Towards dialogic autonomy

These seven families are not a closed list. They are an invitation to explore one’s own way of dialoguing with AI. Each person, according to temperament, needs and habits of thought, will find some families more natural than others.

What runs through all these practices is the same conviction: dialogue with a conversational system can be far more than a search for information or a delegation of tasks. It can become a method of thought, a way of setting in motion what one knows, what one feels, what one has not yet managed to formulate.

The dialogic rosette and the taxonomy of families are two ways into that space. The rosette offers a map of functions, of what dialogue can do for thought. The taxonomy offers a map of practices, of how to activate those functions concretely. Together they form a framework for what we might call dialogic autonomy: the capacity to use dialogue as a tool for intellectual development without ever turning it into a substitute for the responsibility to think.

The challenge is not to use AI as much as possible. It is to use it deliberately, consciously, and in ways oriented towards the growth of one’s own thought. To learn to think with machines without ever ceasing to think for oneself.

For the broader ethical frame of that method, see A Digital Ethic. For a fictional inquiry into judgement, plausibility and machine-assisted writing, see Sherlock Holmes Faces AI.

Further reading

Mediation through tools

Vygotsky, L., Thought and Language

Rabardel, P., Les hommes et les technologies

Technique as pharmakon

Stiegler, B., Prendre soin de la jeunesse et des générations

Simondon, G., Du mode d’existence des objets techniques

Dialogue as a pedagogical method

Freire, P., Pedagogy of the Oppressed

Dewey, J., Logic: The Theory of Inquiry

Burbules, N., Dialogue in Teaching

AI in education

Luckin, R., Machine Learning and Human Intelligence

Holmes, W., Bialik, M. & Fadel, C., Artificial Intelligence in Education

By the same author

Messaoudi, A., Réinventer l’école à l’ère de l’intelligence artificielle, L’Harmattan, 2025

Context

This text continues a line of work pursued for nearly thirty years within the French state education system, first as a senior education adviser, then as a headteacher in lower and upper secondary schools.

Dialogic exploration extends the reflection begun in:

« Réinventer l’école à l’ère de l’intelligence artificielle »
Ahmed Messaoudi, L’Harmattan, 2025

That book sets out the institutional and critical stakes of digital technology at school. The article above proposes a concrete method that each reader can adapt to their own needs.

Consult the book →

Read next

Thinking Against the Algorithm →

To situate the approach: A Digital Ethic.