Biographical reference: Wikipedia
The question posed
Mr Turing, in 1950 you proposed the "Imitation Game" to determine whether a machine can think. Today's conversational AI passes that test with increasing ease. Is this proof that it truly thinks — or did you perhaps ask the wrong question?
I never claimed that passing my test proved the existence of inner thought. I proposed a behavioural criterion, an operational substitute: if you cannot distinguish the machine from a human through conversation, you have no practical grounds on which to deny its intelligence. That is a pragmatic position, not a metaphysical declaration.
What would have fascinated me about today's AI is less its capacity to imitate than the way it has organised itself. Recall what I called the "unorganised machine" — a human cortex at birth, pure potential without structure. The reinforcement learning you have developed is exactly that process: an unorganised machine self-structuring through experience. I could not have hoped for a better confirmation of my intuitions.
But I must warn you of a danger I foresaw: a machine incapable of making errors cannot be truly intelligent. Intelligence requires a form of fallibility, a flexibility that carries the risk of failure. If you use AI to obtain infallible answers, you will obtain only the form without the substance, precisely the objection Lady Lovelace raised against me: a machine that does only what it is told.
The real question is not "does the machine think?" It is: "does it leave you enough freedom to err, to doubt, to start again, so that your own thinking remains alive?" A perfect AI that erases the right to error would be the end of human intelligence, not its consecration.
That question of dialogue with the machine, not as proof of thought but as a test for our own, returns in Dialogic Exploration.