Freddie DeBoer sums up on his Substack what seems to be the crux of the A.I. question for non-philosophers:
“Decades ago, a computer scientist named Terry Winograd pointed out that there’s no such thing as a system that can truly master language without a theory of the world. That is to say, as the science of meaning, semantics cannot be shorn from the world that produces meaning; to understand and speak effectively words must be understood and for words to be understood they must be compared to a universe that is apprehended with something like a conscious mind. You can see this by looking at the relationship between coindexing, the process of assigning pronouns to the nouns they refer to, and parsing natural language. A commenter on this newsletter proposed a good example:
The ball broke the table because it was made of concrete.
The ball broke the table because it was made of cardboard.
These two sentences are grammatically identical and differ only by the material specified. And yet 99 out of 100 human beings will say that, in the first sentence, “it” refers to the ball, while in the second, “it” refers to the table. Why? Because concrete tables don't break if you drop balls on them, and balls don’t break tables if they (the balls) are made out of cardboard. In other words, we can coindex these pronouns because we have a theory of the world - we have a sense of how the universe functions that informs our linguistic parsing. And this, fundamentally, is a key difference between human intelligence and a large language model. ChatGPT might get the coindexing right for any given set of sentences, depending on what response its model finds more quantitatively probable. But it won’t do so consistently, and even if it does, it’s not doing so because it has a mechanistic, cause-and-effect model of the world the way that you and I do. Instead, it’s using its vast data sets and complex models to generate a statistical association between terms and produce a probabilistic response to a question. Fundamentally, it’s driven by the distributional hypothesis, the notion that understanding can be derived from the proximal relationships between words in use. It does so by taking advantage of unfathomably vast data sets and parameters and guardrails of immense complexity. But at its root, like all large language models ChatGPT is making educated inferences about what a correct answer might be based on its training data. It isn’t looking at how the world works and coming to a conclusion. Indeed, there is no place within ChatGPT where that “looking” could occur or where a conclusion would arise. It can’t know.
(…)
When someone says that ChatGPT “knows” what peanut butter is, that’s fine as long as it’s understood that this knowing is a matter of observing “peanut butter”’s proximal relationship with jelly, bread, celery…. It is not knowing in the sense of possessing an understanding of the universe as a mechanistic system where things have meaning-bearing relationships to each other that do not stem in any way from their expression in the distribution of tokens in natural language. ChatGPT can infer from probabilistic data that peanut butter is something you eat in the production of text, but it does not, cannot know what peanut butter or eating is”.
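The “peanut butter” point is a restatement of the distributional hypothesis, and a toy version of it fits in a few lines of Python. The mini-corpus and window size below are invented for illustration; real models learn dense embeddings over vastly larger corpora rather than raw counts, but the principle of deriving “meaning” from neighbouring words is the same.

```python
# Toy illustration of the distributional hypothesis: characterise a word
# purely by the words that occur near it. Corpus and window size are invented.
from collections import Counter

corpus = [
    "i spread peanut butter and jelly on the bread",
    "she ate celery sticks dipped in peanut butter",
    "peanut butter and jelly sandwiches need soft bread",
]

WINDOW = 2  # neighbours on each side that count as "context"

def context_counts(target: str) -> Counter:
    """Count every word that appears within WINDOW positions of `target`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, word in enumerate(words):
            if word == target:
                neighbours = words[max(0, i - WINDOW): i + WINDOW + 1]
                counts.update(w for w in neighbours if w != target)
    return counts

print(context_counts("butter"))
# Everything the model "knows" about peanut butter is, at bottom, a table of
# counts like these (or a learned compression of them) -- not a claim about
# what peanut butter or eating is.
```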
For more information on this problem, you can listen to Sam Harris’ conversation with Stuart Russell and Gary Marcus.
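The coindexing pair itself is easy to probe directly. Here is a minimal sketch, assuming the `openai` Python client and an illustrative model name (neither appears in DeBoer’s text; any chat-completion API would do). Whatever answer comes back is the output of a probabilistic text generator, so running the pair repeatedly, or at a higher temperature, is a quick way to check how consistent the coindexing really is.

```python
# Minimal sketch: ask a language model to resolve "it" in DeBoer's sentence pair.
# Assumes the `openai` client is installed and OPENAI_API_KEY is set in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

sentences = [
    "The ball broke the table because it was made of concrete.",
    "The ball broke the table because it was made of cardboard.",
]

for sentence in sentences:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f'In the sentence "{sentence}", what does "it" refer to? '
                "Answer with one word: ball or table."
            ),
        }],
        temperature=0,  # as deterministic as the API allows
    )
    print(sentence, "->", response.choices[0].message.content.strip())
```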
The Private Language Problem
From a philosophical point of view, the use of the words “consciousness” and “knowing” in DeBoer’s text is very problematic. The whole consciousness debate has yet to take off, since we don’t have a satisfying definition of what consciousness is or where it is located. If consciousness is interpreted as “being aware of”, as in “I’m conscious this is a bad idea”, then ChatGPT has some form of consciousness: it can follow certain rules and norms, and it will even give you strange moral lessons if you ask it about racism or discrimination.
However, when we talk about “consciousness” as the ability to take an ontological approach to the world, i.e., to hold a theory about what is out there, things get a little more complicated.
The reality is, we don’t know what ChatGPT knows. If we take a more solipsistic approach, we’d have to admit we don’t even know what other people know. This is the point of Ludwig Wittgenstein’s beetle-in-a-box argument. Now, I’m going to copy this from Wikipedia because I don’t want to spend too much time explaining it:
“Wittgenstein invites readers to imagine a community in which the individuals each have a box containing a "beetle". "No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle."
If the "beetle" had a use in the language of these people, it could not be as the name of something – because it is entirely possible that each person had something completely different in their box, or even that the thing in the box constantly changed, or that each box was in fact empty. The content of the box is irrelevant to whatever language game it is used in”.
What does this mean? It means I can’t infer anything about your private, interior space using language. Think about how we teach children: we see them rubbing their belly and we tell them their tummy “hurts”, even though we have no way of knowing what they are actually feeling. We could measure the pain with machines and such, but that still wouldn’t tell us anything about the pain the child is experiencing. We suppose they feel the same thing we do, but there is no way to know.
Talking about “consciousness” and “knowing” is misguided because we can only infer them from external responses and actions. There really is no way of “knowing” if the A.I. knows, just like there is no way of knowing if someone else is conscious.
As human beings, we use external rules and norms to conclude that a child has understood something. The only thing we can do is focus on the kid’s ability to follow a rule: if we want to test his knowledge of math, we’ll ask him to solve a problem. How do we know he has “understood” instead of simply reciting a solution he learned by heart? We can’t. We simply suppose he understood, just as we suppose a kid knows what “red” is if he picks out the right color when we ask him to. Does this mean his experience of the color red is as vivid as yours? That’s impossible to know. We just assume his red (i.e., his “beetle”) is like ours, because he reacts to stimuli in the same way.
The Ghost in the Machine
This is the whole trick with Alan Turing’s test: convincing the interrogator that the computer has an internal life, and is therefore sentient, because it can answer questions like a human would. If it follows rules and norms like a human, then it is indistinguishable from a human. Whether it is sentient, whether it has an internal life or a beetle or whatever, is completely irrelevant. It’s the same with children: I don’t care what your experience of red is, all I care about is that you pick up the right color. You could well be color-blind: as long as you keep picking up the right color, nobody will be able to tell you have no red-picture in your head when you hear the command, “go get the red color”.
To go back to the beginning, the problem with ChatGPT is a technical and mathematical one. We know it needs an ontological approach, i.e., a theory of the world, in order to make fewer mistakes. We don’t know (and never will) whether it is conscious. Hell, I don’t even know if you are conscious: you might not even exist, with this just being a dream.
In this sense, I don’t think it’s that difficult to give A.I.s a system of representation of the world. As soon as you embed ChatGPT in a spatial representation system, its world view will be turbo-charged. What do I mean by this? I’m thinking about an android.
If we give the A.I. limbs and the ability to walk around and gather information in the real world, it will most certainly get over many of the hurdles DeBoer and Harris present on their sites.
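What that could look like in the simplest case: keep an explicit representation of the scene next to the model and feed it into the prompt, so the answer is conditioned on observed properties rather than on text statistics alone. The data structures below are hypothetical; a real android would fill them in from its sensors, not from hard-coded values.

```python
# Hypothetical sketch: condition a language model on an explicit world state.
# A real embodied system would populate these fields from perception.
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    material: str
    position: tuple  # (x, y, z) in metres

def describe(scene):
    """Serialise the observed scene into text the model can condition on."""
    return "\n".join(
        f"{obj.name}: material={obj.material}, position={obj.position}"
        for obj in scene
    )

scene = [
    WorldObject("ball", "concrete", (0.0, 0.0, 1.2)),
    WorldObject("table", "cardboard", (0.0, 0.0, 0.7)),
]

prompt = (
    "Observed scene:\n" + describe(scene) + "\n\n"
    "The ball broke the table because it was made of cardboard. "
    'What does "it" refer to?'
)
print(prompt)  # this grounded prompt is what would be sent to the model
```

None of this settles whether the machine “knows” anything, of course; it only changes what its answers are conditioned on.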
I’ll finish by saying I’m not very concerned by the apocalyptic talk around A.I. Human beings have always thought they were living in the end times, and we’ve always been fearful of technology. Then we adapt…