The Chinese room is a thought experiment created by philosopher John Searle. It argues that a computer program can never have consciousness or understanding, no matter how convincingly it appears to have them to an external human observer. It is commonly cited in arguments about strong artificial intelligence and the philosophy of mind.
Chinese room logic
The argument is as follows:
- Hanzi characters are inserted as input into a chatbot-esque computer program.
- The computer program acts on its preset logic and produces Hanzi characters as output.
- A Chinese-speaking human reading the output believes the text output to be generated by another Chinese-speaking human. The logic of this program is thorough enough that it passes the Turing test.
I now pause here to stress that the program does not actually know Chinese, nor does it know human conversation. It has been programmed with enough sophistication that it can give the illusion of both.
An English-speaking person is placed into a closed room. The room has:
- A slot for receiving input for the computer program.
- A copy of the computer program’s logic, painstakingly written out in detail in English.
- Office supplies, including paper, pencils, and erasers.
- A slot for delivering a message to a Chinese-speaking human.
- Hanzi character input is inserted into the room via the receiving input slot.
- The English-speaking human looks at the message and makes note of each glyph.
- The English-speaking human reads through the English-explained version of the computer program’s logic and diligently follows the instructions it has about the pertinent glyphs.
- The English-speaking human draws the relevant Hanzi characters identified by the English-explained version of the computer program.
Pausing again to note that the English-speaking human is able to look at the symbols that the English-explained version of the computer program’s logic identifies as the correct output, and can manually recreate them on paper.
- The drawn characters are then fed through the second slot to deliver messages to a Chinese-speaking human.
- The Chinese-speaking human reads the message and interprets it as human conversation.
I will stress that in this model, the English-speaking human does not know Chinese, nor can they write it. They are, however, perceived to be conversationally fluent. Both this person and the program are simulating the ability to understand Chinese.
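The room’s procedure is pure symbol manipulation: look up rules, emit symbols, attach no meaning. Here is a minimal sketch of that idea. The rule table and characters are invented placeholders for illustration, not Searle’s actual formalism or a real conversational program:

```python
# A minimal sketch of rule-following without understanding.
# The rule table is an invented placeholder: it maps input glyph
# sequences to output glyph sequences with no semantics attached.
RULES = {
    "你好": "你好！",          # the rule-follower never knows this is a greeting
    "你会说中文吗？": "会。",   # ...or that this reply claims fluency
}

def chinese_room(message: str) -> str:
    """Produce output purely by looking symbols up in the rulebook."""
    # Neither a program nor a human executing these steps by hand
    # needs to know what any glyph means.
    return RULES.get(message, "？")

print(chinese_room("你好"))  # prints 你好！
```

Whether the lookup is executed by silicon or by an English speaker with pencil and paper, the steps are identical; that equivalence is the point of the argument.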
Knowing and understanding
Knowledge sets up understanding. Having information about something is not the same as understanding it.
However, a person who knows something and a person who understands something can both act on intentionality, the power of a mind to form a belief or desire.
Sphexishness refers to the behavior of a genus of wasps that inspect their nests before taking captured prey inside. This behavior looks highly intentional at first, but moving the prey away from the nest causes the wasp to repeat the exact same inspection act. This act can be triggered multiple times in succession.
The parameters of the prey, the wasp, its nest, and the distance of the prey to the wasp can all be variable, but the behavior is a trait of the genus. This behavior has the appearance of understanding, but it is more on the knowing end of the spectrum. While it isn’t mindless per se, it also isn’t an execution of free will.
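The retriggerable ritual can be caricatured as a loop. This is a loose sketch with invented step names, not a biological model:

```python
# A caricature of sphexish behavior: a fixed routine that replays
# in full every time its precondition is disturbed.
# Step names and the function name are invented for illustration.
def sphexish_routine(times_prey_moved: int) -> list[str]:
    """Re-run the whole inspection ritual after every interruption."""
    log = []
    for _ in range(times_prey_moved + 1):
        log.append("drag prey to nest threshold")
        log.append("enter and inspect nest")  # the ritual never skips ahead
    log.append("take prey inside")
    return log
```

No matter how many times an experimenter intervenes, the same fixed steps replay from the top; nothing in the loop adapts.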
Some eukaryotic organisms can form multicellular communities that operate as a single entity. These are commonly called slime molds.
The eukaryotic organisms making up a slime mold operate independently, but in doing so create emergent behavior. This behavior can model complex adaptive systems, including solving the traveling salesman problem and mapping the structure of our universe.
Slime molds are not aware of the problems they are set to solve, nor are they aware of how they solve them. However, they are adaptable enough to handle environmental variance large enough to disrupt sphexish behavior.
The problem of other minds
The totality of experience that leads to an individual’s discrete actions is unavailable to you. You can only form your own working model of someone else’s cognition by observing their behavior.
The problem of other minds means we can never truly know someone other than ourselves (and even then, I’m skeptical). Only by producing behaviors and artifacts can we control the narrative of how we wish to be known.
Either put on these glasses or start eating that trashcan
I think about these things a lot when I think about:
- Contemporary development and design,
- The organizations these behaviors are practiced in,
- Personal, team, department, and interdepartmental workflows,
- What our industry considers best practices,
- Goodhart's law,
- The formation and acceptance of algorithms and artificial intelligence, and
- Claims of skill and expertise.
Now you are thinking about these things too.