On the other side of the Chinese Room
Searle writes in his first description of the argument: “Suppose that I'm locked in a room and … that I know no Chinese, either written or spoken”. He further supposes that he has a set of rules in English that “enable me to correlate one set of formal symbols with another set of formal symbols”, that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either. —Wikipedia [1]
For the last 20 years I have been frustrated almost to the point of outrage with people who took the “Weak AI” position with regard to the “Chinese room argument”. The Weak AI position, that a machine with behavior indistinguishable from a person's is not necessarily aware, had always seemed like speciesist hairsplitting and closed-mindedness of the worst type. I am not alone in this; a lot of folks like me feel as if the Weak AI camp is willfully missing the point and being obtuse. To many of the folks in the Weak AI camp, however, it seems like the Strong AI crowd is willfully missing the point as well, which is often a sign that both sides have deep underlying assumptions going unseen.
One of my close friends, a strident “Weak AI” guy, very patiently stuck with me long enough for me to see what the other side is getting at [2]. I'm going to try to share my newfound ability to see across the Chinese room argument, not in the hope of changing anyone's mind, but in the hope of persuading readers that the other side is not crazy or stupid. I'm also going to put forward a hypothetical developmental theory of “why they think that way”.
Most arguments in this sphere use “thinking”, “feeling”, “understanding”, “consciousness”, and “awareness” interchangeably [3]. This is largely for the same reason the argument exists in the first place: we are talking about the ineffable experience of being, which cannot be shared or compared directly. I specifically am not using awareness to refer to the act of integrating new information and reacting appropriately. A thermostat, arguably, does that [4]. When someone talks about awareness in the context of the Chinese room, they mean something closer to “feels meaningful emotions”, or perhaps even more accurately, “experiences that thing that is experienced during mindfulness meditation” [5].
The origin of the disagreement, I suspect, stems from a foundational personality characteristic, a quirk of personal history best summed up as: “How did you stop being a solipsist?” The basic position of a solipsist is “The only person I think has consciousness is me”. Few adults publicly hold this position. Borrowing from the ever-insightful Venkat Rao, I'm going to divide the human population into two camps, based on how they answer this question: “Dogs” and “Cats”.
The “Cat” group is deeply fascinated with the non-human world; they largely see humans as an arbitrary manifestation of it. At some point they do need to model and interact with other people, though, so:
They just use lazy mental models for the species-society they find themselves in: projecting themselves onto every other being they relate to, rather than obsessing over distinctions. They only devote as much brain power to social thinking as is necessary to get what they want. The rest of their attention is free to look, with characteristic curiosity, at the rest of the universe. [6] —Rao
This leads to accepting the Strong AI argument in two core ways. First, the “lazy” mental model has some threshold of humanness, and once it is crossed the cat-person just starts projecting themselves onto the entity, awareness and all. Second, they already understand the universe in basically non-human terms, with humans just being an interesting manifestation. From that perspective, adding another similar manifestation is incredibly reasonable.
When presented with the Chinese room argument, cat people immediately try to understand why the other side is coming to a different conclusion, and zero in on what looks like an easy and familiar mistake. They assume that the presenter of the Chinese room is committing the reification fallacy [7] and respond by trying to point out “the man doesn't understand Chinese, the box understands Chinese”. The claim that the man should understand Chinese is akin to the claim that a corpse should have awareness. The Strong AI crowd locates awareness in “the running software” [8]. This is not, however, the argument that underpins the Chinese room.
For those readers who hold the Weak AI position, remember that with the Turing Test [9], you basically offered the cat people the opportunity to use any non-physical test to determine whether awareness is present. Further, awareness has never shown any indication of being physically locatable, so lack of physical access seems likely to be irrelevant. It sounds pretty reasonable to develop tests based on behavioral criteria, and to claim that something that passes every test you can come up with has the trait you're testing for. If you were arguing over the existence of a local ninja unicorn that only comes out when no one is around, mysterious fresh hoof-prints would be amongst the better bits of evidence [10].
The “Dog” group, on the other hand, is deeply fascinated with people. “That's what they pay attention to as they mature: other individuals. People to be like, people to avoid being like.” —Rao
Their model of other people is neither lazy nor simplistic. They do not simply use a minimally modified version of themselves as a template for everyone. They see a wondrous array of different humans, each a beautiful and unique person [11]. They also, reasonably, see that each of them is remarkably similar to the others in some ways, that they form a ‘set’.
The dog-person argument for Weak AI goes approximately like this: “I think that humans have awareness because they seem to be of a set with a number of shared and novel traits. I'm the only thing that definitely has awareness, and I am in the human set, so I'm willing to assume the trait is shared like the other novel ones. By emulating specific traits of humans, you are claiming that you demonstrate awareness; however, we don't really have any idea which aspect of being human is causal of awareness.”
Passing a Turing Test is normally a reasonable heuristic, but it ceases to be one when it becomes the target of engineering (essentially Goodhart's Law [12]). For example, the heuristic “things that bleed are probably alive” doesn't mean that if we build a statue that bleeds, we should think it's alive. At its strongest, I think this is what the Chinese room argument is trying to convey. To return to the ninja unicorn example, the dog-people are saying “hoof-prints are good evidence of local ninja unicorns, but not if you yourself are walking around with a hoof-print maker and laying them down”.
The vehemence and intensity on both sides of these arguments likely come from compassion, and from deeply intense fear.
The Cat-folks see the same pattern of small-minded anthropocentrism that has led to so much brutality in the past. The arbitrary denial of personhood seems akin to past justifications for slavery and extermination. On a more triggering level, this pattern likely fits the imagined justification for any childhood bullying cat-people suffered in their youth [13]. For the horrors of this view taken to their extreme, you can watch the movie “A.I.” by Steven Spielberg [14] [15].
The Dog-folks see dehumanization and industrialization carried to an extreme conclusion. Most people having this conversation can already feel the pinch of automation, or at least see it in others. Where once they lived in a society of people, more and more they are now presented only with machines [16]. For the horrors of this view taken to their extreme, you can read the absolutely terrifying book “Blindsight” by Peter Watts [17].
For my part, I have changed my view some as a result of this adventure in expanding observation. I have weakened my position to: “if something passes the Turing Test, and was not specifically designed to do so, it should be treated as a person” [18].
- 1. https://en.wikipedia.org/wiki/Chinese_room
- 2. Thanks Noah!
- 3. I think exchanging these words is often done quite sloppily.
- 4. The question of whether a thermostat might have awareness is actually pretty interesting, but it is outside the scope of this post.
- 5. https://en.wikipedia.org/wiki/Mindfulness
- 6. http://www.ribbonfarm.com/2009/08/06/on-seeing-like-a-cat/
- 7. https://en.wikipedia.org/wiki/Reification_(fallacy)
- 8. For the record, I do think a lot of folks who casually take the Weak AI position are actually committing the reification fallacy. Some of them are still looking for the homunculus.
- 9. https://en.wikipedia.org/wiki/Turing_test
- 10. In this ninja unicorn example, we are going to assume that you have each personally but independently met one ninja unicorn.
- 11. http://www.meltingasphalt.com/personhood-a-game-for-two-or-more-players/
- 12. https://en.wikipedia.org/wiki/Goodhart%27s_law
- 13. Imagine for a moment that your model for all other people is yourself. Then imagine you're trying to figure out why someone is being mean to you… clearly, in your model, they must also imagine all other people as themselves, so they must be failing to imagine YOU as a person.
- 14. IMDB: http://www.imdb.com/title/tt0212720/
- 15. Brutally demonstrated in this clip https://www.youtube.com/watch?v=ZMbAmqD_tn0
- 16. and people scrambling to get “above the API”: http://www.mickeylin.com/musings/2015/2/21/api-job
- 17. https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)
- 18. This opens up a mean loophole where one can imagine people designing AIs to pass the Turing Test as a way to deprive them of rights, then adding in other functions “secondarily”. I do think that this is an easily avoidable, far-future hypothetical problem.