Sunday, June 01, 2014

Chinese Room Argument

The Chinese room argument is a thought experiment first proposed by the US philosopher John Searle in the journal Behavioral and Brain Sciences in 1980, in which many people feel he thoroughly disproved the notion that any computer program could acquire true intelligence. It is one of the best-known and most widely credited counters to claims of artificial intelligence (AI) - that is, to claims that computers do, or at least can (or someday might), think.

It was written to demonstrate a simple point - intelligent behaviour does not equate to intelligence. This doesn't mean AI design is impossible, but it does mean that a behaviour-based model of intelligence is flawed.

Imagine that you are a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." As Searle explains: "Suppose that unknown to you the symbols passed into the room are called 'questions' by the people outside the room, and the symbols you pass back out of the room are called 'answers to the questions'." Just by looking at your answers, nobody can tell that you "don't speak a word of Chinese."
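To make those mechanics concrete, here is a minimal sketch in Python of the room as pure symbol manipulation. The rule book below, with its two invented question-answer pairs, is purely illustrative; Searle's thought experiment assumes a rule set rich enough to handle any question.

RULE_BOOK = {
    # Input symbols -> output symbols. The program never represents what
    # any of these strings *mean*; it only matches their shapes.
    "你叫什么名字？": "我没有名字。",        # "What is your name?" -> "I have no name."
    "你会说中文吗？": "我当然会说中文。",    # "Can you speak Chinese?" -> "Of course I can."
}

def chinese_room(symbols: str) -> str:
    # Look up the reply dictated by the rules, exactly as the person in the
    # room correlates one batch of script with another. No understanding of
    # Chinese is stored, consulted, or produced anywhere in this function.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你会说中文吗？"))  # Looks fluent from the outside.

To the questioner outside, the printed answer is indistinguishable from one given by a fluent speaker, yet the program, like the English speaker in the room, attaches no meaning to a single symbol it handles.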

His point is that you may hand out appropriate, even accurate, answers, and those responses may well meet the expectations of the people asking the questions. However, this does not show that any real understanding has taken place, or that any meaning is attached to the question-and-answer process.

It should be conceded that Searle's argument is effective in showing that certain kinds of machines - even machines that pass the Turing Test - are not necessarily intelligent and do not necessarily "understand" the words that they speak. This is because a computer sitting on a desk with no sensory apparatus and no means of causally interacting with objects in the world will be incapable of understanding a language. Such a machine might be capable of manipulating linguistic symbols, even to the point of producing output that will fool human speakers and thus pass the Turing Test. However, the words produced by such a machine would lack one crucial ingredient: The words would fail to express any meaningful content and thus would fail to be "about" anything.

What's the point?
It doesn't matter how perfectly a computer is designed to simulate the intelligence of a human being, because its behaviour is the result of mindlessly executing instructions, not of understanding. In this case, the means defines the end. You are reading this sentence, and understanding it, without demonstrating behaviour of any kind. A system's behaviour doesn't indicate intelligence or understanding, and a system that behaves intelligently is not necessarily "intelligent."
______________________________________________________________________________________________________________________
Before we work on artificial intelligence, why don't we do something about natural stupidity?
