The Physical Symbol System

The concept of a physical symbol system (PSS) is at the heart of what is known as "strong AI". The PSS concept annoys many philosophers (such as Searle) and many humanists who find it degrading to their sense of what it is to be human. The physical symbol system hypothesis was first proposed by Herbert Simon and Allen Newell in a famous paper, "Computer Science as Empirical Inquiry: Symbols and Search", published in 1976.

The Physical Symbol System Hypothesis

In the words of Simon,

The hypothesis [is] that intelligence is the work of symbol systems. Stated a little more formally, the hypothesis is that a physical symbol system ... has the necessary and sufficient means for general intelligent action.

Herbert Simon, The Sciences of the Artificial, 3rd Edition, p. 23.

Simon sees this hypothesis as empirical, that is, subject to scientific testing. The hypothesis can be shown to be sufficient if intelligent symbolic computer programs can be constructed, and it can be shown to be necessary if psychological studies demonstrate that humans think by manipulating symbols (as they appear to do when solving puzzles, for example).

The Physical Symbol System is precisely the sort of thing that Searle sought to oppose with his Chinese Room. For Searle, physical symbol systems, no matter how good they are at manipulating symbols (such as Deep Blue at chess), never understand anything, never think in the human sense.

What is a Physical Symbol System?

Here is Simon's own description.

A physical symbol system holds a set of entities, called symbols. These are physical patterns (e.g., chalk marks on a blackboard) that can occur as components of symbol structures. In the case of computers, a symbol system also possesses a number of simple processes that operate upon symbol structures - processes that create, modify, copy and destroy symbols. A physical symbol system is a machine that, as it moves through time, produces an evolving collection of symbol structures. Symbol structures can, and commonly do, serve as internal representations (e.g., "mental images") of the environment to which the symbol system is seeking to adapt. They allow it to model that environment with greater or less veridicality and in greater or less detail, and consequently to reason about it. Of course, for this capability to be of any use to the symbol system, it must have windows on the world and hands, too. It must have means for acquiring information from the external environment that can be encoded into internal symbols, as well as means for producing symbols that initiate action upon the environment. Thus it must use symbols to designate objects and relations and actions in the world external to the system.

Symbols may also designate processes that the symbol system can interpret and execute. Hence the program that governs the behaviour of a symbol system can be stored, along with other symbol structures, in the system's own memory, and executed when activated.

Symbol systems are called "physical" to remind the reader that they exist in real-world devices, fabricated of glass and metal (computers) or flesh and blood (brains). In the past we have been more accustomed to thinking of the symbol systems of mathematics and logic as abstract and disembodied, leaving out of account the paper and pencil and human minds that were required to actually bring them to life. Computers have transported symbol systems from the platonic heaven of ideas to the empirical world of actual processes carried out by machines or brains, or by the two of them working together.

Herbert Simon, The Sciences of the Artificial, 3rd Edition, pp. 22-23.
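Simon's description can be made concrete with a small sketch. The following is purely illustrative (the class and names are invented for this example, not drawn from Simon): symbols are tokens, symbol structures are lists of tokens held in memory, and the four basic processes create, modify, copy and destroy them. Crucially, a stored "program" is itself just another symbol structure that the system can interpret and execute, which is Simon's point about programs living in the system's own memory.

```python
# Minimal, illustrative sketch of a physical symbol system.
# All names here are invented for the example.

class SymbolSystem:
    """Holds symbol structures and the simple processes that act on them."""

    def __init__(self):
        self.memory = {}  # name -> symbol structure (a list of symbols)

    # The basic processes Simon mentions:
    def create(self, name, symbols):
        self.memory[name] = list(symbols)

    def modify(self, name, index, symbol):
        self.memory[name][index] = symbol

    def copy(self, src, dst):
        self.memory[dst] = list(self.memory[src])

    def destroy(self, name):
        del self.memory[name]

    # A stored program is itself a symbol structure the system interprets.
    def run(self, program_name):
        for instruction in self.memory[program_name]:
            op, *args = instruction
            getattr(self, op)(*args)


s = SymbolSystem()
# The program below is data in memory, just like any other structure.
s.create("program", [
    ("create", "greeting", ["hello", "world"]),
    ("copy", "greeting", "backup"),
    ("modify", "greeting", 1, "there"),
])
s.run("program")
print(s.memory["greeting"])  # ['hello', 'there']
print(s.memory["backup"])    # ['hello', 'world']
```

Note that `run` treats the contents of memory as instructions, so the line between "data" and "program" is only a matter of how a structure is used - the stored-program idea in miniature.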

Why is a PSS offensive to some?

Many folks find it difficult to accept that the human mind is merely a computer essentially equivalent to any other computer. The idea that all thoughts are just the result of symbol manipulation seems demeaning to many.

The PSS model is the core of "strong" AI and as such is the target of many attacks, for example, that by John Searle which you saw in connection with the Turing test.


CYC - encompassing human knowledge

The major practical obstacle facing attempts to create machines with human-like intelligence is their lack of knowledge of the human world. Machines have not grown up as humans have. It is estimated that in growing to adulthood, over a period of close to 20 years, a human absorbs about 50,000 'chunks' of knowledge. This knowledge provides a context for intelligence, the material for intelligence to work on.

Since the mid-1980s, a research project called CYC (the name comes from "encyclopedia") has been under way. The project attempts to build a knowledge base covering a considerable part of everyday human knowledge, in a form a machine can reason with.
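What "knowledge in a form to reason with" means can be suggested by a toy example. The sketch below is a drastic simplification (CYC uses a far richer logic, and these facts and the single rule are invented for illustration): knowledge is stored as symbolic assertions, and a simple inference procedure derives new assertions from old ones.

```python
# Toy sketch of a symbolic knowledge base with inference.
# Illustrative only: the facts and rule are invented, and real CYC
# uses a much richer representation than triples plus one rule.

facts = {
    ("isa", "Socrates", "Human"),
    ("isa", "Human", "Mammal"),
}

def infer(facts):
    """Forward-chain one rule - 'isa' is transitive - to a fixed point."""
    changed = True
    while changed:
        changed = False
        for (_, a, b) in list(facts):
            for (_, c, d) in list(facts):
                if b == c and ("isa", a, d) not in facts:
                    facts.add(("isa", a, d))
                    changed = True
    return facts

infer(facts)
print(("isa", "Socrates", "Mammal") in facts)  # True
```

The point is that the derived fact ("Socrates is a mammal") was never typed in; it follows mechanically from the stored assertions, which is the kind of reasoning a large knowledge base is meant to support at scale.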

Further reading: the CYC project, and an essay by Douglas Lenat.