Computer Power and Human Reason

January 27, 2021

Computer Power and Human Reason

by Joseph Weizenbaum

Joseph Weizenbaum is the author of ELIZA, the simple AI program that asks questions like a Rogerian therapist (“Tell me more about your mother…”). This book is meant to rebut the idea that humans essentially ARE computers (“computers made of meat,” as Marvin Minsky once put it).

I’ve read other books that touch on this topic in some way – Technopoly and The Glass Cage come to mind. This one gets the closest to identifying the real problems with the popular view. They are squirrelly and hard to pin down but real. There is a vast force in society, especially in big tech, pushing technical progress as an unalloyed good. Questions about what computers should do don’t compute. And I think I finally realized why these people are so hard to argue with.

Take the example from The Glass Cage (by Nicholas Carr) of an Inuit community that did its own wayfinding for hundreds of years. They knew their country. They knew how to read the land and the stars and figure out where they were. They could make their way anywhere. Then they got GPS. Now, they can still make their way anywhere, but within one generation they lost the wayfinding ability. They must now rely on the computer to get them where they want to go.

It seems like they have lost something. But the GPS is better at wayfinding than any human (in a sense), so why is this really bad? To answer this, we obviously want to construct a logical argument. This is where the case gets slippery. Is GPS dangerous in some way? Too expensive? Are there important things that a human can do that GPS can’t? Not really. I think the explanation is something intuitive. It’s obvious to me that the loss of their wayfinding ability is sad, but if you don’t get it, I’m not sure how to convince you. I might be able to explain it to a person, perhaps by analogy or with a well-chosen metaphor. But I couldn’t explain it logically, to a computer.

Computers can’t intuit. If you think of intuition as a way to make a leap to a correct conclusion faster than logic, fine. But you may think that, given time, you can get to the same conclusion logically. I’m beginning to wonder if that’s true. If intuition can be both correct and illogical, then it represents something humans have that computers can’t. Weizenbaum asks if there are things we know that we can’t tell to a computer. Things we know intuitively fall under that category.

Technical people tend to be more left-brained and logical, and they may discount right-brained, intuitive thinking, but it is just as much a part of humanity.

Computers are good at logic, at left-brain thinking. But what if, as we embrace technology, the things we lose are right-brained – intuition, gut feeling, human connection? By virtue of their right-brainedness, they are not expressible in clear logic. If we abandon metaphor and intuitive leaps in sharing our thoughts, we begin to lose the ability to have thoughts that are by nature inexpressible! All we are able to say is, “Something is lost.”

Cue Elon Musk bemoaning the messiness of verbal communication on the Joe Rogan podcast (May 2020). If only, he says, we could translate human thoughts into a universal computer language. (This, in the context of his Neuralink company.)

But these right-brained thoughts are exactly the ones that CANNOT be expressed in code. It is not a problem of discovering ways to encode them but of recognizing that they exist, are valid, and cannot be shared with a computer.

Chapter 1

Man has been seduced into working for an age of rationality, but fooled into thinking this is the same as logicality (13). Introducing computers into complex human activities may constitute an irreversible commitment (28). By becoming indispensable, the computer solidifies social and political infrastructure that might otherwise have been forced to evolve (31). The computer introduces new ways to live while simultaneously making it nearly impossible to live in the old ways (37). Try getting along without a cell phone or the internet.

Chapter 2

Explains Turing Machines, formal languages. Electronic machines work with information, not power.

Chapter 3

Explains clocks, logic gates, registers, subroutines, assembly language. Shows the power of abstractions. The details of a language will strongly influence how a programmer thinks about the computer. (102)
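To make the abstraction point concrete, here is a toy Python sketch of my own (nothing like it appears in the book): addition assembled from simulated logic gates, then hidden behind the ordinary + operator. The names full_adder and add_from_gates are invented for illustration.

```python
# Toy illustration: the same addition at the logic-gate level and at the
# language level. Each layer hides the details of the layer below it.

def full_adder(a, b, carry):
    """Add three bits using only XOR/AND/OR 'gates'; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add_from_gates(x, y, width=8):
    """Ripple-carry addition assembled bit by bit from full adders."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add_from_gates(23, 42) == 23 + 42  # '+' abstracts away everything above
```

A programmer who only ever sees + will think about the machine very differently from one who works at the gate level, which is the point of the page-102 remark about language shaping thought.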

Chapter 4: The compulsive programmer

Wherever computer centers have become established, ... bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be as riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move. These are computer bums, compulsive programmers. (116)
The ordinary professional programmer addresses himself to the problem to be solved, whereas the compulsive programmer sees the problem mainly as an opportunity to interact with the computer. (116)

Power corrupts. A programmer has omnipotence over his programs; he is corrupted by the compulsion to program. He neglects his own body (disheveled, no shower, poor nutrition) and the real world, obsessively concentrating on the world in which he is all-powerful.

Quoting Dostoevski (122), he compares programming addiction to gambling. The gambler’s systems of superstitions are like the tricks a programmer tries (without understanding) to get something working. Clear the cache, restart the machine, etc.

The compulsive programmer is only interested in computing, so he does not look for answers outside of the computer. I think this may be a core issue. Some problems are not best solved by a computer, but it’s (beyond) fashionable to look for a technological solution to nearly anything, so attempts are always made. And once they are made, we quickly end up with the “irreversible commitment” of chapter 1.

For example (128): Herb Simon claims that man, though complex internally, displays external behavior that is as simple as an ant’s. His evidence comes from giving people puzzles, having them memorize things, process images, etc. All things a computer does well! He does not look at things like grief or profound religious experience. This is safe – there’s no way his hypothesis will be falsified.

It’s like the old joke (127): A drunk is looking for his keys next to a street light. He tells someone he lost them “over there” and gestures off into the darkness. “So why are you looking for them over here?” “Because the light is so much better here!” “Almost the entire enterprise that is modern science and technology is afflicted with the drunkard’s search syndrome…” (130)

Chapter 5

[Side note: I read this book in January, and now it is December. I wrote the above notes earlier, and am only now finishing up. So this may get slipshod.]

A theory is like a map (142). It guides, can suggest new areas of exploration. A street map and a demographic map can both be “correct,” but each suggests how it should be used. A theory “prepares the conceptual categories within which to ask questions and design experiments.”

A model satisfies a theory. But when a theory is expressed in the language of a computer program, the running program IS a model of the theory.

A model captures what is essential to understanding some aspect of reality. You might model how a rock falls without modeling how big of a crater it will make in the ground, for example. Choosing what is essential is an act of human judgement.

A good example from page 151: You can model an object falling as d = a*t^2/2. On the moon, a = moon’s gravitational constant (which is essential). But in practice, the moon’s gravity seems to vary, possibly due to internal concentrations of mass. So, update the model: a is now a subroutine, not a constant. And the modeler can easily end up with a subroutine that fits all observations but is based on a faulty theory. This might be complex… which can make it seem more credible! “A program’s performance, therefore, does not alone constitute an adequate validation of its theory.” (152)
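To make the page-151 example concrete, here is a minimal Python sketch of my own (not Weizenbaum’s code): the lunar value is approximate, and local_gravity is a hypothetical fitted subroutine whose fudge terms are invented for illustration.

```python
# Sketch of the falling-body model: acceleration first as a constant,
# then as a subroutine fitted to observations.

MOON_G = 1.62  # m/s^2, approximate lunar surface gravity

def distance_fallen(t, a=MOON_G):
    """The original model: d = a * t^2 / 2 with a fixed acceleration."""
    return a * t**2 / 2

def local_gravity(lat, lon):
    """Hypothetical fitted stand-in for a position-dependent 'a'.
    The fudge terms are arbitrary; a subroutine like this could match every
    observation while resting on a faulty theory of why gravity varies."""
    return MOON_G + 0.01 * (lat % 3) - 0.005 * (lon % 7)

def distance_fallen_fitted(t, lat, lon):
    """Same law, but the 'constant' is now computed by a subroutine."""
    return local_gravity(lat, lon) * t**2 / 2
```

The fitted version may reproduce every measurement and still explain nothing true about the moon’s interior, which is exactly the page-152 warning that a program’s performance does not validate its theory.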

Theories in the social sciences can be empty, shallow, “mainly consisting of insightful remarks and heuristic slogans.” (153)

Chapter 6

Metaphors are useful because they encourage transferring insights from one context to another. But they can also lead to gross popular oversimplification. E.g., “man is a computer.” This implies we only really understand something when we can write a program that imitates it perfectly. Men then seek answers from the computer alone.

AI research has three modes:

  • Simulation mode: imitate nature
  • Performance mode: achieve the results of nature, but by other means
  • Theory mode: purely abstract

Simulation mode won’t deliver the fastest, most efficient results, but it may lead to insight about humans. Most AI researchers are concerned more with performance. Newell and Simon: a program and a human are both “information processing systems” (169). They developed a General Problem Solver (GPS), but the “man as computer” metaphor means they consider a psychiatric patient to be a “problem” for the GPS to try to solve.

Chapter 7

AI programs are analogous to machines built by 17th-century artisans. (196) The fact that you can build a working clock or cannon does not imply that you have a general theoretical understanding of the universe. ELIZA does not put you on track to comprehend the human mind. “Even the most clever clock builders of the 17th century would never have discovered Newton’s Laws… by building fancier clocks.” (197)

Two questions: 1) Can all linguistic understanding be formalized? 2) Are there ideas that no machine will ever understand? If (1) is NO, then (2) is YES.

If you have a theory that (1) is YES, but you only allow computer-represented concepts as legitimate, then you have circular reasoning! (200)

Chapter 8

If you build a machine that learns to understand language like a baby does, you still have no theory of language understanding. (203)

IQ treats intelligence as independent of creativity, intuition, wisdom, etc. Intelligence is a meaningless concept apart from context. Some knowledge is kinesthetic. Some things humans know by virtue of having a human body. Seems like having a human body and being raised by humans is the main difference between us and robots? (210)

Left brain -> logic and language, which could be symbol manipulation similar to a computer’s. But right brain -> metaphor, intuitive leaps, unexpected connections.

The part of the mind that communicates rationally also observes and mediates for our unconscious mind. (222)

An intelligence which intuits truths that cannot be proven logically is “beyond the abilities of a computer to simulate.”

If you are raised in Japanese society, you will know the many unwritten rules of that society. But from the outside, you cannot know them. Similarly, a computer can’t know all that is unwritten about being human.

It feels like he finally makes his argument in this chapter. But he gives away too much: on page 213 he seems to say that maybe hundreds of years from now, these computing problems will be solved. If so, his whole argument that a computer CAN’T know what it is to be human falls apart.

Chapter 9

As programs grow, they can become so complex that no one fully understands their behavior. They become “immune to change” (236). People rely on them and can’t risk breaking them.

Historical data not in a “standard form” is ignored because of the difficulty involved in reading it into a computer. Whole swaths of data are jettisoned from scientific study for this reason.

The technological messiahs find it impossible to trust the human mind – B.F. Skinner refers to “the disastrous results of common sense” (245) – and are compelled to build “trustworthy” computers.

This book helped me reach a thought that was difficult to arrive at. I will try to state it here:

Sometimes it is clear there is a problem, but the argument falls apart when you try to articulate it. But if the thing computers will never understand is the intuitive/metaphorical side of human thinking, perhaps it makes sense that the argument can’t be laid out in plain language. Plain enough language could be turned into code, and the point here is that some aspects of human thought can’t be encoded. (This helps explain why this thought was hard to arrive at, also.)

The rhetoric of the Technopoly urges “instrumental reasoning,” not authentic human rationality. (253)

Chapter 10

Scientists prattle on about “knowledge for its own sake” but detach science and knowledge from any contact with the real world. (265) Scientific knowledge is validated by the display of technological wonders. Nature – man included – is simply fodder for molding and manipulating. Thus even man (well, perhaps not ME, but certainly YOU) is just an object for science to be done to. Was science made for man, or man for science? Science gives power to men but, by making them objects, removes any justification for guiding science by authentic human standards, ethics, etc. Dehumanization of man. Sidesteps giving real meaning to human language.

Scientists objected to the Vietnam war on moral grounds but felt they would lose all credibility if they said so. They then presented an alternative to bombing: a fence of electronic barriers and explosives called the McNamara Line. This was implemented, complete with bombs designed to blow the feet off of porters crossing the line. When you restrict yourself to technical questions, you will only get technical solutions. This “intellectual self-mutilation” contributes to the feeling of powerlessness, of being railroaded along.

Conclusion: Unlike machines, man benefits from examining his inner life, something that cannot be achieved by “instrumental reasoning” alone. Strive to do this, to be a whole person, to model such self-work for others. Resist the zeitgeist (even in 1976 this was the zeitgeist! How much more now!) to sideline moral objections in favor of technical ones. Have courage.