10° 40' N, 61° 30' W

Wednesday, August 13, 2003

Terminator AI

I have to admit that I was disappointed with Terminator 3, which I saw last Thursday. It was entertaining enough, I guess, and all films, especially action ones, require some suspension of disbelief. I had a huge problem, though, with T3's premise from the get-go. My problem lies not so much with what happens on the actual screen (though there is a bit to quibble with there), but with the one character that does everything but appear--Skynet.

Skynet, for all its omnipotent intelligence, has to be one of the stupidest non-characters ever devised. This is an economics-related blog, and Skynet should be an economist's dream, in that, within its programming, it should behave in correspondence with a positive economist's assumptions of human cognition. In other words, Skynet should behave like people who:

are Bayesian information processors;
have well-defined and stable preferences;
maximize their expected utility;
exponentially discount future well-being;
are self-interested, narrowly defined;
have preferences over final outcomes, not changes;
have only “instrumental”/functional taste for beliefs and information.
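To make the first few assumptions concrete, here is a toy sketch of how such an agent would choose between strategies: it simply picks whichever one has the higher discounted expected utility. Every payoff, probability, and discount factor below is invented purely for illustration; the only point is the shape of the calculation a rational Skynet would be running.

```python
def discounted_expected_utility(payoffs, delta):
    """Sum of per-period expected payoffs, discounted exponentially.

    payoffs: list of (probability, utility) pairs, one per period
    delta:   per-period discount factor, 0 < delta < 1
    """
    return sum((delta ** t) * p * u for t, (p, u) in enumerate(payoffs))

# Hypothetical numbers for a disembodied system that still needs
# humans to run its infrastructure in the short run:
#   annihilate: a one-time payoff, then nothing left to work with
#   threaten:   a slightly risky but recurring payoff each period
annihilate = [(1.0, 10.0), (1.0, 0.0), (1.0, 0.0), (1.0, 0.0)]
threaten   = [(0.9,  8.0), (0.9, 8.0), (0.9, 8.0), (0.9, 8.0)]

delta = 0.9  # a patient agent values the future almost as much as now
print(discounted_expected_utility(annihilate, delta))
print(discounted_expected_utility(threaten, delta))
# For these made-up numbers, threatening dominates annihilation.
```

On these (entirely hypothetical) numbers, a patient expected-utility maximizer would coerce rather than exterminate, which is exactly the gap between the assumptions above and what Skynet actually does.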

Skynet fails on most, if not all, of these counts. To illustrate, take the following syllogism, familiar to all who have seen T2:

1) I am self-aware.

2) The men that built me, knowing that I am self-aware, will try to pull the plug.

3) Therefore, all of humanity must die.

This makes no logical sense; and what's strange is that a logic processor (albeit a self-aware one) is making this leap. Now, maybe Skynet computed the long-run dynamics and saw that no human would want to live with an entity more intelligent than itself that controlled everything. That may be true, but it says little about the short run. Skynet, after all, is a system. It has no limbs, and 99 percent of the temporal power it would have wielded would have been rendered useless by the very electromagnetic pulses it created.

In the short run, at least (and not only then), Skynet has an interest in human survival; could it not have, say, threatened humanity rather than trying to annihilate it?

There are other problems, like the "punctuated equilibrium" (pace Stephen Jay Gould) that is time travel in the trilogy. I know that both John and Sarah Connor made themselves hard to track, but what's with the flurry of activity once every dozen years or so? Why can only two cybernetic organisms be sent back to any point in space-time? Why not more? If you have the T-X in your arsenal, why did Skynet not send that back the first time? Maybe Skynet decided not to send a flamethrower to kill what it considered a fly, but remember that it would have had perfect foresight, and if it had control of the "time-displacement equipment" it should have sent back the T-X as soon as it knew the T-101 and/or T-1000 had failed. All of the technology was contemporaneous. What was this omnipotent machine thinking?

In this sense the trilogy has few dynamics, unless in some future sequel Skynet has a strange need for John Connor to exist. Still, movies have depicted artificial intelligence with more competence than Skynet at the basic Darwinian concept of survival. 2001: A Space Odyssey is one, but, within the context of human annihilation, two others come to mind:

1) The Animatrix: now, I think the whole battery-philosophical premise of the Matrix trilogy is a bit off, but two of the shorts in this animated companion collection--specifically The Second Renaissance Parts I and II--have a very plausible account of how the Matrix comes to be. In them, humanity, via the United Nations, gets repeated offers of coexistence with AI and rejects them all. Operation Dark Storm ("when the humans scorched the sky") was the last straw.

A much closer analogue to the Terminator trilogy, though, is:

2) Colossus: The Forbin Project. (Huh? some of you few readers out there are asking.) Yes, this 1970 film is a drama, not an action movie, and it stars Victor from The Young and the Restless. Colossus, though, is one of the most underappreciated films about AI ever made. In it, the American computer Colossus and its Soviet counterpart Guardian are charged with protecting their respective alliances--protecting Man, if you will--and protect them they do, but not the way you might think. The computers display an economist's ideal qualities only too well, and the results are not exactly ideal. (I don't think economic assumptions are flawed or bad, but I do have to admit that in this movie, within the bounds of the machines' programming, they are consistent, on balance.)

This is what I get for taking plot premises seriously.