Games as Conversations With God

In one of Giant Bomb’s infamous live E3 podcasts, David Jaffe (the foul-mouthed director of Twisted Metal and God of War) described the plight of videogame narrative in an interesting way. It goes like this: The easiest movie to make is about people sitting in a room talking, while the hardest might involve spaceships and a bunch of explosions, right? When it comes to games, though, the easiest thing to make includes spaceships and explosions whereas simulating a bunch of people sitting in a room talking turns out to be incredibly hard. As such, many game designers (including Jaffe himself on God of War) choose not to simulate these conversations at all, instead writing and recording them as cutscenes to be inserted between explosive spaceship battles. Jaffe, in recent years, has become tired of that, and has done a few interviews (as well as one notable DICE talk) encouraging designers to explore the more procedural experiences at which games specifically tend to excel.

So, let’s unpack this a little bit. Why is it that games are so bad at simulating conversations between humans? Well, mostly it’s because of the dirty little secret living between the walls of the information revolution: Computers suck at almost everything. Unless your problem involves ‘doing arithmetic very fast’, it’s going to be rather difficult to convince your microprocessor to help you out with it. (Indeed, the entire field of computer science is essentially concerned with transforming various complicated problems into the smallest possible amount of arithmetic.) Computers are not naturally good at reading or writing in our languages, at emulating our behaviours, mannerisms and decision-making processes, or even at rendering images of us that don’t look like horrifying robot marionettes. They do not think, speak or act like us. They don’t even know what we are, or that we exist, by most definitions of the verb ‘to know’.

We programmers do not speak to computers on our own terms, the way people do in Apple commercials or on the Holodeck in so many episodes of Star Trek. Instead we do so strictly on the computers’ terms, primarily by reading and writing numerical values and doing simple math. The languages with which we instruct them grow increasingly elaborate as we climb the ladder from assembly to C to Java and onward, but they haven’t actually become more ‘human’. Object-oriented programming, for instance, is a useful design strategy, but it bears no real resemblance to English or Mandarin or Latin; in fact, something like 80% of our population can’t seem to understand or apply the principles of OOP very successfully (or, for that matter, almost any other programming principle).
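To make that ‘everything is arithmetic’ point concrete, here’s a minimal sketch (in Python, purely illustrative, not from the original post) of how even something as seemingly human as alphabetizing two words bottoms out in subtraction on character codes:

```python
def compare(a: str, b: str) -> int:
    """Alphabetical comparison as pure integer arithmetic:
    negative if a < b, zero if equal, positive if a > b."""
    for ca, cb in zip(a, b):
        diff = ord(ca) - ord(cb)  # subtract the two character codes
        if diff != 0:
            return diff  # first differing character decides the order
    return len(a) - len(b)  # otherwise the shorter word sorts first

print(compare("cat", "car"))  # positive: 't' (116) comes after 'r' (114)
```

This is roughly what a C-style `strcmp` routine does under the hood: the machine never sees ‘words’, only streams of small integers to subtract.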

When futurists talk about how there is going to be a ‘technological singularity’, in which computers develop self-awareness and within seconds team up to calculate the meaning of everything and enslave/destroy us all, I find myself skeptical. Human consciousness, far from being the default way of existing, is actually this weird thing that resulted from an obscure evolutionary process on a large rock, within a universe of matter, energy, light, gravity, magnetism, weird sub-molecular nonsense and so forth. At some point organisms with genetic structures appeared and, through natural selection, they eventually managed to evolve into these funny-looking bipedal critters with these weird ropy actuators that wrap around a hard endoskeleton and are operated via electrical impulses from this gray lumpy thing. For fairly arbitrary reasons, we happen to obsess over survival and, curiously enough, reproduction. Software, by contrast, is not anything like us. It exists in a universe of numbers, patterns and instantaneous transformation, all of it designed from scratch by humans for a specific purpose (which is why it’s always way worse at its critical functions than practically any biological organism you could name). If we did manage to build an AI capable of making twenty million increasingly powerful copies of itself in an instant, what makes us think it would choose to? Reproduction is a biological thing. If our AI could get online to check Wikipedia and thereby absorb the sum of all human knowledge in 2.3 seconds, why would it want to? We humans are naturally curious, but I assure you my installation of Microsoft Excel is not. An AI may not mind dying; it may not consider the constructs ‘life’ and ‘death’ applicable to itself. It may not even recognize the concept of having a ‘self’ or of there being ‘other people’.
It may regard our solar system as very similar to a brain, and the nuanced little movements of our planets and space junk as essentially the same thing as the complete works of Shakespeare. There’s a good chance it won’t find any of this stuff particularly ‘interesting’ in the way we understand the word. (Now, that pixel noise pattern in the top half of that webcam feed? Y’know, the one with all those weird face-looking blobs moving around in the bottom of the frame? There’s something worth studying!) Our universe sometimes yields humans, but that’s because our universe is weird. Digital environments, being cleaner and featuring less quantum entanglement, are poorly suited to our kind.

The question, then, is how to use these computer things to produce works of art that are relevant to the human experience in all its diversity. Now, perhaps one day computer scientists will squeeze enough human cognition into some set of O(n log n) algorithms that we can indeed all become addicted to the adult-themed Holodeck programs we so desperately crave. Yet I personally shall not hold my breath. Should you be a medium-to-large-scale game developer, you might try hiring some writers, artists and animators to hand-craft everything your videogame humans will look like and do, which can yield some interesting results. But what if, like Jaffe and many others, you simply don’t want to make a game full of cutscenes, dialog trees and other such forms of inelegance (or you happen to be dirt poor)?
