The Cult of the Peacock

It’s easy to forget that at one time all videogames had manuals. I used to like reading manuals. Manuals were cool. Now, instead of manuals, we have interactive tutorials. They take about fifty times longer to produce, three times longer to consume, and players hate them so much that their highest aspiration is to become completely transparent. Currently I spend most of my waking hours developing them. It should come as no surprise that I hate them too.

This is a story about how these things happened. It’s sort of a companion piece to the article I wrote about Liz Ryerson’s Problem Attic in that it examines the reasons why games like that became unfashionable, how this is a bad thing, and what we might do to fix it. It’s a story about the history of interaction design both in academia and the games industry, as well as my experiences travelling through those spaces. It’s a story about how I got the kink in my neck, and the slow death of the videogame manual. It begins with a teapot and ends with a peacock. More than anything, though, it is about apotheosis. There are four parts. Shall we begin?

Read more…

Fashion, Emptiness and Problem Attic

Problem Attic is a game by Liz Ryerson that you can read about and play (for free!) on her website.

problem attic

Any designed work can be decomposed into two different kinds of features: Intrinsic features and extrinsic features. An intrinsic feature is something we judge to be a non-reducible atom of actual value that the audience wants and the work provides—that is, the work’s purpose—while an extrinsic feature is anything that exists solely to realize that purpose, providing no actual value in itself. To design something we must first decide which intrinsic features we hope to provide and then do so as efficiently as possible by devising and iterating on a set of extrinsic ones. Here is a quick example: The intrinsic features of a hammer include ‘delivering impacts to objects’ and ‘inserting/removing nails from rigid structures’, and to best realize these we might iterate on any number of extrinsic ones (such as the hammer’s shape, the materials of which it is composed, or its manufacturing cost).

A product, which is a special kind of designed work, has at least two intrinsic features. One is to perform the task for which it was made; the other is to convince you to buy it. (The next time you hear the phrase ‘ludonarrative dissonance’ ask yourself whether the dissonance you’re discussing might actually stand between ‘what marketing decided would generate money’ and ‘what the designers defiantly attempted to produce’.)

Iterations on a design’s intrinsic features are transformative; they change what the work is on a fundamental level by changing what it does. (A hammer that cannot deliver impacts to objects is no longer, ontologically speaking, a hammer; it has become something else.) Iterations on a design’s extrinsic features are merely ameliorative; they make it better at fulfilling its purpose without changing its nature. Thus we only value extrinsic features insofar as they improve a design’s ability to give us the things we actually want, and we are quite content to discard them as soon as we find more effective ones. Walmart would stop selling hammers if they could figure out how to market telekinesis. Google would stop making iPhone apps if they could perfect the horrifying spider drones that burrow into your brain through your nasal cavity and telepathically communicate bus directions to you.

The intrinsic features of Art media like literature or film, unlike those of hammers and map APIs, are not easily reducible into language. Whereas to design a hammer involves finding ways of realizing features whose value is readily apparent, to make Art is to search for value lying beyond the edges of our understanding: To capture something we know is important to us even though we cannot quite say why. This is what makes ‘Art’ so famously difficult to define, and why we speak not about ‘novel designers’ or ‘film designers’ but about the authors of these works. Authors are a specialized type of designer who work to realize feelings, concepts or moments; often they attempt to connect in some fashion with our shared humanity. We cannot fully express what their work is for because its value transcends understanding. Thus while conventional design undoubtedly remains useful as a means of iterating towards our authorial objectives (the language by which we communicate mood during a film, for example, is the product of very sophisticated design work) it tells us nothing about what our authorial objectives should actually be nor what our Art becomes when we realize them.

Videogames inherit a little from Art but mostly from product design, which has been kind of a problem for us. As an industry we put faith in the idea that there is intrinsic value in the games we develop, although we don’t think very expansively about what that could be; instead we abstract it, using ugly words like “content” as placeholders for value without ever proving that it truly exists. We then set about designing incredible machines that shuttle players towards these placeholders with extremely high efficiency, which as designers is really what we’re good at. We make the interface as usable as we can because players need it in order to learn the rules. We teach the rules very carefully because players need them in order to grok the dynamics. We shape our dynamics strategically because enacting them is what will stimulate players to feel the aesthetics. Somewhere at the core of all this, we suppose, lives the “content” players are attempting to access: That which we have abstracted away so that we could hurry towards doing safe, understandable product design rather than risky, unfathomable Art. In game design we enjoy paying lip service to aesthetics. What, then, shall we say are the aesthetics that we can package up into 5–60 hours of intrinsic value? Challenge? Agency? Story? Fun? Is ‘Mario’ an aesthetic? How do we stimulate the Mario part of the brain? Oh hey, wait, look over there! Someone is confused about what that UI indicator represents! TO THE DONALD NORMAN MOBILE.

The more time I spend examining my professional work and that of my peers in the games industry, the more I come to believe our near-sightedness has crippled us. We have avoided building sophisticated pleasures that demand and reward the player’s investment, preferring instead to construct concentric layers of impeccably-designed sound and fury over an empty foundation of which we are afraid and at which we can hardly bear to look. We gamify our games, and then we gamify the gamifications, so that many different channels of information can remain open all at once, distracting the player by scattering her attention across a thousand extrinsic reward systems that are, in themselves, of no value whatsoever. We delay the realization that our true goal is not to deliver some fragment of intrinsically valuable “content” rumoured to reside, like a mythical unicorn, in the furthest reaches of our product; our true goal is instead to find something, anything to mete out over the course of 5–60 hours that will somehow account for the absence of real intrinsic value. It is not, therefore, the content that truly matters to us; it is the meting out.

Though the products we design ought to provide value for players and money for us, they currently only pretend towards the former function while actually performing solely the latter one. This deception permits us to continue making intrinsically simple products, avoiding transformative changes to our designs that we fear would render them less digestible; we instead rely upon a pattern of amelioration by technical advancement wherein we deliver as few intrinsic features as possible (and the same ones over and over again), but with intricate fashions heaped upon them. We have abandoned game literacy in the process, and as a result we now find ourselves trapped in the business of making increasingly-elaborate pop-up books.

Read more…

Games as Histories


The internet has trouble understanding optical phenomena

Why is it that so many of my friends who count Diablo 2 amongst their all-time favourite videogames find themselves so disappointed with its sequel?

It is tempting to point into the horrific maelstrom that is public opinion on the internet and claim that Diablo 3 suffers predominantly from an overabundance of rainbows. While I do believe said rainbows represent a tangible detail to which some have pointed in an effort to articulate legitimate concerns with the tone of the game world, that is only one piece of the story. The remainder involves the game’s notorious Auction Houses, which are far more interesting to discuss because they reveal something surprising about loot and its design. Diablo 3 commits the cardinal sin of Game Design Idolatry: It is so fixated on the whirring and buzzing of its item generating machine that it loses sight of the aesthetics for which that system was originally designed, and which made Diablo 2 so memorable.

Read more…

Narrative Economy Makes Walking Dead Work

In drama there is a principle known as “Chekhov’s gun”. It goes like this: If, in act one of a play, you place a loaded gun prominently in the middle of the stage so that it becomes notable to the audience, it behooves you to fire the thing before the curtain falls. If you don’t, it means you’ve wasted the audience’s time and attention on a meaningless detail when these precious resources could instead have been spent on something that contributes more earnestly to the quality of the story. The concept is most frequently applied to storytelling, but in fact it’s applicable to all forms of design and can be restated in a way familiar to all creators: Strive to make every feature of your product as purposeful as it can possibly be. Good design is economical; it maximizes utility while minimizing waste.

Videogames that aim to provide the player with interesting narrative choices have long suffered from a lack of economy, and this is partly why we find game stories to be so inferior to those of film or novels. Consider, for example, the time-honoured trope of wheeling two characters the player has never seen before out onto the stage, explaining some conflict in which they’ve mired themselves, then asking the player to decide everyone’s fate. Often the choice allows the player to make some clearly defined moral stance (the proverbial ‘baby save/puppy kick’); occasionally it involves thought-provoking ethical judgments (‘gray area’). Ideally the player asks herself: What is the best way to handle this situation? Pragmatically, however, I can think of a few more pressing concerns: Who the hell are these people, why should I care about them, why should I care what happens to them, and how does any of this affect me?

Read more…

Violence in Games Has Gotten Weird

One afternoon in university a classmate asked me what I wanted to do after graduating. I immediately said “Video games!”, and she was surprised; she thought I’d go off to make weird Arduino-based projects for the Surrey Art Gallery or something. She didn’t think me the type to play or want to make video games. Eying me suspiciously she asked: “Aren’t those really violent?”

I was expecting this; I was, in fact, prepared for it in the way only an obnoxious know-it-all like myself can be prepared. “Well actually…” I began, eager to waste the next two minutes of her life ranting about my favourite subject. “The medium’s fixation on violent conflict is an unfortunate artifact of early design constraints. Just underneath the blood spatter are interesting spatial and temporal problems that constitute the actual game, and designers dress those things up with violence only because the precedent has become deeply ingrained and difficult to erase.”

Read more…

When Not to Turn It to 11

“The numbers all go to eleven. Look, right across the board, eleven, eleven, eleven and…”

“Oh, I see. And most amps go up to ten?”

“Exactly.”

“Does that mean it’s louder? Is it any louder?”

“Well, it’s one louder, isn’t it? It’s not ten. You see, most blokes, you know, will be playing at ten. You’re on ten here, all the way up, all the way up, all the way up, you’re on ten on your guitar. Where can you go from there? Where?”

“I don’t know.”

“Nowhere. Exactly. What we do is, if we need that extra push over the cliff, you know what we do?”

“Put it up to eleven.”

“Eleven. Exactly. One louder.”

“Why don’t you just make ten louder and make ten be the top number and make that a little louder?”

“[pause] These go to eleven.”

This Is Spinal Tap

As a civilization we are preoccupied with going to 11. We first like to rank things (dining experiences; movies; potential sex partners) using some set of real numbers like 1-10 (inclusive); we then seek cases where the object in question is so illustrious as to exceed the arbitrary scale we just invented, perhaps due to outstanding salt prawns or because she hails from ‘Elevenessee’. Our species invented the ‘five-alarm chilli’, then the six-alarm chilli, and then, conceived in the darkest reaches of Flavour Mordor, the insidious eight-alarm chilli. (How much spicier is it? Three alarms.)

On a scale of 1 to 10, where 1 is ‘not doing it’ and 10 is ‘just doing it’ I suspect Nike is currently holding somewhere around 12 or 13, which constitutes a year-over-year growth of 1.2 just do its.

I once read an article about how id Software’s publisher wanted them to fill their games with as many ’11 moments’ as possible. Always be turning it up to 11! More awesomewatts per second! Crank that funometer all the way up, and then crank it even more up! One gets the impression that as game designers we want the minds of our players to be exploding violently on some kind of perpetual basis.

Poster for Daikatana reading "John Romero's about to make you his bitch."

John Romero turns it up to 11

The joke, on a structural level, is that this line of thinking misapplies the concept of upper- and lower-bounded scales to a problem better suited to the more general ideas of greater and lesser. A guitar amplifier has a minimum gain (off) and a maximum gain (the loudest it will let you go), which is to say it has a range, yet the members of Spinal Tap have identified a critical deficiency in this device’s design: Over time, our ears become numb to the intensity of anything played at the same consistent volume. When something first gets loud it seems pretty loud, but if we listen to it for three minutes straight it will at some point stop seeming ‘loud’ and start seeming ‘the same’. To be musical, therefore, it is necessary to let go of the absolute measure of volume and start dealing instead in ‘getting louder’ and ‘getting quieter’, employing something performers call dynamics (which is another word for ‘change’). This is why we characterize the passages of a song not by their decibel rating (our sensory apparatus is actually totally incapable of measuring that reliably anyway), but instead by when the song gets louder, when it gets quieter, and how much its volume changes relative to how loud it used to be.

Read more…

Games as Conversations With God

In one of Giant Bomb’s infamous live E3 podcasts, David Jaffe (the foul-mouthed director of Twisted Metal and God of War) described the plight of videogame narrative in an interesting way. It goes like this: The easiest movie to make is about people sitting in a room talking, while the hardest might involve spaceships and a bunch of explosions, right? When it comes to games, though, the easiest thing to make includes spaceships and explosions whereas simulating a bunch of people sitting in a room talking turns out to be incredibly hard. As such, many game designers (including Jaffe himself on God of War) choose not to simulate these conversations at all, instead writing and recording them as cutscenes to be inserted between explosive spaceship battles. Jaffe, in recent years, has become tired of that, and has done a few interviews (as well as one notable DICE talk) encouraging designers to explore the more procedural experiences at which games specifically tend to excel.

So, let’s unpack this a little bit. Why is it that games are so bad at simulating conversations between humans? Well, mostly it’s because of the dirty little secret living between the walls of the information revolution: Computers suck at almost everything. Unless your problem involves ‘doing arithmetic very fast’, it’s going to be rather difficult to convince your microprocessor to help you out with it. (Indeed, the entire field of computer science is essentially concerned with transforming various complicated problems into the smallest possible amount of arithmetic.) Computers are not naturally good at reading or writing in our languages, at emulating our behaviours, mannerisms and decision-making processes, or even at rendering images of us that don’t look like horrifying robot marionettes. They do not think, speak or act like us. They don’t even know what we are or that we exist by most definitions of the verb ‘to know’. We programmers do not speak to computers on our own terms, like people do in Apple commercials or on the Holodeck in so many episodes of Star Trek. Instead we do so strictly on the computers’ terms, primarily by reading/writing numerical values and doing simple math. The languages with which we instruct them grow increasingly elaborate as we climb the ladder from assembly to C to Java and onward, but they haven’t actually become more ‘human’. Object-oriented programming, for instance, is a useful design strategy, but it bears no real resemblance to English or Mandarin or Latin and, in fact, something like 80% of our population can’t seem to understand or apply the principles of OOP very successfully (or, for that matter, almost any other programming principle).

When futurists talk about how there is going to be a ‘technological singularity’ in which computers develop self-awareness and in several seconds team up to calculate the meaning of everything and enslave/destroy us all, I find myself skeptical. Human consciousness, far from being the default way of existing, is actually this weird thing that resulted from an obscure evolutionary process on a large rock within a universe of matter, energy, light, gravity, magnetism, weird sub-molecular nonsense and so forth. At some point there arose organisms with genetic structures and, through natural selection, they eventually managed to evolve into these funny looking bipedal critters with these weird ropy actuators that wrap around a hard endoskeleton and are operated via electrical impulses from this gray lumpy thing. For fairly arbitrary reasons, we happen to obsess over survival and, curiously enough, reproduction. Software, by contrast, is not anything like us. It exists in a universe of numbers, patterns and instantaneous transformation, all of it having been designed from scratch by humans for a specific purpose (this is why it’s always way worse at its critical functions than practically any biological organism you could name). If we did manage to build an AI capable of making twenty million increasingly-powerful copies of itself in an instant, what makes us think it would choose to? Reproduction is a biological thing. If our AI could get online to check Wikipedia and thereby absorb the sum of all human knowledge in 2.3 seconds, why would it want to? We humans are naturally curious, but I assure you my installation of Microsoft Excel is not. An AI may not mind dying; it may not consider the constructs ‘life’ and ‘death’ applicable to itself. It may not even recognize the concept of having a ‘self’ or of there being ‘other people’. It may regard our solar system as very similar to a brain, and the nuanced little movements of our planets and space junk as essentially the same thing as the complete works of Shakespeare. There’s a good chance it won’t find any of this stuff particularly ‘interesting’ in the way we understand the word. (Now, that pixel noise pattern in the top half of that webcam feed? Y’know, the one with all those weird face-looking blobs moving around in the bottom of the frame? There is something worth studying!) Our universe sometimes yields humans, but that’s because our universe is weird. Digital environments, being cleaner and featuring less quantum entanglement, are poorly suited to our kind.

The question, then, is how to use these computer things to produce works of art that are relevant to the human experience in all its diversity. Now, perhaps one day computer scientists will squeeze enough human cognition into some set of O(n log n) algorithms that we can indeed all become addicted to the adult-themed Holodeck programs we so desperately crave. Yet I personally shall not hold my breath. Should you be a medium-to-large scale game developer you might try hiring some writers, artists and animators to hand-craft everything that your video game humans will look like and do, which can yield some interesting results. But what if, like Jaffe and many others, you simply don’t want to make a game full of cutscenes, dialog trees and other such forms of inelegance (or you happen to be dirt poor)?

Read more…

The Quest for Good Platformer Mechanics Part 2: Fat Men on Ice Skates

Last week I made the very opposite of a bold assertion in claiming that the NES classic Super Mario Bros is the gold standard in good platformer mechanics. I also mentioned Tim Rogers and the “sticky friction” that so endeared him to the game; today I’d like to elaborate on that second part.

If Metacritic were around back in 1985, I’m sure you could pull up a whole swathe of reviews describing SMB’s controls as ‘intuitive’, ‘tight’ or ‘pixel perfect’, and these words could lead the uninitiated observer to conclude that Mario is some kind of Olympic athlete when in fact this is (barring certain unfortunate circumstances) patently untrue.

Mario & Sonic at The Olympic Games

Pictured: Certain unfortunate circumstances

You see, while I was working on my ePortfolio last August I had cause to go back and check, and it turns out that Mario moves not like a gymnast or an action hero, but rather more like a fat guy on ice skates. It takes him a bit of time to really get moving and even longer to stop. Each jump brings not only the possibility of over- or undershooting your target, but also of landing with too much speed and plummeting forward into a Goomba or a fireball or something. Does this sound like ‘tight, intuitive, pixel perfect platforming’ to you? Well, perhaps it should.

As you may expect from a game as highly-acclaimed as SMB, its primary mechanics are all in there for good reasons. Why does Mario move like a fat guy on ice skates? Is the timer on every level there solely to provide an anachronistic, arcade-ish scorekeeping system? Why, on a controller that has two primary buttons, is one of them dedicated mostly to making your dude ‘run’? (And why, exactly, would you ever want to release that button?) The short answer: Going fast is fun. The long answer has to do with difficulty scaling, the design constraints of platform games in the early 1980s, and malevolent cacti.
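The ‘fat guy on ice skates’ handling described above boils down to a per-frame velocity update in which acceleration is slow and deceleration is even slower. Here is a minimal sketch of that idea; the constants and names are invented for illustration and have no relation to Super Mario Bros’ actual physics values:

```python
# Illustrative constants only -- not Super Mario Bros' real physics numbers.
WALK_MAX = 1.5   # top speed while walking (units per frame)
RUN_MAX = 2.5    # top speed while the run button is held
ACCEL = 0.05     # slow ramp-up: it takes a while to get moving
DECEL = 0.03     # even slower ramp-down: it takes longer to stop

def step(velocity, direction, running):
    """Advance horizontal velocity by one frame.

    direction: -1 (left), 0 (no input), +1 (right)
    running:   True while the run button is held
    """
    top_speed = RUN_MAX if running else WALK_MAX
    if direction != 0:
        # Ease toward top speed rather than snapping to it.
        velocity += ACCEL * direction
        velocity = max(-top_speed, min(top_speed, velocity))
    elif velocity > 0:
        velocity = max(0.0, velocity - DECEL)  # skid to a stop
    elif velocity < 0:
        velocity = min(0.0, velocity + DECEL)
    return velocity
```

Because the deceleration constant is smaller than the acceleration constant, releasing the d-pad lets the character glide for noticeably longer than it took to get up to speed, which is one plausible reading of the inertia being described here.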

Read more…

The Quest for Good Platformer Mechanics

What should a good platformer feel like? If you wanted to you could stop and think about it for a minute, but fortunately you don’t have to because I am here today to argue that it should feel mostly like this:

A screenshot of Super Mario Bros

Pictured: What good platformers should feel like

Super Mario Bros was not the first platform game ever released, but it is probably the most important. It is the bedrock on which the contemporary ‘platformer’ genre is founded; it is the thing that made Shigeru Miyamoto a household name, and the key component in Nintendo’s magical money-printing machine. But enough navel-gazing: What exactly is so special about Super Mario Bros? What makes it work?

You may be familiar with Tim Rogers. He is a game designer and possible crazy person who writes multi-thousand word essays on Kotaku as well as the excellent Action Button Dot Net. And, in one of his Kotaku pieces, he succinctly explains (which is rare for him) the secret to Mario’s success:

If you asked a space alien from the future to play Super Mario Bros., and then play any of the other side-scrolling platform games of that era, and then report back to you with one sentence on what he perceived as the major difference between the two, he would speak gibberish into his auto-translator, and it would output a little piece of ticker-tape with the words “STICKY FRICTION” printed on it. It is the inertia of Mario’s run that endeared him to us. It didn’t have anything to do with brand strength or graphic design. Those things were secondary. It was all about the inertia, the acceleration, the to-a-halt-screeching when you change direction. You can feel the weight of the character. People never put these feelings into words when talking about games, though they really, really are everything.

Friction, for our purposes, is more than a Newtonian coefficient or counter-force; it’s an aesthetic. It’s the way things move relative to one another and how players interpret that movement to construct their understanding of the game world. When Rogers speaks of inertia, he refers to the way virtual pixels on a screen can gain real mass, movement and livelihood through their dynamics (that is, the manner in which they change). Super Mario Bros does not need high-quality sound effects or 32-bit colour to convey itself to us. It does not need a rumble pack or a motion sensor. We grow to understand it with each of Mario’s steps, leaps and falls.

I thought a lot about this as I first started developing the platformer-ish aspect of my portfolio. Frankly, it scared me a little. I wasn’t quite sure how I wanted the movement to work, but I knew that if the friction didn’t feel right the whole thing would seem cheap and flimsy. And so I started coding. I gave myself a few simple parameters to tweak, then a few more complicated ones. I wondered what would happen if the character could jump or climb stairs and ramps; perhaps if my dude felt good to control in a real Mario level it would also feel good running side-to-side through a less elaborate virtual campus? I followed this rabbit hole down into collision detection systems, the Box2D API, and ultimately a rather complicated software solution. I’ve put it here for you to play around with and, if you like, download and use for your own purposes.
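As a hypothetical illustration of the ‘few simple parameters’ mentioned above (the names and values here are mine, invented for the example; the actual solution described in the post was built on Box2D), a tunable movement configuration might look something like this:

```python
from dataclasses import dataclass

@dataclass
class MovementParams:
    """Hypothetical tuning knobs for a platformer character controller."""
    accel: float = 0.05        # ground acceleration per frame
    friction: float = 0.03     # deceleration per frame when no input is held
    max_speed: float = 1.5     # horizontal speed cap
    jump_impulse: float = 4.0  # initial upward velocity when jumping
    gravity: float = 0.2       # downward acceleration per frame

def jump_height(p):
    """Peak height of a jump under constant gravity: v^2 / (2g)."""
    return p.jump_impulse ** 2 / (2 * p.gravity)

def frames_to_stop(p):
    """How long the character slides after input is released."""
    return p.max_speed / p.friction
```

Exposing the constants this way makes the feel directly tweakable: halving `friction` doubles the slide, which is exactly the sort of knob you want when chasing a particular flavour of sticky friction.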

Read more…

Problems & Solutions: Skyward Sword

Continuing last week’s theme of ‘game endings that I don’t like’, today I bring you an article about The Legend of Zelda: Skyward Sword. It contains endgame spoilers including the names of principal characters and the mechanical details of one particular gameplay sequence, but I won’t ruin any plot twists; if you’ve played a Zelda game before, nothing in here ought to be news to you. Let me tell you what this article isn’t about. It isn’t about how good and/or cruddy the motion controls are, the standing of Skyward Sword relative to other Zelda games, or whether the franchise is ‘getting old’ or ‘formulaic’. Also, I promise it doesn’t contain a single word about Fi.

Instead, this post has something in common with my thoughts on Mass Effect 3 in that it points out ways in which the story diverges uncomfortably from the game’s mechanics, getting its aesthetic signals all crossed up. There is one segment, right before the final battle, in which the game doesn’t know how it wants the player to feel. The results are frustrating, hilarious, and also fairly instructive.

The Segment in Question

Unsurprisingly, we find Link in dire circumstances. Ghirahim (the bad guy) is preparing a magical spell that will steal Zelda’s soul, both killing her and breathing life back into a powerful demon king whose resurrection will bring about a proverbial “age of darkness.” The scene takes place in an environment already familiar to the player, on a huge spiral ramp leading downward and in towards the center of a big crater. Link is at the top; Ghirahim and Zelda are at the bottom. Ghirahim summons literally every Moblin he can muster to delay our hero’s progress down the ramp. An army of monsters charges upward from the center of the crater, in greater numbers than the player has ever seen before.

The Moblin Horde

Every day they moblin’.

So far, so good: The game has assembled all the pieces necessary for a dramatic climax. The imminent soul stealing spell creates the illusion of a ticking clock (let’s choose to suspend our disbelief here) and pushes the situation forward, while the army of monsters is hopefully enough to give us pause and throw into question what the outcome will be (remember that ‘drama’, in games, can be defined as the confluence of inevitability and uncertainty). We know that Link is both powerful and courageous to the point of near-idiocy; we suspect Ghirahim is afraid. We expect that Link will save Zelda or die trying, and we are prepared to help him do so. We believe that he can succeed against difficult odds; possibly we are even willing to suspend our disbelief for a moment and ignore the fact that it’s a Zelda game and of course he’s going to succeed. This is the narrative and ludic climax.

Whereas a single Moblin was once tough enough to challenge Link on its own, his sword is now so powerful that it can kill them in a single hit. Furthermore, there are stamina powerups lying along the ramp that permit Link to use his ‘whirlwind’ area attack repeatedly. In my playthrough, this encouraged me to lead Link down the ramp in an impressive swath of destruction, destroying multiple enemies per attack and generally being a badass. I must confess I actually began to feel a little bit like the divine wrecking ball Link’s character embodies throughout the narrative. I felt engaged; I felt excited! I half-expected Kanye West’s “Power” to start blaring in the background (as it should at some point in all good videogames). Alas, it was at this point that things started going sideways.

Read more…