Mapping the Synesthetic Interface

[Image: The Last of Us screenshot]

Ian here—

The following is the spoken presentation version of my talk from DiGRA’s 2014 conference in Snowbird, UT. The full paper, as drafted up for the conference’s proceedings, is available here. You can follow along with the visual presentation for this spoken version here.

Today, I’d like to address a cluster of game user interface design options that I have lumped together under the category of synesthetic interfaces. By this, I’m referring to interfaces that perform a sensory substitution, translating the information normally associated with one sense modality into the phenomenal forms normally associated with another. This is part of a larger interest of mine in examining approaches within game UI design in terms of the epistemic strategies they enact when establishing the relation of players to their avatars, and avatars to their worlds.

[Presentation slide 01]

Now—what does this mean? Well, let’s consider some of the ways in which game UIs are usually discussed. Often, one sees a distinction being made between diegetic and non-diegetic elements. That is: those UI elements that exist within the space of the depicted gameworld, available to be seen or heard by in-game characters, vs. UI elements that communicate directly to the player, and are not experienced by characters in-world. Sometimes this distinction is upheld to forward the claim, increasingly popular in recent years, that the removal of traditional HUD elements in first-person shooters (and other games that put players in a typical avatarial relation with a player-character, offering them control of a single agent) results in increased “immersion,” as it better mimics our perceptual experience of the world.

[Presentation slide 02]

There are also a few alternatives to the diegetic/non-diegetic division floating around in the discourse surrounding game interfaces. Sometimes, one sees more nuanced variations on its basic distinctions, such as Fagerholt & Lorentzon’s attempt to re-map the division as a two-dimensionally conceived design space, in which they end up with six distinct categories addressing various aspects of a UI element’s status within a game’s fiction and position within its 3D rendered space.[i]

[Presentation slide 03]

Kristine Jørgensen, meanwhile, has jettisoned “diegesis”-based terminology altogether, due to the ill-fitting assumptions it imports from literary and film theory, and instead proposes a division between “ecological information,” which is “represented as existing in the gameworld environment in a way that corresponds to how information exists in the real, physical environment,”[ii] and “emphatic information,” which “highlights or reinforces information that is not easily communicated through ecological means.”[iii]

There is some degree of diversity in these vocabularies. But it should be pointed out that all of these attempts to classify the elements of game UIs arrive with a common epistemological assumption: that the world is best considered as something essentially exterior to us, as an objective and independent physical reality that we gain access to via our senses. The options available for game interfaces, then, are to either adopt our normal mode of gaining information through our senses, or disrupt it in some way.

Now, this is certainly an understandable assumption to make. After all, it reflects a commonplace usage of the term “world,” and very commonplace ideas of our relation to it. My own work, however, pulls from the philosophical tradition of phenomenology, which tends to conceive of our relation to the world less as a form of scoping out a fundamentally exterior and independent reality and smuggling details about it back into our heads, and more about being always already embedded within a given situation. Our world, within this tradition, is not simply those physical things that surround us, and exist quite independently from us: it is always based on our relations to those things—the ways they help us act, and the ways in which we can act upon them. The scope of (and constraints upon) our bodily abilities, our know-how, our social position and the roles we can adopt: All of these should be considered intimate parts of our world.

The examples of game UI I’m most drawn to are those that specifically acknowledge and speak to the idea that to step into the shoes of a player-character is always necessarily to step into a new way of being situated within the world, to adopt new modes of perception and understanding.

[Presentation slide 05]

One corner of interest here is how games handle expertise. From Samus Aran, to Solid Snake, to Tony Hawk, one of the major sources of pleasure offered by videogames has long been their placing of players into the role of experts. (After all, how better to provide a fantasy of mastery?) Of course, the ways in which player-character expertise gets translated within a game’s mechanics and presentation vary wildly. Physical expertise is perhaps the easiest to translate: a player-character’s acrobatic grace and athletic adroitness are something players can easily inhabit, as long as games match simple-to-execute player inputs to elaborately impressive player-character animations.

But what about forms of expertise that are more perceptual than athletic? How do games successfully place players in control of characters who can see or hear better, or differently, than they can—who can pick up information the rest of us can’t, notice things the rest of us don’t, focus on details we would miss, and thereby understand situations differently than we do? Well, one way to do so is by turning to sensory substitution – that is, to synesthetic interfaces.

Take, for instance, The Last of Us. The Last of Us offers players control of Joel, a weathered survivor of a disease-spurred apocalypse, who displays many forms of expertise. Some of these are physical. Hand-to-hand combat skills, horse riding skills, skills at crafting and modifying weapons: these are easy to translate via input amplification—press or mash a button, get an impressive result.

Joel also, however, has one notable perceptual skill: his keen hearing, honed over decades’ worth of stealthy survival as a smuggler. This leads to a situation in which fictionally, Joel is an expert at careful auditory observation, while, on the other side of the screen, players, most likely, are not. The Last of Us bridges this gap by turning to a synesthetic vision mode.

When players activate a specialized “listen mode,” Joel “focuses his hearing,” and the game’s interface transforms. On the soundtrack, the game’s score and ambient audio fade to a whisper. On the visual display, sounds that Joel attends to become phenomenalized in visual terms.

Some explanation of what’s going on here: the locations of audio sources within hearing range are indicated by grey smudges on the edge of the frame. As the player maneuvers the camera, these smudges eventually give way to outlines of NPCs, superimposed upon the game’s geometry, complete with visual indications of the sounds they are making to give away their positions. For instance, you’ll see here that footsteps are indicated by circular ripple-like patterns beneath an NPC’s feet. Speech, alternately, is distinguished by concentric halo-like patterns that emanate from the heads of NPCs. This basic distinction allows astute and responsive players to point the camera in the direction of speaking NPCs, at which point their voices become audible and the tactical orders they are barking to their teammates can be intercepted.
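As a rough illustration, the sound-to-visual mapping just described could be sketched as a simple lookup. This is a toy sketch, not Naughty Dog’s actual implementation; every name here is invented for the purposes of illustration:

```python
from dataclasses import dataclass

@dataclass
class SoundEvent:
    kind: str        # hypothetical category: "footsteps" or "speech"
    on_screen: bool  # is the sound's source inside the camera's view?

def visual_cue(event: SoundEvent) -> str:
    """Translate a heard sound into its visual stand-in."""
    if not event.on_screen:
        return "edge smudge"    # grey smudge at the edge of the frame
    if event.kind == "footsteps":
        return "ground ripple"  # circular ripples beneath the NPC's feet
    if event.kind == "speech":
        return "head halo"      # concentric halos emanating from the head
    return "outline"            # bare silhouette, no sound-specific cue
```

The point of the sketch is the structure of the substitution: each auditory category gets its own visual signature, so the player can read sound with their eyes.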

Now, at one point in their attempt to mark out a comprehensive design space for game UIs, Fagerholt and Lorentzon admit that strict categories are bound to break down on a granular level, since, at some points, “a sign vehicle might be non-diegetic (e.g. digits indicating how many bullets are left in a weapon clip) while the referent (the amount of bullets in the clip) may be of diegetic nature.”[iv]

The Last of Us, in its genuine attempt to situate us in Joel’s unique world, making us privy to the specialized way in which he perceives, confounds these divisions in even greater ways. We could, if we wanted to, classify the “sign vehicle” of listen mode as a representative of Fagerholt and Lorentzon’s category of “geometric” UI elements. But what is its “referent”? This question has no easy answer. On a certain level, its referent is environmental sounds: fully fictionally accounted for, but accessed in an unusual way. On another level, however, the referent of listen mode is not anything that is external to Joel at all: rather, it is his expertise in stealth and careful auditory observation (which is itself an element of the game’s fiction). The two arrive packaged together, inseparably, confounding attempts to separate out “diegetic” from “non-diegetic” through their parallel confounding of attempts to separate out an isolated “world” from Joel’s perceptual abilities.

Placing players into the role of an expert human is one thing. But it’s hardly the most radical challenge in game interface design. We can find even more daring examples of synesthetic interfaces in games that present players with non-human player-characters.

[Presentation slide 07]

The history of videogame player-characters is, of course, littered with frogs, hedgehogs, and bandicoots—and, more recently, Pomeranians, octopi, and goats. The examples I want to turn to, however, make a greater effort than the average animal-avatar game towards actually placing players in the specific Umwelten of the animals they control.

I’m borrowing the term Umwelt from the biologist and pioneering biosemiotician Jakob von Uexküll, who noted that different species of organism possess such drastically different ranges of perceptual experience and available actions that they effectively each exist in a different environing world, or Umwelt. “Every subject spins out, like the spider’s threads,” Uexküll writes, “its relations to certain qualities of things and weaves them into a solid web, which carries its existence.”[v]

For example: For ticks, finding a warm-blooded host takes precedence over other concerns, and so ticks live in a world in which warmth and the smell of butyric acid are the only signals that matter, and attentiveness to these signals is the only perceptual experience that matters. Honeybees, on the other hand, live in a world of open blossoms and closed buds, so these are the paramount features that define their perceptual experience, and their world.

[Presentation slide 08]

Many of gaming’s most robust attempts to portray the Umwelten of animal avatars display a tendency towards synesthetically substituting smell into vision. We can postulate some contributing factors to this. First, there is already a rudimentary language in place for “depicting” scents in visual media (e.g., stink lines in comics). Second, the sense of smell looms large in most peoples’ conception of nonhuman perceptual experience, especially that of other mammals. Finally, there are practical considerations for game mechanics in play:

Glowing scent “clouds” are a convenient way to highlight interactable objects, turning player attention to the more relevant portions of the environment. As such, they’re a good way to provide straightforward guidance to the player. Scent “trails,” meanwhile, offer up an orientation system that is elegantly straightforward and can feel more fictionally relevant than the use of waypoint markers or compass arrows. As Midna of The Legend of Zelda: Twilight Princess will attest, sometimes it’s just more convenient to place players in the world of a canine.

This type of olfactory-to-visual substitution as a way of presenting player guidance is, however, far from the most ambitious or interesting tactic in the portrayal of nonhuman Umwelten in games.

For a more radical example, we can turn to the 2005 Gamecube game Geist, which features a spectral protagonist who can inhabit multiple nonhuman animals and inanimate objects throughout the course of the game, and which outdoes Dog’s Life and Twilight Princess on a number of fronts. It renders smell across multiple output channels (rather than simply visualizing it). It recognizes a nonhuman Umwelt as an unavoidable consequence of inhabiting a nonhuman body (rather than a vision mode that can be toggled on and off at the player’s convenience). And, finally, it treats scents as potential obstacles to the player-character (rather than merely a guide). All of these design choices come through in one particular moment in the game:

Initially inhabiting a human host, the player must make their way down to a basement area, where they must de-possess their human host and transfer their consciousness into a rat, for puzzle-related reasons. This transformation brings with it the adjustments of scale and movement possibilities one would normally expect. The moments ahead, however, also contain a dastardly surprise for the player: the once-innocuous cheese-baited traps dotting the map, which the player probably did not even notice as a human, have now become deadly hazards. And avoidance of them is not simple.

Once within scent-range of these traps, players are greeted with three cues: a graying of the edges of the screen, simulating tunnel vision; a heartbeat-like pulse emanating from the GameCube controller’s rumble; and a loss of control over their rat body. Once within “olfactory orbit” of these sites, the rat’s involuntary and base attraction to the smells of its environment prompts a gradual drift towards the traps, which players must actively compensate for by yanking the analog stick in the opposite direction.
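The tug-of-war between scent and stick can be sketched as a toy update rule. This is a hedged approximation of the mechanic as described above; none of these names or values come from Geist’s actual code:

```python
def rat_velocity(stick_input: float, trap_pull: float) -> float:
    """
    Net movement along one axis while in "olfactory orbit" of a trap.
    The scent exerts a constant involuntary pull toward the trap (here,
    the positive direction); the player must hold the stick the other
    way to cancel it out. All names and magnitudes are illustrative.
    """
    return stick_input + trap_pull

# With the stick neutral (0.0), the rat drifts toward the trap;
# yanking the stick away (negative input) overcomes the pull.
```

The design insight lives in the addition: the pull is applied whether or not the player does anything, so safety requires continuous, active resistance rather than mere avoidance.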

“Every animal is surrounded by different things,” writes Uexküll; “the dog is surrounded by dog things and the dragonfly is surrounded by dragonfly things.”[vi] Geist wonderfully translates this edict into gameplay terms: What were, for a human, mere incidental details become, for a rat, dangerous sensory attractors, basins of perilous desire.

Synesthetic interfaces, we’ve seen, form a crucial component in game developers’ toolbox when it comes to the representation of epistemic otherness. Animals and experts, the UI design of these games gently implies, lie on a common spectrum: they come to know their world by different means from the rest of us, built up from the different bodily and sensory relations they have to the situations at hand.

The examples we’ve looked at today vary wildly in ambitiousness and in degrees of success. But, given that offering the opportunity to see the world as others see it is one of the greatest gifts games can potentially offer, we can only hope that these types of experiments in situating us in different points of view (or audition, or olfaction) continue to develop in interesting directions in the future.

[i] Fagerholt, Eric, and Magnus Lorentzon. “Beyond the HUD: User Interfaces for Increased Player Immersion in FPS Games.” MS thesis. Göteborg, Sweden: Chalmers University of Technology, 2009.

[ii] Jørgensen, Kristine. Gameworld Interfaces. Cambridge, MA: MIT Press, 2013. p. 79.

[iii] Jørgensen, Gameworld Interfaces, p. 85.

[iv] Fagerholt and Lorentzon, “Beyond the HUD,” p. 46.

[v] Uexküll, Jakob von. A Foray into the Worlds of Animals and Humans, with A Theory of Meaning. Translated by Joseph D. O’Neil. Minneapolis, MN: University of Minnesota Press, 2010. p. 53.

[vi] Uexküll, Jakob von. “The New Concept of Umwelt: A Link Between Science and the Humanities.” Translated by Gösta Brunow. Semiotica 134:1–4 (2001): 111–23. p. 117.
