Coding Ethical Codes

As most sane people will tell you, videogames are quite different from real life.  Stepping into a virtual world means accepting that you are entering a space where the normal rules are temporarily suspended in favor of the game’s rules.  Huizinga calls this the “magic circle.”1 Thus, a moral person may do seemingly immoral things such as killing or stealing in the context of a game because exploring these issues is often the whole point of the game.  The way in which the rules of these virtual worlds are designed has a major impact on the overall experience of playing the game.

Often, the ethics of a virtual world are simply enforced upon the player by limiting certain kinds of interaction.  In The Legend of Zelda, for example, the hero can (and often must) kill all manner of creatures with his sword and other weapons.  Other actions, such as killing a shopkeeper or stealing his goods, are not allowed.  The hero can swing his sword at the shopkeeper, but it simply passes right through him.  Killing innocent merchants is unheroic; therefore, the hero cannot do it, even if he tries.  Other games are less explicit with questions of ethics.  Villagers in Minecraft sell helpful items to the player, much like the merchants in Zelda.  They are also frequently in need of assistance to save their villages from the same monsters that the player has to deal with.  Although there is an implicit hero role for the player to fill, she needn’t actually help the villagers and is perfectly capable of killing them herself.  While playing the hero and having a flourishing village near her castle might be more advantageous, the player is just as free to take on the role of a murderous warlord, leaving a trail of lifeless ruins in her wake.
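
To put the contrast in concrete terms, here is a minimal sketch of the two approaches; the class and flag names are hypothetical and not drawn from either game’s actual code.

```python
# Hypothetical sketch: design-enforced ethics as a simple flag check.

class NPC:
    def __init__(self, name: str, hit_points: int, protected: bool):
        self.name = name
        self.hit_points = hit_points
        self.protected = protected  # True for shopkeepers and the like

def swing_sword(target: NPC, damage: int) -> None:
    if target.protected:
        # Zelda-style: the blade passes right through; the rules refuse
        # to process the "immoral" interaction at all.
        return
    # Minecraft-style: villagers are ordinary entities and can be killed.
    target.hit_points = max(0, target.hit_points - damage)

shopkeeper = NPC("shopkeeper", hit_points=10, protected=True)
villager = NPC("villager", hit_points=10, protected=False)
swing_sword(shopkeeper, damage=5)  # no effect: the ethics live in the rules
swing_sword(villager, damage=5)    # works: the ethics are left to the player
```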

Still other games take a middle route, allowing the player to perform a wide range of actions but providing an in-game ethical code to guide and evaluate them.  These can be simple good-versus-evil meters, such as karma in Fallout 3 and the light-versus-dark mechanics in various Star Wars games, or they can be more complex, such as the eight virtues in Ultima IV.  In either case, such a system requires game developers to assign a moral value to each potential action that the player can perform in the game.  The way that developers choose to define ethical behavior within the game world has a significant impact on the overall experience of playing the game.
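
A meter of this kind is, at bottom, a table of developer-assigned values.  Here is a minimal sketch in the spirit of Fallout’s karma; the actions and point values are purely illustrative, not taken from any actual game’s data.

```python
# Illustrative karma meter: developers pre-assign each action a moral value.

KARMA_VALUES = {
    "give_water_to_beggar": +5,
    "steal_from_shop": -5,
    "kill_innocent": -50,
}

def perform_action(karma: int, action: str) -> int:
    """Apply the developer-assigned moral value of an action."""
    return karma + KARMA_VALUES.get(action, 0)

karma = 0
karma = perform_action(karma, "give_water_to_beggar")  # karma is now 5
karma = perform_action(karma, "kill_innocent")         # karma is now -45
```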

One of my favorite games to deal explicitly with these kinds of ethical mechanics is Introversion Software’s Uplink.  Although it never reached mainstream success, Uplink is one of the most iconic games ever created about computer hacking.  The game puts the player in the role of an elite hacker specializing in corporate espionage.  By completing jobs, the player can earn money to upgrade her software and hardware in order to defeat increasingly complex security systems.  As one might suspect, essentially everything the player does in the game is framed as being illegal within the context of the game world.  The game, however, provides an alternate ethical system in the form of the player’s “Neuromancer rating.”  This rating, which changes over time as the player completes missions, purports to evaluate the player’s actions not on their legality or their conformity to broader societal ideals, but on how they conform to the ideals of the hacker community.

Scholars such as Steven Levy have noted that hackers do, in fact, tend to have strong commitments to ethical standards that differ somewhat from those of society at large.  This “hacker ethic” is based upon the ideal that access to computers and information should be unrestricted and universal.2  This often brings hackers into conflict with other organizations over matters such as copyright, where freedom of information is restricted in favor of other societal values that are deemed more important.

Playing Uplink with an understanding of the hacker ethic, however, the player may find that the Neuromancer rating seems somewhat arbitrary and unpredictable.  While it seems appropriate that stealing research information from one company and sharing it with its competitors would improve your ethical standing, it would stand to reason that the opposite, destroying information to prevent anyone from benefiting from it, would be bad.  Surprisingly, both of these acts improve your Neuromancer rating, whether you are making information more freely available or not.  With the ethical implications of individual missions being difficult to determine without a considerable amount of trial and error, the Neuromancer rating serves very poorly as a moral compass.

In actuality, the Neuromancer rating has little to do with the player’s actions themselves; instead, it keys entirely on the target of those actions.  Any attack on a corporation boosts the player’s Neuromancer rating.  Any attack that targets an individual or another hacker drops it.  This system is problematic for a number of reasons.  First, it implies a strict “us versus them” relationship between hackers and corporations, which is an overly simplistic (though not entirely uncommon) view of hacker culture.  While the two are often at odds, this conflict is a result of clashing ethical systems, rather than a core tenet of hacker ethics.
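
Reconstructed purely from how the rating behaves in play (Introversion’s actual code may well differ), the logic seems to reduce to something like this sketch:

```python
# Target-based rating: the nature of the action never enters the calculation.

def update_neuromancer(rating: int, target: str) -> int:
    """Adjust the rating based solely on who the mission targets."""
    if target == "corporation":
        return rating + 1  # any attack on a corporation reads as "ethical"
    if target in ("individual", "hacker"):
        return rating - 1  # any attack on a person reads as "unethical"
    return rating

rating = 0
rating = update_neuromancer(rating, "corporation")  # steal their research: up
rating = update_neuromancer(rating, "corporation")  # destroy their research: also up
rating = update_neuromancer(rating, "hacker")       # forge credentials for a friend: down
```

The action itself never appears in the calculation, only the target, which goes a long way toward explaining why the rating feels so arbitrary in play.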

Additionally, the Neuromancer system lacks internal consistency.  Helping to track down another hacker is considered unethical, but so, strangely, is helping another hacker by creating fake credentials.  Clearing a criminal record, on the other hand, seems to be considered good, even though it targets an individual in much the same way as the fake-credentials mission.  In the end, there is no reliable way to determine the ethicality of an action by applying a general ethical framework.  The player can do little else but try each mission and attempt to divine its effect on her Neuromancer rating.

While I find the idea of the Neuromancer rating quite intriguing, its implementation in Uplink creates a problematic ethical system that is neither useful to the player nor representative of the hacker community.  To the developers’ credit, the player can actually piece together a more interesting and nuanced view of hacker ethics by ignoring the game’s primary ethical mechanic and instead paying attention to the small bits of static narrative inserted throughout the game.

References

1. Homo Ludens: A Study of the Play-element in Culture. Johan Huizinga.
2. Hackers: Heroes of the Computer Revolution. Steven Levy.

The Beginning and End of the Internet

A few weeks ago, I was able to attend the 2013 Frontiers of New Media Symposium here at the University of Utah.  Due to growing concerns over censorship and surveillance in the wake of the recent NSA scandal, the symposium, which looks at Utah and the American West as both a geographical and a technological frontier, took on a somewhat more pessimistic tone than in previous years.  Its very apt theme, “The Beginning and End(s) of the Internet,” refers to Utah’s dual status as home to one of the original four nodes on the ARPANET and as the future site of the NSA’s new data storage center, which, as a number of presenters pointed out, may indeed be part of the end of the open Internet as we know it.

While the main themes of the symposium were censorship and surveillance, the various presenters connected these themes to a much broader scholarship.  As always, the importance of physical geography to online interactions was a frequently discussed topic.  With the knowledge that the NSA has an extensive ability to surveil any data that is stored in or even passes through the United States, discourse that envisions “the Cloud” as an ethereal, placeless entity becomes very problematic (or at least more so than it already was).  There was also considerable discussion of the ways in which the Internet and online behaviors have changed since the early days of the Web.  While there was not always consensus on how accurate our vision of the early Internet as a completely free and open network ever was, everyone seemed to agree that a great deal has changed, perhaps irrevocably.  As Geert Lovink pointed out in his keynote, people no longer “surf” the Internet.  They are guided through it by social media.  Additionally, the user base of the Internet is shifting away from the United States and the West.  Brazil is trying to create its own Internet, while oppressive governments around the world are essentially trying to do the same thing that the NSA has apparently been doing for years.

With so many weighty matters on the table, there was certainly a lot to take away.  One theme that stood out to me, however, was the importance of understanding code in relation to Internet participation.  This might seem to run counter to most current discourse surrounding the Internet, in which technologies like Dreamweaver, WordPress, and wikis are lauded for their ability to lower the barriers to entry for Internet participation and almost completely remove the need to learn code at all.  Why does anyone need to learn HTML in this day and age?

One of the topics that I always try to stress in my classes is media literacy.  In most popular discourse about Internet literacy, there is an inherent assumption that kids who grew up after the birth of the World Wide Web are endowed with this ability from an early age, generally surpassing their parents and older siblings by virtue of being “born into” the Internet age.  Internet literacy, however, is much more than the ability to open up a browser and update your Facebook page.  Literacy is not simply passing the minimum bar for using a medium.  It also involves being able to create and understand messages in that medium.  While the basic ability to use the Internet is becoming more and more widespread, the prevalence of phenomena such as cyberbullying, phishing, and even trolling demonstrates significant and potentially dangerous deficiencies in many people’s understanding of how the Internet works.

While technologies such as Facebook, Instagram, and Wikipedia give many users the ability to become content creators, this ability comes at a price.  In exchange for ease of use, these technologies dramatically limit the kind of content that can be created, as well as the social and legal contexts in which this content can exist.  Lovink referred to these restrictions as a loss of “cyber-individualism,” where the personal home page has been replaced by the wiki and the social network.

These changes were perhaps best illustrated by Alice Marwick’s presentation on the transition from the print zines of the 1980s to the modern blog.  Print zines, which were generally small, handmade magazines created on photocopiers, embodied the punk-rock ethic of do-it-yourself culture.  Zines most often had a subversive or counter-cultural aesthetic, and many had a strong feminist thread running through them.  Many of these same values and aesthetics migrated to early web zines and personal homepages, spawning a number of feminist webrings (remember those?).  However, as homemade HTML pages slowly gave way to blogging software and other accessible content management and hosting services, the nature of these publications changed.  While zines and many of the early text files that were passed around the Internet were largely anonymous, true anonymity on the Internet had become difficult long before the NSA started undermining online security.  Most bloggers don’t host their own sites, which means that their content has to comply with the rules of Blogger, LiveJournal, Tumblr, or whatever other service they might be using.  These rules are in turn shaped by legal and economic factors that rarely shift in favor of small content creators.  The subversive, counter-cultural, feminist, and anti-capitalist themes that were a significant part of zines as a medium are rarely found in the modern blogosphere.

Fortunately, there are small but determined groups that still embrace the do-it-yourself ethic as a means of self-expression.  In the field of videogames, some of the most interesting (though often overlooked) games are dubbed “scratchware” after the Scratchware Manifesto, which condemned many of the destructive industry practices that arose in the 1990s and demanded a revolution.  Though the videogame industry itself has done little to address most of the issues brought up in the Manifesto, a number of talented creators have taken its ideas to heart, including Anna Anthropy, author of Rise of the Videogame Zinesters and developer behind games such as Mighty Jill Off, dys4ia, and Lesbian Spider-Queens of Mars.  Although much of what was zine culture has been absorbed into the more homogeneous mainstream, there are still people out there bringing the zine into new media.

Even if you don’t necessarily fall into the counter-cultural zinester category, if you participate in Internet culture, it is important to learn about code, even if all you learn is HTML.  If not, you will always be limited to participating on someone else’s turf, whether it be Twitter, Google, or Facebook (all of which, at this point, might as well be the NSA).  With sites like W3Schools and Codecademy offering free introductory coding courses, the only excuse for not knowing how to code is simply not trying.

While it may not be possible to return to the (possibly mythical) beginnings of the Internet, it needn’t end here.  Anyone can code.  Anyone can hack.  Clouds can be dismantled.  Censorship can be circumvented.  It’s time to take the Internet back.

Not a Game

Over the last few months, an increasing number of people have started to ask my opinion (or let me know all about theirs) regarding the boundaries of videogames as a medium.  More often than not, this means affirming someone’s statement that “X is not a game.”  Indeed, there seems to be a recent push to draw a line in the sand, with games on one side and “non-games” on the other.  Nintendo president Satoru Iwata applied this term to games like Animal Crossing, which don’t “have a winner, or even a real conclusion.”  Certainly, Iwata is not the first person to attempt to place less conventional games into a new category.  As I have mentioned before, Maxis referred to a number of their products, such as SimAnt, not as games but as “software toys.”  Recently, however, it seems that people have taken this as a call to erect hard-and-fast barriers between these different categories, lest some unsuspecting fool accidentally mistake something for a game that clearly is not.

These attempts to define games (though “redefine” would be a much more accurate description of the process) rest on a number of rather dubious assumptions.  First and foremost is the assumption that this new, narrower definition somehow more accurately captures what the word “game” means.  However, this seemingly cut-and-dried definition bears little resemblance to the word’s historical usage or to the actual practice of playing games.  Games have never been limited to only those activities in which there is a clear winner.  To assume such a thing is not only presentist, but also a rather modernist, male-centric way of looking at games.

So what do I mean by all these academic sounding words?

Basically, by declaring that “games have winners and this is the way it’s always been,” we automatically invalidate vast numbers of traditional activities normally categorized as games, especially those traditionally associated with girls.  Playing house, jumping rope, and clapping games have long been important parts of children’s culture.  Some of these, like clapping games, even share many similarities with modern cooperative games.  There is a fairly clear goal (completing the clapping pattern) that requires both players to perform a task with a considerable amount of skill.  There is also a very clear failure state when one of the players misses a beat.  Indeed, if we break such games down to their basic rules, stripping away the layers of gendered meaning, there are very few fundamental differences between clapping games and Guitar Hero except that one is digitally mediated and the other is not.
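
To make the structural similarity concrete, here is a hedged sketch of the loop the two games share; only the input device differs, and all of the names are illustrative.

```python
from itertools import cycle

def play_pattern(pattern, read_input):
    """Return True if every beat is matched, False on the first miss."""
    for expected in pattern:
        if read_input() != expected:
            return False  # the failure state: someone missed a beat
    return True  # the goal: the full pattern is completed

# A clapping game reads hands; Guitar Hero reads a plastic guitar.
# Here, a toy "player" that always makes the right move:
pattern = ["clap", "cross", "clap", "slap"]
perfect_player = cycle(pattern)
print(play_pattern(pattern, lambda: next(perfect_player)))  # True
```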

[Image: Playing House with Mario]

While there are clearly issues with both historical fidelity and gender politics in many of the ways people try to draw boundaries between “games” and “non-games,” it’s not just the way we draw lines that’s problematic; it’s the fact that we’re trying to draw lines at all.  The idea that everything can be accurately categorized and placed into a grand, cohesive narrative is one of the hallmarks of modern thought.  As someone with a significant leaning toward the postmodern, I find any hard-and-fast definition of “what is and what is not a game” hard to reconcile with reality.

Let’s say for the sake of argument that we want to enforce a strict definition of “games” and “non-games.”  We can use Iwata’s criteria of having both a winner and a real conclusion.  A winner can be either one player beating another or one player overcoming the game.  A conclusion is a bit more vague, but it generally encompasses some degree of finality: you have achieved the main objective of the game and can consider your experience somehow completed.  This does not necessarily mean that there is nothing left to do in the game, but there is a sense of arriving at some agreed-upon end point.
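
Taken literally, these criteria amount to a simple two-input predicate; the trouble, as the examples that follow show, is deciding what the inputs should be for any given game.

```python
# Iwata's criteria as a literal predicate. Whether the inputs are true
# for a given game is itself a judgment call, which is where it wobbles.

def is_game(has_winner: bool, has_real_conclusion: bool) -> bool:
    return has_winner and has_real_conclusion

print(is_game(True, True))    # Super Mario Bros.: you can rescue the princess
print(is_game(False, False))  # standard Tetris: the blocks just keep coming
```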

Naturally, many games fit into this mold rather nicely.  Super Mario Bros. is a game because you can win by reaching the princess, and although she gives you a new, harder quest, it is really just replaying the game with a slight variation.  Finding the princess is easily identified as a conclusion to the overarching game narrative.  Likewise, Metroid, Mega Man, and many other games have a clear win state and an easily identifiable conclusion, often explicitly labeled “The End,” after the style of old movies.

[Image: Tetris]

Other “games” don’t fit so well into this definition.  Tetris, in many of its incarnations, has no way to win (though, as my sister once demonstrated to my surprise, the NES version actually reaches a sort of win state where little dancers rush out on the stage and do a victory dance).  It is certainly easy to lose, but no matter how long the game goes on, the blocks just keep coming.  If you can’t win at Tetris, what’s the point of playing?  Is it to beat your high score?  Is it to beat your friends’ high scores?  Is it just to play around for fun for as long as you feel like it?  None of that sounds like our hard definition of a game.  What about other notable games like World of Warcraft?  How do you beat WoW?  Is it merely a question of defeating all the bosses?  Is it getting your character up to the maximum level cap?  Can you only beat the game if you complete every achievement ever?  Does that mean that World of Warcraft is a game that only one person has ever won?  While there is certainly some logic in each of these definitions, none of them really seems to encompass what WoW is all about.  Some of the main motivations for playing the game are social interaction with other players and self-expression through creating and advancing an avatar.  Does that make World of Warcraft a sandboxy non-game like Minecraft or even…Second Life?

Any kind of definition for games that potentially leaves out such noteworthy titles as Tetris and World of Warcraft should immediately raise a few red flags.  While you could argue for a more precise definition than Iwata’s, adding more and more levels of nuance is unlikely to unmuddy the waters.  Can you draw a clear line between a very linear game and an interactive story?  Can you identify the key differences between a game like Warcraft III and a non-game like SimAnt?  Can you say that a bunch of college guys tapping on plastic guitars are playing a game, while a bunch of grade school girls clapping their hands are not?

Names and categories are important to the way we create meaning, and in general, the term “game” is a fairly good category (the term “videogame” can be a bit more trouble, but that’s a different debate).  I am clearly more interested in studying games than agriculture.  I would even concede that certain things are more “gamey” than others.  I also have no objections if developers want to call the things they create “software toys” or “non-games” if that is how they want to situate themselves in relation to the existing games ecosystem.  Where I take issue is when one group sees it as proper and necessary to deny the use of a word for some object to which it can certainly be applied.  I consider this an example of what Donna Haraway (quoting Watson-Verran) terms “a hardening of the categories.”1  Words and meanings are not static, nor are they apolitical.  Telling someone that something is “not a game” is not a statement of fact.  It is a value-laden assertion.

References

1. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. Donna Haraway.

Playing the Past

[Image: Joystick]

If you’ve followed my blog for any length of time, you’ve probably noticed that I am quite interested in history and that probably about a third of my posts are inspired by Play the Past, a collaborative blog about videogames, history, and the places where they meet.  Starting this week, I am going to be a regular contributor to Play the Past, which means you should be seeing my stuff there about once a month.  I currently have a post up about Historical Wargaming and Asymmetrical Play, which manages to incorporate two of my obsessions, Axis & Allies and Scandinavian history, into a single post.  Now if I can just find a way to write a post about Saturday morning cartoons, Gruyere cheese, and dinosaurs, I’ll be in business.

Hopefully, having a deadline to post over there will help me be more regular about posting over here, as well.  As you may have noticed, I haven’t been posting very regularly at all since I began working on my Master’s Thesis, and pretty much stopped during the last semester of my Master’s program.  Now that I’ve finally managed to sort out that whole mess, I will hopefully not be quite so sick of writing about videogames and I might be able to squeeze out a post from time to time.

I’m very excited to be a part of Play the Past, which already includes a number of scholars whose work I’ve admired for some time.  A few posts have even been required reading for some of the classes I’ve taught over the last few years (they’re also usually a bit more accessible to undergrads than a journal article).  Hopefully, many more amazing posts and interesting conversations will come out of Play the Past in the future, and I’m very glad to be a part of it.

Game Design vs Whaling

[Image: Save the Whales]

Earlier this week, Mike Rose posted an excellent article on Gamasutra about the ethics of free-to-play game design. Despite the similarity in name, free-to-play games are not the same as freeware games. While the latter are games that are actually given away by their developers, free of charge, free-to-play games entail a business model in which the game itself can be downloaded and installed for free, but (rather ironically, I might add) actually playing the game involves in-game purchases of some kind. The nature of these purchases varies from game to game, ranging from purely cosmetic status symbols to content without which the game is difficult or impossible to play. While most players, even in successful games, tend to spend relatively little money on such purchases, a small percentage of the player base spends massive amounts of money playing the game. These high-spending players are referred to as “whales.”

Rose’s article, which was the culmination of two months of work, focuses specifically on the ethics of a business model that exploits whales, and those among the whales who are not just big spenders, but compulsive spenders. The individual stories he relates are typical (but certainly no less disturbing) tales of addiction. Savings are lost, relationships are strained, and a once enjoyable pastime turns into a spiral of compulsion and depression.

The article comes at a very timely moment. Among developers, the free-to-play model is increasingly seen as the way of the future, ready to sweep away the traditional purchase model just as that model swept away the coin-op arcade. At the same time, players seem to be increasingly fed up with in-game purchases. Over the last few years, the exploitative nature of the free-to-play model has become one of the topics my students most frequently ask me about in class. It has also become one of the topics that students are most outspoken about. While students in an upper-level videogame studies course are hardly a representative sample of the game-playing public, these trends suggest to me that unrest and dissatisfaction with the free-to-play model are far more widespread than developers believe them to be.

The discussion on Gamasutra also seems to support my assumptions about the public perception of free-to-play games. Although there are some who defend the model (some being obvious trolls), most of the comments seem to echo the dissatisfaction of my students. In all this discussion, however, the focus seems to be on whether or not the plight of the whales is sufficient to render the whole business model unethical. While there are certainly compelling arguments that it is, I would argue that the flaws in the free-to-play model go beyond encouraging blatantly self-destructive behavior.

[Image: Pushbutton 2012 Mobile Games Panel]

Last fall, I attended the 2012 Pushbutton Summit, where I sat in on a presentation about mobile game design. Unfortunately for me, the presentation had almost nothing to do with designing game mechanics or virtual environments; rather, it was about how to tweak your game so as to maximize in-game spending by players. The panelists discussed how “AAA games don’t take advantage of the whales” and the optimum amount of free in-game credit to give players to get them hooked on paid content (apparently it’s about five dollars, if you’re wondering). The recurring themes were transitioning players from non-paying (which they referred to as “minnows”) to paying (or “dolphins”) and ultimately to whale status, and setting up your purchases so as to funnel away the maximum amount of available funds from each of these groups. In other words, getting the dolphins to spend as much as they are willing to spend, and making it possible for whales to theoretically spend forever.
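
The funnel the panelists described boils down to a segmentation along these lines; the five-dollar hook was their figure, but the tier thresholds in this sketch are hypothetical, since no exact cutoffs were given.

```python
# Hypothetical sketch of the minnow/dolphin/whale segmentation.

FREE_STARTING_CREDIT = 5.00  # the "optimum" hook, according to the panel

def classify_player(total_spend: float) -> str:
    if total_spend == 0:
        return "minnow"     # non-paying
    if total_spend < 100:   # hypothetical threshold
        return "dolphin"    # paying, but bounded
    return "whale"          # the tier the model lets spend forever

print(classify_player(0))     # minnow
print(classify_player(20))    # dolphin
print(classify_player(2500))  # whale
```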

Not surprisingly, I disagreed with a great deal of what was presented in the panel. Any business model that is designed to take a theoretically infinite amount of money from customers warrants intense scrutiny, at the very least. However, the problems don’t stop there. Trying to squeeze every last cent out of the dolphins and minnows before making them ragequit isn’t exactly my idea of responsible (or sustainable) game design, either. There’s nothing wrong with wanting to make a product that people want to buy and spend their hard-earned cash on. However, when the focus shifts from making a product to simply collecting the money, there’s a problem. As one commenter put it, designers move from being game designers to simply being revenue designers.

I have argued elsewhere about the problems that come from fundamentally shifting the business model of the videogame industry (not that it couldn’t be shifted in positive ways, but that rarely seems to be the issue). I generally find the shift to the free-to-play model one of the more disturbing trends in videogames today, especially when consequential projects like the OUYA console make free-to-play a mandatory aspect of the games on their platform. Fortunately, free-to-play seems to have a different connotation on the OUYA, as most of the popular games are really more of a paid download with a free demo version (though there are a number of other models, such as “nagware,” as well). I find it an interesting and significant rhetorical move that, since their original Kickstarter campaign, the OUYA team seems to have changed “free to play” to “free to try.”

The free-to-play model is hopefully starting to receive the public scrutiny that it desperately needs. I only hope that as we debate its ethics and utility, it’s not only the whales that we are trying to save, but the dolphins and minnows as well.