Old Weird Games: Princess Maker 2

[Image: Princess Maker Street Duel]

Last week at GDC, Leigh Alexander announced the reboot of Offworld, the videogame offshoot of Boing Boing.  I was pleasantly surprised to discover that one of the site’s first articles was on Princess Maker 2, an obscure game most people (including most developers I know) have never heard of.  This is hardly surprising, because while the series was quite successful in Japan, only the second game was ever translated into English, and the localization project ultimately fell apart before the game was officially released.  Thus, the only thing that remains of the English version of Princess Maker 2 is a leaked copy of the mostly completed game.

[Image: Princess Maker Housework]

For most Americans encountering the game for the first time, the experience is likely to be bizarre.  It is set in a fantasy world modeled after a medieval European city as imagined by Japanese developers for a Japanese audience.  There is a strange collision of cultural tropes and social mores that could be uncomfortable for some players.  The game juxtaposes Catholic nuns with Roman(ish) gods and Slavic house spirits.  While the level of violence in the game is fairly tame compared to North American games (Princess Maker 2 was released the same year as Doom), the fact that the game deals with sexuality at all was rather scandalous for the time.  Coincidentally, had the game actually been released in North America, it would have hit the shelves at roughly the same time that US Senators were calling Night Trap, a game whose sexual content consisted of a girl in a nightgown, “sick” and “disgusting.”

Perhaps the most problematic aspect of the game is the fact that the player is put in the role of the protagonist’s adoptive father.  You don’t actually play as the princess, but as the titular princess maker.  The player is not invited to identify with the girl on screen so much as to think of her as a very complicated Pokémon or virtual pet.  As Kim Nguyen points out, this can get creepy really fast, depending on how you as the player-dad decide to shape your daughter.  By taking away direct player control of the aspiring princess and instead routing that control through an unseen male caretaker, the game makes it very difficult to avoid a certain degree of voyeurism and objectification as you silently watch your daughter obediently perform any task you put on her monthly schedule.

While this dominant reading of Princess Maker 2 (if I may put on my Stuart Hall hat for a moment) reflects the patriarchal nature of the relations of production and knowledge frameworks involved in the creation of the game, it is certainly not the only possible reading.  Indeed, as Hall notes, one of the most significant political moments is when these dominant ways of understanding media messages begin to give way to oppositional readings that consciously push back against the dominant-hegemonic understanding.1

[Image: Princess Maker Graveyard]

Although my own reading of Princess Maker certainly contrasts with the dominant reading, I hesitate to call it oppositional, at least in the way Hall uses the term.  My counter-reading of the game came not from my critical perspective as an academic, but from my very uncritical perspective as a teenager.  I learned about Princess Maker from a metalhead friend of mine (who also introduced me to games such as Ultima) and was intrigued by the idea of a game that followed a female protagonist who went about doing traditionally feminine things.  Approaching it as I would any Western game, I made two broad assumptions about Princess Maker.  First, I assumed that any game with the word “princess” in the title was created with girls as the intended audience.  Second, I assumed that the character in the middle of the screen, the one I looked at for most of the game, was supposed to be my avatar.  Although I read the opening narrative about being an injured warrior gifted with a daughter from the heavens, I incorrectly assumed that the father figure was meant as a diegetic way of delivering information or implementing certain game mechanics, much like Deckard Cain in Diablo or the Zerg Overmind in Starcraft (that role is actually performed by Cube, the family butler).  As far as I was concerned, I wasn’t playing as a retired war hero – I was a magical ten-year-old girl and I was going to grow up to be a princess!

On my first attempt at playing the game (not counting several times I mismanaged my funds and starved to death), I did my best to play the game as I believed it was meant to be played.  I spent a lot of time teaching my daughter (for the sake of consistency, I’ll try to avoid slipping into the first person) art and decorum while giving her lots of free time to keep her stress levels down.  When she turned fourteen, she developed a rivalry with a dancer in town and became determined to beat her at the harvest festival dance competition.  Although my daughter was an amazing artist, she wasn’t particularly good at dancing.  Rather than encouraging her to stick with her art skills, I decided to support her new dancing obsession.  She lost horribly to her rival the first year, so I was determined to help her win before her eighteenth birthday.

[Image: Princess Maker Salon]

For the next few years, we focused on making her the perfect dancer.  A dancer needed artistic skill, constitution, and charisma.  She stopped working for the mason (which had lowered her charisma) and started working at the salon instead.  All our spare cash went to buying her more dance lessons.  As her final competition approached, I saved up some money to buy her a nice silk dress instead of the old cotton one she had worn for years.  Unfortunately, the dress didn’t fit.  I put her on a strict diet, and just in time for the competition, she was finally able to fit into her new dress.  As I recall, she beat her rival that year, but didn’t quite manage to take first place in the competition.

After she left the house, she became an assistant dance instructor.  Although she was incredibly skilled at dancing, all the dieting had made her kind of weak and frail, so she was never really able to make it as a professional dancer.  She was disappointed.  I was disappointed.  Even the gods were disappointed.  It sucked.

That was the last time I listened to what the game wanted me to do.

The next time I played, I gave my daughter a healthy, robust diet right from the start.  Though they were not the most ladylike jobs, she worked on the local farm and as a lumberjack.  Once again, she became rivals with the dancer, but I decided to enter her in the painting contest instead of the dance competition.  Although she was initially upset, her disappointment faded when she took first place in the art competition.  I used the prize money to buy her a well-rounded education, while saving a little for us to go on the occasional vacation.  She learned to navigate the intrigue of the royal court and went adventuring beyond the city walls.  The game kept giving me subtle and not-so-subtle hints that my daughter was too chubby, but this time, I ignored them.

[Image: Princess Maker Dance Party]

When her final harvest festival came around, I finally let her compete in the dance competition.  The only dress in the shop that would fit her was the slightly frumpy cotton dress, but she had a stronger constitution than any man in the kingdom (she had, after all, won the combat tournament the year before).  She may not have been the thinnest or the most feminine dancer in the competition, but she blew everyone else away with her technique.  She easily won, more than tripling the score of her nearest competitor.

In the end, she didn’t stick with dance, but became a professional writer instead.  Not only did she have a successful career, she ended up marrying the prince!  Yes, my adventurous, husky, wood-chopping, poet daughter had become a princess.  I certainly don’t think this was exactly how the developers envisioned the game being played.  After all, who creates art assets for half a dozen dresses that no one can fit into?  Still, with the hundreds of different mechanics and statistics that the developers had woven together, this was one of the possibilities that they had enabled.

This is the reason I frequently bring up Princess Maker 2 in game design classes.  Although I run the risk of being that hipster guy who is always talking about unreleased Japanese games that you’ve probably never played, Princess Maker manages to interconnect its mechanics in ways that most games don’t, transcending the common dichotomies between masculine and feminine traits.  The sensitivity you learn working at a hair salon can help you sense magical creatures or hide from undead warriors.  The conversation skills you pick up waiting tables at a bar might help you talk your way out of a dangerous fight, while the physique you build chopping wood can make you a strong, athletic dancer.
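
To make that design pattern concrete for students, here is a minimal sketch of what interconnected attributes can look like in code.  It is written in Python with invented stat names, formulas, and thresholds rather than the game’s actual math; the point is simply that a single stat feeds several otherwise unrelated activities.

```python
# A minimal, hypothetical sketch of "interconnected" attributes.  The stat names
# echo Princess Maker 2, but the formulas and thresholds are invented for
# illustration; the real game's math is certainly different.

from dataclasses import dataclass

@dataclass
class Daughter:
    art: int = 0
    constitution: int = 0
    charisma: int = 0
    sensitivity: int = 0
    conversation: int = 0

def dance_score(d: Daughter) -> int:
    # Dancing draws on artistry, stamina, and stage presence, so a daughter
    # built up by farm work or salon shifts is never "off-build" for the festival.
    return d.art + d.constitution + d.charisma

def senses_spirit(d: Daughter) -> bool:
    # The same sensitivity trained at the hair salon also matters while adventuring.
    return d.sensitivity > 40

def talks_down_bandit(d: Daughter) -> bool:
    # Conversation skills picked up waiting tables double as a way out of a fight.
    return d.conversation + d.charisma > 70
```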

[Image: Princess Maker Glacier]

This overabundance of attributes also allows Princess Maker to subvert a number of problematic tropes that are common in RPGs.  In your standard Dungeons & Dragons-inspired RPG, you take control of a heroic warrior who is still considered heroic despite the fact that he spends most of his time roaming the land, killing things and taking their stuff.  While there are often plenty of diegetic reasons to avoid combat, procedurally, murder and theft are unequivocally positive activities.  Princess Maker allows the player to do some adventuring and dungeon crawling, but killing monsters, even evil demons, gives your daughter sin, one of the most negative attributes a character can have.  You can certainly still play your character D&D style, accumulating a small fortune by slaughtering the local fauna, but people will treat you like the bloodthirsty barbarian you are.  Although the sin mechanic is still a bit of a heavy-handed way of addressing the problem, it is much less so than enforcing the player’s heroic ethic through the more common Zelda-style unkillable NPCs.  Instead, it makes stealth and diplomacy much more attractive strategies for adventuring and turns violence (of which there is still plenty) into the last resort that we would expect it to be for our heroic protagonist.  Nor does it take away the option of playing as a ruthless bandit who attacks merchants on the road to fund her lavish lifestyle.
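
For comparison, here is a small hypothetical sketch (again in Python, with invented keys, numbers, and messages rather than anything taken from Princess Maker 2) of how a sin-style cost on violence changes the strategic landscape: fighting still works, but it carries a social price that stealth and diplomacy do not.

```python
# Hypothetical sketch of a sin-style cost on violence.  Keys, numbers, and
# messages are invented, not taken from Princess Maker 2; the point is that
# combat still works but carries a social price that stealth and diplomacy avoid.

def resolve_encounter(daughter: dict, monster: dict, approach: str) -> None:
    if approach == "fight":
        daughter["gold"] += monster["loot"]
        daughter["sin"] += monster["sin_cost"]   # even slaying demons costs something
    elif approach == "sneak":
        daughter["stress"] += 5                  # slower and tiring, but morally clean
    elif approach == "talk":
        daughter["reputation"] += 1

def townsfolk_reaction(daughter: dict) -> str:
    # High sin changes how people treat you, rather than making NPCs unkillable.
    if daughter["sin"] > 50:
        return "The shopkeeper eyes you warily and doubles her prices."
    return "The shopkeeper greets you warmly."

daughter = {"gold": 0, "sin": 0, "stress": 0, "reputation": 0}
resolve_encounter(daughter, {"loot": 30, "sin_cost": 8}, "fight")
print(townsfolk_reaction(daughter))
```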

I think Princess Maker 2 can be a valuable game for designers, but I also think it can be a valuable game for girls.  While there are plenty of significant issues with the design of the game, it’s important to remember that media texts like games are not closed ideological systems, but inherently polysemic constructions that reflect the contradictions of their production.2 Videogames, like other media artifacts, are sites of negotiation, interpretation, and resistance, and I think there are a lot of reasons that games like Princess Maker are worth fighting for.

[Image: Princess Maker Stats]

Along with better-known games from the 1990s like Doom and Quake, Princess Maker 2 was also a contemporary of Barbie Fashion Designer. That game, an unexpected success with young girls, spawned countless other “Pink Games” hoping to cash in on this new and mysterious non-male demographic. While there are certainly good things to be said for creating a game for an underserved and marginalized group, Barbie Fashion Designer was (not surprisingly) very problematic in its depiction of gender norms. Additionally, as Justine Cassell and Henry Jenkins point out, it “restricts the potential for appropriative and resistant play, facilitating the creation of ‘miniskirts and wedding dresses’ but not of the work clothes needed to create ‘Barbie Auto Mechanic or Barbie Police Officer.’”3 Although a successful American localization of Princess Maker 2 might have provoked the same kind of moral panic created by Night Trap, Princess Maker succeeds in precisely the way that Barbie Fashion Designer fails.  Although the game defines a certain feminine ideal and reinforces this ideal through point systems and in-game messages, it gives the player a remarkable ability to reinterpret and resist that ideal.  Your daughter could marry the prince, or she could remain single.  She could grow up to be a dancer, the minister of state, a bandit, or a dominatrix.  I doubt that any Barbie game will ever give the player such latitude.

I think Alexander sums it up best in the last line of her article:  “There’s something very special about this genre in general, and how it lies at the intersection of many players’ desire to have control with their desire to give care.”  Princess Maker 2 is a game fraught with contradictions.  It can be both deeply satisfying and deeply unsettling.  I’m excited to look into Long Live the Queen, which follows in the tradition of child-raising sims like Princess Maker but removes the problematic unseen father figure and lets the player play as the princess herself.  I hope, though, that this won’t be the extent of Princess Maker’s legacy.  With its rich system of interconnected mechanics and its incorporation, however problematic, of many more feminine themes, Princess Maker is a worthwhile game for any designer to study.

 

References

1. Encoding/Decoding. Stuart Hall.
2. Feminist Media Studies. Liesbet van Zoonen.
3. Chess for Girls? Feminism and Computer Games. Justine Cassell and Henry Jenkins. From Barbie to Mortal Kombat: Gender and Computer Games.

On Ptolemy Bashing

[Image: Ptolemy]

It’s hard to discuss science communication without mentioning Carl Sagan.  To this day, he remains the quintessential example of the public intellectual.  Few, if any, examples of science communication have ever been as successful at connecting scientists and the general public as Sagan’s Cosmos: A Personal Voyage.  It took complex scientific knowledge and presented it in a way that was accessible to viewers as well as compelling and enjoyable to watch.  Through Cosmos, Sagan influenced not only the way we think about our place in the universe, but the way that we understand science itself.

During my most recent viewing of Cosmos, something stood out to me.  In the first episode, “The Shores of the Cosmic Ocean,” there is a scene where Sagan contemplates the Library of Alexandria and the scholars whose knowledge might have been contained therein.  He mentions the wonderful achievements of a number of great scientists before ending on a slightly odd note:

…and there was the astronomer Ptolemy, who compiled much of what today is the pseudoscience of astrology.  His earth-centered universe held sway for fifteen hundred years, showing that intellectual brilliance is no guarantee against being dead wrong.

This attitude toward Ptolemy is certainly not uncommon, but its appearance in one of the most important pieces of public science communication deserves some attention.  While Ptolemy did write books on astrology (as well as geography, optics, and harmonics), these were far less comprehensive and influential than his treatise on astronomy (or his work on Geography, for that matter).  As such, it is most likely Ptolemy’s Almagest, which describes the geocentric “Ptolemaic” model of the solar system, that earned him the role of villain in Sagan’s list of ancient scholars.  Of course, Sagan’s rebuke makes Ptolemy sound less like a scholar than like a sorcerer, fervently penning lies in some dark tower.  How did an astronomer become so reviled almost two thousand years after his death?  After all, no one would refer to Newton as a failed alchemist who was unable to grasp general relativity.  Since we know relatively little about Ptolemy’s life, it’s probably more productive to look for the answer at the end, rather than the beginning, of his astronomical model’s millennium-spanning reign.

While the Ptolemaic system was the dominant scientific model for understanding the universe for hundreds of years, it is mostly known today as the unscientific model from the Galileo affair.  The standard version of the story, which I remember being drilled into me since elementary school, is simply that Galileo was persecuted by the Church for teaching science that contradicted its own dogmatic view of the universe.  On one side, we have Galileo, science, and Truth.  On the other, we have the Church, the Pope, and their dogmatic geocentric theory.  Ptolemy, of course, had been dead for centuries, but he is generally judged as guilty as the rest of the anti-science camp, as if he had been briefly brought back to life merely to stand with Galileo’s other accusers.

And really, how could he not be guilty?  The Galileo affair is a rhetorical hammer of the finest quality.  Whether it is discussed amongst particle physicists or in my fourth-grade science class, it has transcended its status as a mere historical event and become a legend.  To oppose Galileo is to oppose reason itself.  Tradition, authority, vox populi – all of these things conspired against him, but in the end, he triumphed.  Those who stood against him, even metaphorically, were proved to be “dead wrong.”  It should be no surprise, then, that this hammer is wielded quite liberally by groups that are both extremist and unpopular.  Ironically, the groups that fall into this category are more often than not decidedly antiscience.

[Image: Dialogus]

The tension between science and antiscience is one that comes up frequently in the sociology of science.  Indeed, the mere suggestion by sociologists that science was influenced by cultural, political, or economic factors ignited the “Science Wars” of the 1990s, during which many of these sociologists were labeled as antiscience or anti-intellectual.  While these debates could certainly get quite heated, they generally stayed within academic communities.  A more concerning development for most sociologists came when they saw their arguments being appropriated by “conspiracy groups” seeking to take areas of scientific consensus and disrupt them with manufactured debate.

The most famous response to the latter of these two issues is Bruno Latour’s essay “Why Has Critique Run Out of Steam?” in which he laments the appropriation of what he sees as the tools of critique by groups such as climate change deniers and 9/11 conspiracy theorists.  He argues that much of the blame can be placed upon critical theorists themselves for creating a false dichotomy between “facts” and “fairies.”  The things they disagree with are treated as “fairies,” imaginary social constructions to which people attribute a power they don’t possess, while the ideas they agree with are treated as “fact,” real things that have real consequences in the world:

This is why you can be at once and without even sensing any contradiction (1) an antifetishist for everything you don’t believe in—for the most part religion, popular culture, art, politics, and so on; (2) an unrepentant positivist for all the sciences you believe in—sociology, economics, conspiracy theory, genetics, evolutionary psychology, semiotics, just pick your preferred field of study; and (3) a perfectly healthy sturdy realist for what you really cherish—and of course it might be criticism itself, but also painting, bird-watching, Shakespeare, baboons, proteins, and so on.1

As most matters of real concern don’t generally fit nicely into either category, this opens up the possibility of groups using whichever approach best serves their own interests.  At the same time, Latour sees this as one of the reasons that Science Studies is such an important field.  While many critics critique systems of oppression that they hate, those in the field of science studies are strong believers in their object of study, despite claims to the contrary.  As Latour notes, “the question was never to get away from facts, but closer to them.”

While Latour’s concern was people placing important ideas like global warming in the fairy category, Ptolemy bashing can be seen as an example of the opposite – taking the complicated and nuanced Galileo affair and rendering it an unquestionable historical fact.  In either case, essentializing seems to do little to help the cause of science or quell the conspiracy theorists.  Instead, what if we try to get closer to the facts through critique, as Latour suggests?

The most conspicuously opaque part of this puzzle is, of course, the legendary story of Galileo’s battle with the church.  Immediately, certain inconsistencies become apparent when we look closer.  For one, some of Galileo’s fiercest opponents were not clergy, but astronomers and other scientists, such as Magini and Ludovico delle Colombe.  Likewise, Galileo consulted a number of cardinals and other Church officials in his attempts to promote Copernican astronomy.  Indeed, some scholars have argued that the Galileo affair had less to do with astronomy than with the politics of the Catholic Church.2

And what of Ptolemy?  Was his astronomical model just a millennium-long dalliance into the realm of pseudoscience?  While this is not an uncommon explanation, it assumes a very linear and cumulative model of science.  Science historian Thomas Kuhn has argued that science is not cumulative, but rather operates under paradigms, or specific schools of thought.  When new empirical data throws a paradigm into crisis, scientists must shift to a new paradigm that can adequately explain this data.  As Kuhn notes, the first person to suggest a heliocentric model of the solar system was not Copernicus, but Aristarchus of Samos, who lived three centuries before Ptolemy.  Although his model was ultimately shown to be more accurate, there was no reason to abandon the simpler geocentric paradigm at the time, because while heliocentrism was plausible, it didn’t explain the astronomical data of the day any more accurately than geocentrism.  By the time of Copernicus, however, the Ptolemaic system was already in crisis, and the stage was set for a scientific revolution.3

[Image: Tychonic System]

While the Ptolemaic system was certainly incorrect, that doesn’t mean it wasn’t useful, nor was the Copernican system entirely correct by current standards.  While Copernicus placed the sun at the center of the solar system, he still thought of the planets not as bodies hurtling through space but as parts of great celestial spheres, rotating in place.  It was not until Tycho Brahe, a geocentrist, that the idea of immutable celestial orbs was challenged, and not until Johannes Kepler that planets were seen as having orbits, rather than being part of an orb.4 Indeed, geocentric models like Brahe’s Tychonic System would not be completely abandoned by scientists until more than a century after Galileo.  Forming the basis of hundreds of years of productive scientific work isn’t exactly what I’d call “dead wrong.”

So how does this analysis of astrophysical debates shed light on the antiscience debates of today? Of course, these debates can still be compared to the Galileo affair, but if we understand the affair as a deeply political situation that occurred in the context of a crumbling scientific paradigm, we get a different reading of our current plight. Debates about global warming, for instance, are certainly political, but they’re not happening in the midst of a scientific crisis. The current paradigm of human-influenced climate change does a pretty good job of explaining what’s happening. If anything, the comparison paints detractors not as modern Galileos, but as modern Ludovicos, trying desperately to resist a momentous discovery that threatens their power.

Likewise, understanding that science has a historical, cultural, and political context doesn’t weaken scientific thought.  It does, however, make artificial debate about scientific theories being “not settled” seem rather silly.  A handful of scientists opposing a stable paradigm isn’t a scientific crisis and it certainly isn’t unusual.  Refusing to act on scientific knowledge until it stands unchallenged makes about as much sense as waiting to move out of oncoming traffic until you can feel the cars.

If our aim is to further the cause of science and quell its detractors, dividing scientists into immaculate heroes and devious villains is probably not the most productive way to go about it.  I would rather we understand science in all its gritty details so that we can better understand how to get through the gritty details that stand in our way today.

 

References

1. Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern. Bruno Latour.
2. Galileo and the Council of Trent: The Galileo Affair Revisited. Olaf Pedersen.
3. The Structure of Scientific Revolutions.  Thomas Kuhn.
4. Kepler’s Move from Orbs to Orbits: Documenting a Revolutionary Scientific Concept. Bernard Goldstein and Giora Hon.

Coding Ethical Codes

As most sane people will tell you, videogames are quite different from real life.  Stepping into a virtual world means accepting that you are entering a space where the normal rules are temporarily suspended in favor of the game’s rules.  Huizinga calls this the “magic circle.”1 Thus, a moral person may do seemingly immoral things such as killing or stealing in the context of a game because exploring these issues is often the whole point of the game.  The way in which the rules of these virtual worlds are designed has a major impact on the overall experience of playing the game.

Often, the ethics of a virtual world are simply enforced upon the player by limiting certain kinds of interaction. In The Legend of Zelda, for example, the hero can (and often must) kill all manner of creatures with his sword and other weapons.  Other actions, such as killing a shopkeeper or stealing his goods, are not allowed.  The hero can swing his sword at him, but it simply passes right through him.  Killing innocent merchants is unheroic; therefore, the hero cannot do it, even if he tries.  Other games are less explicit with questions of ethics.  Villagers in Minecraft sell helpful items to the player, much like the merchants in Zelda.  They are also frequently in need of assistance to save their villages from the same monsters that the player has to deal with.  Although there is an implicit hero role for the player to fill, she needn’t actually help the villagers and is perfectly capable of killing them herself.  While playing the hero and having a flourishing village near her castle might be more advantageous, the player is just as free to take on the role of a murderous warlord, leaving a trail of lifeless ruins in her wake.

Still other games take a middle route, allowing the player to perform a wide range of actions, but providing an in-game ethical code to guide and evaluate the player’s actions.  These can be simple good-versus-evil meters, such as karma in Fallout 3 and the light-versus-dark mechanics in various Star Wars games, or they can be more complex, such as the eight virtues in Ultima 4.  In either case, such a system requires game developers to assign a certain moral value to the potential actions that the player can perform in the game.  The way that developers choose to define ethical behavior within the game world has a significant impact on the overall experience of playing the game.
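
As a concrete (and deliberately simplified) illustration, a good-versus-evil meter usually boils down to a designer-authored table that maps actions to moral values.  The sketch below is hypothetical Python, with made-up action names and weights rather than any particular game’s actual data, but it shows where the designer’s value judgments get encoded:

```python
# A bare-bones sketch of a designer-authored karma table.  The action names and
# weights below are made up for illustration, not any particular game's data.

KARMA_VALUES = {
    "give_water_to_beggar": +10,
    "finish_quest_peacefully": +25,
    "pick_pocket": -5,
    "kill_innocent": -100,
}

def apply_action(karma: int, action: str) -> int:
    """Update the good-versus-evil meter after the player acts."""
    return karma + KARMA_VALUES.get(action, 0)

karma = apply_action(0, "give_water_to_beggar")   # karma is now +10
```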

One of my favorite games to deal explicitly with these kinds of ethical mechanics is Introversion Software’s Uplink.  Although it never reached mainstream success, Uplink is one of the most iconic games ever created about computer hacking.  The game puts the player in the role of an elite hacker specializing in corporate espionage.  By completing jobs, the player can earn money to upgrade her software and hardware in order to defeat increasingly complex security systems.  As one might suspect, essentially everything the player does in the game is framed as being illegal within the context of the game world.  The game, however, provides an alternate ethical system in the form of the player’s “Neuromancer rating.”  This rating, which changes over time as the player completes missions, purports to evaluate the player’s actions not on their legality or their conformity to broader societal ideals, but on how well those actions conform to the ideals of the hacker community.

Scholars such as Steven Levy have noted that hackers do, in fact, tend to have strong commitments to ethical standards that differ somewhat from those of society at large.  This “hacker ethic” is based upon the ideal that access to computers and information should be unrestricted and universal.2  This often brings hackers into conflict with other organizations over matters such as copyright, where freedom of information is restricted in favor of other societal values that are deemed more important.

A player who approaches Uplink with an understanding of the hacker ethic, however, may find that the Neuromancer rating seems somewhat arbitrary and unpredictable.  While it seems appropriate that stealing research information from one company and sharing it with their competitors would improve your ethical standing, it would stand to reason that the opposite, destroying information to prevent anyone from benefiting from it, would be bad.  Surprisingly, both of these acts improve your Neuromancer rating, whether you are making information more freely available or not.  With the ethical implications of individual missions being difficult to determine without considerable amounts of trial and error, the Neuromancer rating serves very poorly as a moral compass.

In actuality, the Neuromancer rating has little to do with the player’s actions themselves; instead, it focuses on the target of those actions.  Any attack on a corporation boosts the player’s Neuromancer rating.  Any attack that targets an individual or another hacker drops it.  This system is problematic for a number of reasons.  First, it implies a strict “us versus them” relationship between hackers and corporations, which is an overly simplistic (though not entirely uncommon) view of hacker culture.  While the two are often at odds, this conflict is a result of clashing ethical systems, rather than a core tenet of hacker ethics.
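
The difference between the two scoring philosophies is easier to see in code.  The sketch below is speculative Python rather than Uplink’s actual implementation: the first function mirrors the target-based behavior described above, while the second is a hypothetical alternative that would score the act itself against an information-freedom ideal.  Mission names and values are invented.

```python
# Two scoring philosophies, side by side.  Neither is Uplink's actual code: the
# first mirrors the target-based behavior described above, while the second is a
# hypothetical alternative that scores the act itself against an
# information-freedom ideal.  Mission names and values are invented.

def neuromancer_style(target: str) -> int:
    # Only who you hit matters: corporations are fair game, people are not.
    return +1 if target == "corporation" else -1

def information_freedom_style(action: str) -> int:
    # An alternative that judges the action, not the victim.
    scores = {
        "leak_research": +2,      # information becomes more widely available
        "destroy_research": -2,   # information is lost to everyone
        "frame_individual": -3,
    }
    return scores.get(action, 0)
```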

Additionally, the Neuromancer system lacks internal consistency.  Helping to track down another hacker is considered unethical, but so, strangely, is helping another hacker by creating fake credentials.  Clearing a criminal record, on the other hand, seems to be considered good, even though it targets an individual in much the same way as the fake-credentials mission.  In the end, there is little way to determine the ethicality of an action by applying a general ethical framework.  The player can do little else but try each mission and attempt to divine its effect on her Neuromancer rating.

While I find the idea of the Neuromancer rating quite intriguing, its implementation in Uplink creates a problematic ethical system that is neither useful to the player nor representative of the hacker community.  To the developers’ credit, I think the player can actually find a more interesting and nuanced view of hacker ethics by ignoring the primary ethical mechanic and simply paying attention to the small bits of static narrative that are inserted throughout the game.

References

1. Homo Ludens: A Study of the Play-element in Culture. Johan Huizinga.
2. Hackers: Heroes of the Computer Revolution. Steven Levy.

The Beginning and End of the Internet

A few weeks ago, I was able to attend the 2013 Frontiers of New Media Symposium here at the University of Utah. Due to growing concerns over censorship and surveillance in the wake of the recent NSA scandal, the symposium, which looks at Utah and the American West as both a geographical and technological frontier, took on a somewhat more pessimistic tone than in previous years.  Its very apt theme, “The Beginning and End(s) of the Internet,” refers to Utah’s status as both one of the original four nodes on the ARPANET and the future home of the NSA’s new data storage center, which, as a number of presenters pointed out, may indeed be part of the end of the open Internet as we know it.

While the main themes of the symposium were censorship and surveillance, the various presenters connected these themes to a much broader body of scholarship.  As always, the importance of physical geography to online interactions was a frequently discussed topic.  With the knowledge that the NSA has an extensive ability to surveil any data that is stored in or even passes through the United States, discourse that envisions “the Cloud” as an ethereal, placeless entity becomes very problematic (or at least more so than it already was).  There was also considerable discussion of the ways in which the Internet and online behaviors have changed since the early days of the Web.  While there was not always consensus on how accurate our vision of the early Internet as a completely free and open network ever was, everyone seemed to agree that a great deal has changed, perhaps irrevocably.  As Geert Lovink pointed out in his keynote, people no longer “surf” the Internet.  They are guided through it by social media.  Additionally, the user base of the Internet is shifting away from the United States and the West.  Brazil is trying to create its own Internet, while oppressive governments around the world are essentially trying to do the same thing that the NSA has apparently been doing for years.

With so many weighty matters on the table, there was certainly a lot to take away.  One theme that stood out to me, however, was the importance of understanding code in relation to Internet participation.  This might seem counterintuitive given most current discourse surrounding the Internet, in which technologies like Dreamweaver, WordPress, and wikis are lauded for lowering the barriers to entry for Internet participation and almost completely removing the need to learn code at all.  Why does anyone need to learn HTML in this day and age?

One of the topics that I always try to stress in my classes is media literacy.  In most popular discourse about Internet literacy, there is an inherent assumption that kids who grew up after the birth of the World Wide Web are endowed with this ability from an early age, generally surpassing their parents and older siblings by virtue of being “born into” the Internet age.  Internet literacy, however, is much more than the ability to open up a browser and update your Facebook page.  Literacy is not simply passing the minimum bar for using a medium.  It also involves being able to create and understand messages in that medium.  While the basic ability to use the Internet is becoming more and more widespread, the prevalence of phenomena such as cyberbullying, phishing, and even trolling demonstrates significant and potentially dangerous deficiencies in many people’s understanding of how the Internet works.

While technologies such as Facebook, Instagram, and Wikipedia give many users the ability to become content creators, this ability comes at a price.  In exchange for ease of use, these technologies dramatically limit the kind of content that can be created, as well as the social and legal contexts in which this content can exist.  Lovink referred to these restrictions as a loss of “cyber-individualism,” where the personal home page has been replaced by the wiki and the social network.

These changes were perhaps best illustrated by Alice Marwick’s presentation on the transition from the print zines of the 1980s to the modern blog.  Print zines, which were generally small, handmade magazines created on photocopiers, embodied the punk rock ethic of do-it-yourself culture.  Zines most often had a subversive or counter-cultural aesthetic, and many had a strong feminist thread running through them.  Many of these same values and aesthetics migrated to early web zines and personal homepages, spawning a number of feminist webrings (remember those?).  However, as homemade HTML pages slowly gave way to blogging software and other accessible content management and hosting services, the nature of these publications changed.  While zines and many of the early text files that were passed around the Internet were largely anonymous, true anonymity on the Internet had become difficult long before the NSA started undermining online security.  Most bloggers don’t host their own sites, which means that their content has to comply with the rules of Blogger, LiveJournal, Tumblr, or whatever other service they might be using.  These rules are in turn shaped by legal and economic factors that rarely shift in favor of small content creators.  The subversive, counter-cultural, feminist, and anti-capitalist themes that were a significant part of zines as a medium are rarely found in the modern blogosphere.

Fortunately, there are still small but determined groups that embrace the do-it-yourself ethic as a means of self-expression.  In the field of videogames, some of the most interesting (though often overlooked) games are dubbed “scratchware” after the Scratchware Manifesto, which condemned many of the destructive industry practices that arose in the 1990s and demanded a revolution.  Though the videogame industry itself has done little to address most of the issues brought up in the Manifesto, a number of talented creators have taken its ideas to heart, including Anna Anthropy, author of Rise of the Videogame Zinesters and developer of games such as Mighty Jill Off, dys4ia, and Lesbian Spider-Queens of Mars.  Although much of what was zine culture has been absorbed into the more homogeneous mainstream, there are still people out there bringing the zine into new media.

Even if you don’t necessarily fall into the counter-cultural zinester category, if you participate in Internet culture, it is important to learn about code, even if all you learn is HTML.  If not, you will always be limited to participating on someone else’s turf, whether it be Twitter, Google, or Facebook (all of which, at this point, might as well be the NSA).  Sites that offer introductory courses in coding, like W3Schools and Codecademy, have made it so that the only excuse for not knowing how to code is simply not trying.

While it may not be possible to return to the (possibly mythical) beginnings of the Internet, it needn’t end here.  Anyone can code.  Anyone can hack.  Clouds can be dismantled.  Censorship can be circumvented.  It’s time to take the Internet back.

Not a Game

Over the last few months, an increasing number of people have started to ask my opinion (or let me know all about theirs) regarding the boundaries of videogames as a medium.  More often than not, this means affirming someone’s statement that “X is not a game.”  Indeed, there seems to be a recent push to draw a line in the sand, with games on one side and “non-games” on the other.  Nintendo president Satoru Iwata applied this term to games like Animal Crossing which don’t “have a winner, or even a real conclusion.”   Certainly, Iwata is not the first person to attempt to place less conventional games into a new category.  As I have mentioned before, Maxis referred to a number of their products such as SimAnt not as games, but as “software toys.”  Recently, however, it seems that people have taken this as a call to erect hard and fast barriers between these different categories, lest some unsuspecting fool accidentally mistake something for a game that clearly is not.

These attempts to define games (though “redefine” would be a much more accurate description of the process) are based on a number of rather dubious assumptions.  First and foremost is the assumption that this new, narrower definition somehow more accurately captures what the word “game” means.  However, this seemingly cut-and-dried definition bears little resemblance to the word’s historical usage, or to the act of playing games.  Games have never been limited to only those activities in which there is a clear winner.  To assume such a thing is not only presentist, but also a rather modernist, male-centric way of looking at games.

So what do I mean by all these academic sounding words?

Basically, by declaring that “games have winners and this is the way it’s always been,” we automatically invalidate a vast number of traditional activities normally categorized as games, especially those traditionally associated with girls.  Games like playing house, jumping rope, and clapping games have long been important parts of children’s culture.  Some of these, like clapping games, even share many similarities with modern cooperative games.  There is a fairly clear goal (completing the clapping pattern) that requires both players to perform a task with a considerable amount of skill.  There is also a very clear failure state when one of the players misses a beat.  Indeed, if we break such games down to their basic rules, stripping away the layers of gendered meaning, there are very few fundamental differences between clapping games and Guitar Hero except that one is digitally mediated and the other is not.

[Image: Playing House with Mario]

While there are clearly issues with both historical fidelity and gender politics in many of the ways people try to draw boundaries between “games” and “non-games,” it’s not just the way we draw lines that’s problematic; it’s the fact that we’re trying to draw lines at all.  The idea that everything can be accurately categorized and placed into a grand, cohesive narrative is one of the hallmarks of modern thought.  As someone who has a significant leaning toward the postmodern, I find any hard-and-fast definition of “what is and what is not a game” hard to reconcile with reality.

Let’s say for the sake of argument that we want to enforce a strict definition of “games” and “non-games.”  We can use Iwata’s criteria of having both a winner and a real conclusion.  A winner can be either one player beating another, or one player overcoming the game.  A conclusion is a bit more vague, but generally encompasses some degree of finality.  You have achieved the main objective of the game and can consider your experience somehow completed.  This does not necessarily mean that there is nothing left to do in the game, but there is a sense of arriving at some agreed upon end point.
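
To see how quickly such a definition hardens into something brittle, it helps to write it down as literally as possible.  The sketch below is a deliberately tongue-in-cheek bit of Python (the titles and the boolean judgments are mine, not Iwata’s): the predicate is trivial to state, but deciding how to set the flags for the edge cases discussed below is exactly the part the definition cannot do for us.

```python
# Iwata's criteria, taken as literally as possible.  The predicate is easy to
# write; deciding how to set the flags for borderline titles is the hard part,
# and the boolean judgments below are mine, not anyone's official ruling.

from typing import NamedTuple

class Title(NamedTuple):
    name: str
    has_winner: bool       # can a player "win"?
    has_conclusion: bool   # is there an agreed-upon end point?

def is_game(t: Title) -> bool:
    return t.has_winner and t.has_conclusion

titles = [
    Title("Super Mario Bros.", True, True),
    Title("Tetris (most versions)", False, False),    # or is a high score a win?
    Title("World of Warcraft", False, False),          # max level? every achievement?
]

for t in titles:
    print(t.name, "->", "game" if is_game(t) else "non-game")
```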

Naturally, many games fit into this mold rather nicely.  Super Mario Bros. is a game because you can win by reaching the princess, and although she gives you a new, harder quest, that is really just replaying the game with a slight variation.  Finding the princess is easily identified as a conclusion to the overarching game narrative.  Likewise, Metroid, Mega Man, and many other games have a clear win state and an easily identifiable conclusion, often explicitly labeled “The End,” after the style of old movies.

[Image: Tetris]

Other “games” don’t fit so well into this definition.  Tetris, in many of its incarnations, has no way to win (though as my sister once demonstrated to my surprise, the NES version actually reaches a sort of win state where little dancers rush out on the stage and do a victory dance).  It is certainly easy to lose, but no matter how long the game goes on, the blocks just keep coming.  If you can’t win at Tetris, what’s the point of playing?  Is it to beat your high score?  Is it to beat your friends’ high scores?  Is it just to play around for fun for as long as you feel like it?  That doesn’t sound like our hard definition of a game.  What about other notable games like World of Warcraft?  How do you beat WoW?  Is it merely a question of defeating all the bosses?  Is it getting your character up to the maximum level cap?  Can you only beat the game if you complete every achievement ever?  Does that mean that World of Warcraft is a game that only one person has ever won?  While there is certainly some logic in each of these definitions, none of them really seems to encompass what WoW is all about.  Some of the main motivations for playing the game are social interactions with other players and self-expression through creating and advancing an avatar.  Does that make World of Warcraft a sandboxy non-game like Minecraft or even…Second Life?

Any kind of definition for games that potentially leaves out such noteworthy titles as Tetris and World of Warcraft should immediately raise a few red flags.  While you could argue for a more precise definition than Iwata’s, adding more and more levels of nuance is unlikely to unmuddy the waters.  Can you draw a clear line between a very linear game and an interactive story?  Can you identify the key differences between a game like Warcraft III and a non-game like SimAnt?  Can you say that a bunch of college guys tapping on plastic guitars are playing a game, while a bunch of grade school girls clapping their hands are not?

Names and categories are important for the way we create meaning, and in general, the term “game” is a fairly good category (the term “videogame” can be a bit more trouble, but that’s a different debate).  I am clearly more interested in studying games than agriculture.  I would even concede that certain things are more “gamey” than others.  I also have no objections if developers want to call the thing they create “software toys” or “non-games” if that is how they want to situate themselves in relation to the existing games ecosystem.  Where I take issue is when one group sees it as proper and necessary to deny the use of a word for some object to which it is certainly applicable.  I consider this to be an example of what Donna Haraway terms (quoting Watson-Verran) “a hardening of the categories.”1 Words and meanings are not static, nor are they apolitical.  Telling someone that something is “not a game” is not a matter of fact.  It is a value-laden assertion.

References

1. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. Donna Haraway.