On Ptolemy Bashing

It’s hard to discuss science communication without mentioning Carl Sagan. To this day, he remains the quintessential example of the public intellectual. Few, if any, works of science communication have ever been as successful at connecting scientists and the general public as Sagan’s Cosmos: A Personal Voyage. It took complex scientific knowledge and presented it in a way that was both accessible to viewers and compelling to watch. Through Cosmos, Sagan influenced not only the way we think about our place in the universe, but the way that we understand science itself.

During my most recent viewing of Cosmos, something stood out to me.  In the first episode, “The Shores of the Cosmic Ocean,” there is a scene where Sagan contemplates the Library of Alexandria and the scholars whose knowledge might have been contained therein.  He mentions the wonderful achievements of a number of great scientists before ending on a slightly odd note:

…and there was the astronomer Ptolemy, who compiled much of what today is the pseudoscience of astrology.  His earth-centered universe held sway for fifteen hundred years, showing that intellectual brilliance is no guarantee against being dead wrong.

This attitude toward Ptolemy is certainly not uncommon, but its appearance in one of the most important pieces of public science communication deserves some attention. While Ptolemy did write on astrology (as well as on geography, optics, and harmonics), his astrological work was far less comprehensive and influential than his treatise on astronomy (or his Geography, for that matter). As such, it is most likely Ptolemy’s Almagest, which describes the geocentric “Ptolemaic” model of the solar system, that earned him the role of the villain in Sagan’s list of ancient scholars. Of course, Sagan’s rebuke makes Ptolemy sound less like a scholar than like a sorcerer, fervently penning lies in some dark tower. How did an astronomer become so reviled almost two thousand years after his death? After all, no one would refer to Newton as a failed alchemist who was unable to grasp general relativity. Since we know relatively little about Ptolemy’s life, it’s probably more productive to look for the answer at the end, rather than the beginning, of his astronomical model’s millennium-spanning reign.

While the Ptolemaic system was the dominant scientific model for understanding the universe for hundreds of years, it is mostly known today as the unscientific model from the Galileo affair. The standard version of the story, which I remember being drilled into me since elementary school, is simply that Galileo was persecuted by the Church for teaching science that contradicted its own dogmatic view of the universe. On one side, we have Galileo, science, and Truth. On the other, we have the Church, the Pope, and their dogmatic geocentric theory. Ptolemy, of course, had been dead for centuries, but he is generally judged as guilty as the rest of the anti-science camp, as if he had been briefly brought back to life merely to stand with Galileo’s other accusers.

And really, how could he not be guilty? The Galileo affair is a rhetorical hammer of the finest quality. Whether it is discussed amongst particle physicists or in my fourth-grade science class, it has transcended its status as a mere historical event and become a legend. To oppose Galileo is to oppose reason itself. Tradition, authority, vox populi – all of these things conspired against him, but in the end, he triumphed. Those who stood against him, even metaphorically, were proved to be “dead wrong.” It should be no surprise, then, that this hammer is wielded quite liberally by groups that are both extremist and unpopular. Ironically, the groups that fall into this category are more often than not decidedly antiscience.

The tension between science and antiscience comes up frequently in the sociology of science. Indeed, the mere suggestion by sociologists that science was influenced by cultural, political, or economic factors ignited the “Science Wars” of the 1990s, during which many of these sociologists were labeled antiscience or anti-intellectual. While these debates could certainly get quite heated, they generally stayed within academic communities. A more concerning development for most sociologists was seeing their arguments appropriated by “conspiracy groups” seeking to take areas of scientific consensus and disrupt them with manufactured debate.

The most famous response to the latter of these two issues is Bruno Latour’s essay “Why has Critique Run out of Steam?” in which he laments the use of what he sees as critical tools by groups such as climate change deniers and 9/11 conspiracy theorists.  He argues that much of the blame can be put upon critical theorists themselves for creating a false dichotomy between “facts” and “fairies.”  The things they disagree with are treated as “fairies,” imaginary social constructions to which people attribute a power they don’t possess, while the ideas they agree with are treated as “fact,” real things that have real consequences in the world:

This is why you can be at once and without even sensing any contradiction (1) an antifetishist for everything you don’t believe in—for the most part religion, popular culture, art, politics, and so on; (2) an unrepentant positivist for all the sciences you believe in—sociology, economics, conspiracy theory, genetics, evolutionary psychology, semiotics, just pick your preferred field of study; and (3) a perfectly healthy sturdy realist for what you really cherish—and of course it might be criticism itself, but also painting, bird-watching, Shakespeare, baboons, proteins, and so on.1

As most matters of real concern don’t fit neatly into either category, this opens up the possibility of groups using whichever approach best serves their own interests. At the same time, Latour sees this as one of the reasons that Science Studies is such an important field. While many critics critique systems of oppression that they hate, those in the field of science studies are strong believers in their object of study, despite claims to the contrary. As Latour notes, “the question was never to get away from facts, but closer to them.”

While Latour’s concern was people placing important ideas like global warming in the fairy category, Ptolemy bashing can be seen as an example of the opposite – taking the complicated and nuanced Galileo affair and rendering it an unquestionable historical fact.  In either case, essentializing seems to do little to help the cause of science or quell the conspiracy theorists.  Instead, what if we try to get closer to the facts through critique, as Latour suggests?

The most conspicuously opaque part of this puzzle is, of course, the legendary story of Galileo’s battle with the church.  Immediately, certain inconsistencies become apparent when we look closer.  For one, some of Galileo’s fiercest opponents were not clergy, but astronomers and other scientists, such as Magini and Ludovico delle Colombe.  Likewise, Galileo consulted a number of cardinals and other Church officials in his attempts to promote Copernican astronomy.  Indeed, some scholars have argued that the Galileo affair had less to do with astronomy than with the politics of the Catholic Church.2

And what of Ptolemy? Was his astronomical model just a millennium-long dalliance into the realm of pseudoscience? While this is not an uncommon explanation, it assumes a very linear and cumulative model of science. Science historian Thomas Kuhn has argued that science is not cumulative, but rather operates under paradigms, or specific schools of thought. When new empirical data throws a paradigm into crisis, scientists must shift to a new paradigm that can adequately explain this data. As Kuhn notes, the first person to suggest a heliocentric model of the solar system was not Copernicus, but Aristarchus of Samos, who lived three centuries before Ptolemy. Although a heliocentric model was ultimately shown to be more accurate, there was no reason to abandon the simpler geocentric paradigm at the time: heliocentrism was plausible, but it didn’t explain the data then available any more accurately than geocentrism did. By the time of Copernicus, however, the Ptolemaic system was already in crisis, and the stage was set for a scientific revolution.3

While the Ptolemaic system was certainly incorrect, that doesn’t mean it wasn’t useful, nor was the Copernican system entirely correct by current standards. While Copernicus placed the sun at the center of the solar system, he still thought of the planets not as bodies hurtling through space but as parts of great celestial spheres, rotating in place. It was not until Tycho Brahe, a geocentrist, that the idea of immutable celestial orbs was challenged, and not until Johannes Kepler that planets were seen as having orbits, rather than being part of an orb.4 Indeed, geocentric models like Brahe’s Tychonic System would not be completely abandoned by scientists until well over a century after Galileo. Forming the basis of hundreds of years of productive scientific work isn’t exactly what I’d call “dead wrong.”

So how does this analysis of astrophysical debates shed light on the antiscience debates of today? Of course, these debates can still be compared to the Galileo affair, but if we understand the affair as a deeply political situation that occurred in the context of a crumbling scientific paradigm, we get a different reading of our current plight. Debates about global warming, for instance, are certainly political, but they’re not happening in the midst of a scientific crisis. The current paradigm of human-influenced climate change does a pretty good job of explaining what’s happening. If anything, the comparison paints detractors not as modern Galileos, but modern Ludovicos, trying desperately to resist a momentous discovery that threatens their power.

Likewise, understanding that science has a historical, cultural, and political context doesn’t weaken scientific thought.  It does, however, make artificial debate about scientific theories being “not settled” seem rather silly.  A handful of scientists opposing a stable paradigm isn’t a scientific crisis and it certainly isn’t unusual.  Refusing to act on scientific knowledge until it stands unchallenged makes about as much sense as waiting to move out of oncoming traffic until you can feel the cars.

If our aim is to further the cause of science and quell its detractors, dividing scientists into immaculate heroes and devious villains is probably not the most productive way to go about it.  I would rather we understand science in all its gritty details so that we can better understand how to get through the gritty details that stand in our way today.


References

1. Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern. Bruno Latour.
2. Galileo and the Council of Trent: The Galileo Affair Revisited. Olaf Pedersen.
3. The Structure of Scientific Revolutions.  Thomas Kuhn.
4. Kepler’s Move from Orbs to Orbits: Documenting a Revolutionary Scientific Concept. Bernard Goldstein and Giora Hon.

Coding Ethical Codes

As most sane people will tell you, videogames are quite different from real life.  Stepping into a virtual world means accepting that you are entering a space where the normal rules are temporarily suspended in favor of the game’s rules.  Huizinga calls this the “magic circle.”1 Thus, a moral person may do seemingly immoral things such as killing or stealing in the context of a game because exploring these issues is often the whole point of the game.  The way in which the rules of these virtual worlds are designed has a major impact on the overall experience of playing the game.

Often, the ethics of a virtual world are simply enforced upon the player by limiting certain kinds of interaction. In The Legend of Zelda, for example, the hero can (and often must) kill all variety of creatures with his sword and other weapons. Other actions, such as killing a shopkeep or stealing his goods, are not allowed. The hero can swing his sword at him, but it simply passes right through him. Killing innocent merchants is unheroic; therefore, the hero cannot do it, even if he tries. Other games are less explicit with questions of ethics. Villagers in Minecraft sell helpful items to the player, much like the merchants in Zelda. They are also frequently in need of assistance to save their villages from the same monsters that the player has to deal with. Although there is an implicit hero role for the player to fill, she needn’t actually help the villagers and is perfectly capable of killing them herself. While playing the hero and having a flourishing village near her castle might be more advantageous, the player is just as free to take on the role of a murderous warlord, leaving a trail of lifeless ruins in her wake.

Still other games take a middle route, allowing the player to perform a wide range of actions but providing an in-game ethical code to guide and evaluate them. These can be simple good-versus-evil meters, such as karma in Fallout 3 and the light-versus-dark mechanics in various Star Wars games, or they can be more complex, such as the eight virtues in Ultima 4. In either case, such a system requires game developers to assign a moral value to each potential action the player can perform in the game. The way that developers choose to define ethical behavior within the game world has a significant impact on the overall experience of playing the game.
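
Stripped of its presentation, that kind of system boils down to bookkeeping: a table that maps anticipated player actions to moral weights, plus a running total. The sketch below is a toy illustration of that idea; the action names and point values are my own invented placeholders, not taken from Fallout 3, Ultima, or any other game.

```python
# A toy good-versus-evil meter: the designer assigns a moral weight to every
# action ahead of time, and the meter just keeps a running total.
# Action names and values are invented placeholders for illustration.
KARMA_VALUES = {
    "give_water_to_beggar": +50,
    "steal_from_shop": -25,
    "defuse_bomb": +300,
    "detonate_bomb": -1000,
}

class KarmaMeter:
    def __init__(self) -> None:
        self.karma = 0

    def record(self, action: str) -> int:
        """Apply the designer-assigned weight for an action and return the new total."""
        self.karma += KARMA_VALUES.get(action, 0)  # unlisted actions count as neutral
        return self.karma

meter = KarmaMeter()
meter.record("give_water_to_beggar")
meter.record("steal_from_shop")
print(meter.karma)  # 25
```

The interesting design decisions, of course, live entirely in that table: someone has to decide what counts as good, what counts as evil, and by how much.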

One of my favorite games to deal explicitly with these kinds of ethical mechanics is Introversion Software’s Uplink. Although it never reached mainstream success, Uplink is one of the most iconic games ever created about computer hacking. The game puts the player in the role of an elite hacker specializing in corporate espionage. By completing jobs, the player can earn money to upgrade her software and hardware in order to defeat increasingly complex security systems. As one might suspect, essentially everything the player does in the game is framed as illegal within the context of the game world. The game, however, provides an alternate ethical system in the form of the player’s “Neuromancer rating.” This rating, which changes over time as the player completes missions, purports to evaluate the player’s actions not on their legality or their conformity to broader societal ideals, but on how well they conform to the ideals of the hacker community.

Scholars such as Steven Levy have noted that hackers do, in fact, tend to have strong commitments to ethical standards that differ somewhat from those of society at large. This “hacker ethic” is built on the ideal that access to computers and information should be unrestricted and universal.2 This often brings hackers into conflict with organizations over matters such as copyright, where freedom of information is restricted in favor of other societal values that are deemed more important.

Playing Uplink with an understanding of the hacker ethic, however, the player may find that the Neuromancer rating seems somewhat arbitrary and unpredictable. While it seems appropriate that stealing research information from one company and sharing it with its competitors would improve your ethical standing, it would stand to reason that the opposite, destroying information to prevent anyone from benefiting from it, would be bad. Surprisingly, both of these acts improve your Neuromancer rating, whether you are making information more freely available or not. With the ethical implications of individual missions being difficult to determine without a considerable amount of trial and error, the Neuromancer rating serves very poorly as a moral compass.

In actuality, the Neuromancer rating has little to do with the player’s actions themselves; it instead focuses on the target of those actions. Any attack on a corporation boosts the player’s Neuromancer rating. Any attack that targets an individual or another hacker drops it. This system is problematic for a number of reasons. First, it implies a strict “us versus them” relationship between hackers and corporations, which is an overly simplistic (though not entirely uncommon) view of hacker culture. While the two are often at odds, this conflict is a result of clashing ethical systems, rather than a core tenet of hacker ethics.
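
In other words, the rating behaves less like the moral-weight table sketched above and more like a simple check on who the target is. The snippet below is my own reconstruction of that behavior as described here, not Introversion’s actual code; the category names and point values are assumptions.

```python
# A reconstruction (by inference, not from the game's code) of target-based scoring:
# what the player does matters less than whom she does it to.
def neuromancer_delta(target_type: str) -> int:
    """Return the rating change for a completed mission, based only on its target."""
    if target_type == "corporation":
        return +1   # any attack on a corporation raises the rating
    if target_type in ("individual", "hacker"):
        return -1   # any attack on a person or a fellow hacker lowers it
    return 0        # anything else leaves the rating untouched

rating = 0
for target in ["corporation", "hacker", "corporation", "individual"]:
    rating += neuromancer_delta(target)
print(rating)  # 0: two corporate jobs cancel out two jobs against people
```

Note that nothing in this scheme looks at what a mission actually does to the flow of information, which is exactly why stealing data and destroying it can score identically.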

Additionally, the Neuromancer system lacks internal consistency. Helping to track down another hacker is considered unethical, but so is helping another hacker by creating fake credentials. Clearing a criminal record, on the other hand, seems to be considered good, even though it targets an individual in much the same way as the “fake credentials” mission. In the end, there is no reliable way to determine the ethicality of an action by applying a general ethical framework. The player can do little else but try each mission and attempt to divine its effect on her Neuromancer rating.

While I find the idea of the Neuromancer rating quite intriguing, its implementation in Uplink creates a problematic ethical system that is neither useful to the player nor representative of the hacker community. To the developers’ credit, the player can actually get a more interesting and nuanced view of hacker ethics by ignoring the game’s primary ethical mechanic and simply paying closer attention to the small bits of static narrative inserted throughout the game.

References

1. Homo Ludens: A Study of the Play-element in Culture. Johan Huizinga.
2. Hackers: Heroes of the Computer Revolution. Steven Levy.

The Beginning and End of the Internet

A few weeks ago, I was able to attend the 2013 Frontiers of New Media Symposium here at the University of Utah. Due to growing concerns over censorship and surveillance in the wake of the recent NSA scandal, the symposium, which looks at Utah and the American West as both a geographical and a technological frontier, took on a somewhat more pessimistic tone than in previous years. Its very apt theme, “The Beginning and End(s) of the Internet,” refers to Utah being both one of the original four nodes on the ARPANET and the future home of the NSA’s new data storage center, which, as a number of presenters pointed out, may indeed be part of the end of the open Internet as we know it.

While the main themes of the symposium were censorship and surveillance, the various presenters connected these themes to a much broader scholarship. As always, the importance of physical geography to online interactions was a frequently discussed topic. With the knowledge that the NSA has an extensive ability to surveil any data that is stored in or even passes through the United States, discourse that envisions “the Cloud” as an ethereal, placeless entity becomes very problematic (or at least more so than it already was). There was also considerable discussion of the ways in which the Internet and online behaviors have changed since the early days of the Web. While there was not always consensus on how accurate our vision of the early Internet as a completely free and open network really was, everyone seemed to agree that a great deal has changed, perhaps irrevocably. As Geert Lovink pointed out in his keynote, people no longer “surf” the Internet; they are guided through it by social media. Additionally, the user base of the Internet is shifting away from the United States and the West. Brazil is trying to create its own Internet, while oppressive governments around the world are essentially trying to do the same thing that the NSA has apparently been doing for years.

With so many weighty matters on the table, there was certainly a lot to take away. One theme that stood out to me, however, was the importance of understanding code in relation to Internet participation. This might seem to run counter to most current discourse surrounding the Internet, where technologies like Dreamweaver, WordPress, and wikis are lauded for lowering the barriers to entry for Internet participation and almost completely removing the need to learn code at all. Why does anyone need to learn HTML in this day and age?

One of the topics that I always try to stress in my classes is media literacy. In most popular discourse about Internet literacy, there is an inherent assumption that kids who grew up after the birth of the World Wide Web are endowed with this ability from an early age, generally surpassing their parents and older siblings by nature of being “born into” the Internet age. Internet literacy, however, is much more than the ability to open up a browser and update your Facebook page. Literacy is not simply passing the minimum bar for using a medium; it also involves being able to create and understand messages in that medium. While the basic ability to use the Internet is becoming more and more widespread, the prevalence of phenomena such as cyberbullying, phishing, and even trolling demonstrates significant and potentially dangerous deficiencies in many people’s understanding of how the Internet works.

While technologies such as Facebook, Instagram, and Wikipedia give many users the ability to become content creators, this ability comes at a price.  In exchange for ease of use, these technologies dramatically limit the kind of content that can be created, as well as the social and legal contexts in which this content can exist.  Lovink referred to these restrictions as a loss of “cyber-individualism,” where the personal home page has been replaced by the wiki and the social network.

These changes were perhaps best illustrated by Alice Marwick’s presentation on the transition from the print zines of the 1980s to the modern blog. Print zines, which were generally small, handmade magazines created on photocopiers, embodied the punk rock ethic of do-it-yourself culture. Zines most often had a subversive or counter-cultural aesthetic, and many had a strong feminist thread running through them. Many of these same values and aesthetics migrated to early web zines and personal homepages, spawning a number of feminist webrings (remember those?). However, as homemade HTML pages slowly gave way to blogging software and other accessible content management and hosting services, the nature of these publications changed. While zines and many of the early text files that were passed around the Internet were largely anonymous, true anonymity on the Internet had become difficult long before the NSA started undermining online security. Most bloggers don’t host their own sites, which means that their content has to comply with the rules of Blogger, LiveJournal, Tumblr, or whatever other service they might be using. These rules are in turn shaped by legal and economic factors that rarely shift in favor of small content creators. The subversive, counter-cultural, feminist, and anti-capitalist themes that were a significant part of zines as a medium are rarely found in the modern blogosphere.

Fortunately, there are still small but determined groups that embrace the do-it-yourself ethic as a means of self-expression. In the field of videogames, some of the most interesting (though often overlooked) games are dubbed “scratchware” after the Scratchware Manifesto, which condemned many of the destructive industry practices that arose in the 1990s and demanded a revolution. Though the videogame industry itself has done little to address most of the issues brought up in the Manifesto, a number of talented creators have taken its ideas to heart, including Anna Anthropy, author of Rise of the Videogame Zinesters and developer of games such as Mighty Jill Off, dys4ia, and Lesbian Spider-Queens of Mars. Although much of what was zine culture has been absorbed into the more homogeneous mainstream, there are still people out there bringing the zine into new media.

Even if you don’t fall into the counter-cultural zinester category, if you participate in Internet culture, it is important to learn about code, even if all you learn is HTML. If not, you will always be limited to participating on someone else’s turf, whether it be Twitter, Google, or Facebook (all of which, at this point, might as well be the NSA). Sites that offer introductory courses in coding, like W3Schools and Codecademy, have made it so that the only excuse for not knowing how to code is simply not trying.
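
To give a sense of how low that barrier actually is, here is a minimal sketch of the do-it-yourself alternative: a few lines of Python (standard library only) that write a bare-bones page and serve it from your own machine. The page text and file name are placeholders of my own, not anything from the symposium.

```python
# A minimal DIY sketch: write a bare-bones personal page and serve it yourself,
# with no platform's terms of service in between. Page text is a placeholder.
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

PAGE = """<!DOCTYPE html>
<html>
  <head><title>My own corner of the web</title></head>
  <body>
    <h1>Hello, world</h1>
    <p>A page I wrote and host myself.</p>
  </body>
</html>
"""

def main() -> None:
    # Write the page where the default handler will look for it.
    Path("index.html").write_text(PAGE, encoding="utf-8")
    # Serve the current directory at http://localhost:8000 until interrupted.
    server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
    print("Serving on http://localhost:8000 (Ctrl+C to stop)")
    server.serve_forever()

if __name__ == "__main__":
    main()
```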

While it may not be possible to return to the (possibly mythical) beginnings of the Internet, it needn’t end here.  Anyone can code.  Anyone can hack.  Clouds can be dismantled.  Censorship can be circumvented.  It’s time to take the Internet back.

Not a Game

Over the last few months, an increasing number of people have started to ask my opinion (or let me know all about theirs) regarding the boundaries of videogames as a medium. More often than not, this means affirming someone’s statement that “X is not a game.” Indeed, there seems to be a recent push to draw a line in the sand, with games on one side and “non-games” on the other. Nintendo president Satoru Iwata applied this term to games like Animal Crossing, which don’t “have a winner, or even a real conclusion.” Certainly, Iwata is not the first person to attempt to place less conventional games into a new category. As I have mentioned before, Maxis referred to a number of their products, such as SimAnt, not as games but as “software toys.” Recently, however, it seems that people have taken this as a call to erect hard and fast barriers between these different categories, lest some unsuspecting fool mistake for a game something that clearly is not one.

These attempts to define games (though “redefine” would be a much more accurate description of the process) rest on a number of rather dubious assumptions. First and foremost is the assumption that this new, narrower definition somehow more accurately captures what the word “game” means. However, this seemingly cut-and-dried definition bears little resemblance to the word’s historical usage, or to the act of playing games. Games have never been limited to only those activities in which there is a clear winner. To assume such a thing is not only presentist, but also a rather modernist, male-centric way of looking at games.

So what do I mean by all these academic sounding words?

Basically, by declaring that “games have winners and this is the way it’s always been,” we automatically invalidate a vast number of traditional activities normally categorized as games, especially those traditionally associated with girls. Playing house, jumping rope, and clapping games have long been important parts of children’s culture. Some of these, like clapping games, even share many similarities with modern cooperative games. There is a fairly clear goal (completing the clapping pattern) that requires both players to perform a task with a considerable amount of skill. There is also a very clear failure state when one of the players misses a beat. Indeed, if we break such games down to their basic rules, stripping away the layers of gendered meaning, there are very few fundamental differences between clapping games and Guitar Hero except that one is digitally mediated and the other is not.

While there are clearly issues with both historical fidelity and gender politics in many of the ways people try to draw boundaries between “games” and “non-games,” it’s not just the way we draw the lines that’s problematic; it’s the fact that we’re trying to draw lines at all. The idea that everything can be accurately categorized and placed into a grand, cohesive narrative is one of the hallmarks of modern thought. As someone with a significant leaning toward the postmodern, I find any hard-and-fast definition of “what is and what is not a game” hard to reconcile with reality.

Let’s say for the sake of argument that we want to enforce a strict definition of “games” and “non-games.” We can use Iwata’s criteria of having both a winner and a real conclusion. A winner can be either one player beating another, or one player overcoming the game. A conclusion is a bit vaguer, but generally encompasses some degree of finality: you have achieved the main objective of the game and can consider your experience somehow completed. This does not necessarily mean that there is nothing left to do in the game, but there is a sense of arriving at some agreed-upon end point.
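
To see how blunt an instrument this is, it helps to state the definition the way a program would have to, with the fuzzy parts reduced to yes-or-no inputs. The sketch below is my own formalization of Iwata’s two criteria, not anything he proposed; deciding how to set those booleans for any given title is, of course, the hard part.

```python
# A deliberately blunt encoding of the "winner + real conclusion" test.
# The boolean inputs are my own simplification; assigning them is where
# all the real (and contestable) judgment happens.
def is_game(has_winner: bool, has_conclusion: bool) -> bool:
    """Iwata-style test: a 'game' must have both a winner and a real conclusion."""
    return has_winner and has_conclusion

# A clear-cut case like chess: there is a winner, and the match has a definite end.
print(is_game(has_winner=True, has_conclusion=True))  # -> True
```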

Naturally, many games fit into this mold rather nicely. Super Mario Bros. is a game because you can win by reaching the princess, and although she gives you a new, harder quest, that is really just replaying the game with a slight variation. Finding the princess is easily identified as a conclusion to the overarching game narrative. Likewise, Metroid, Mega Man, and many other games have a clear win state and an easily identifiable conclusion, often explicitly labeled “The End,” in the style of old movies.

Other “games” don’t fit so well into this definition. Tetris, in many of its incarnations, has no way to win (though, as my sister once demonstrated to my surprise, the NES version actually reaches a sort of win state where little dancers rush out on the stage and do a victory dance). It is certainly easy to lose, but no matter how long the game goes on, the blocks just keep coming. If you can’t win at Tetris, what’s the point of playing? Is it to beat your high score? Is it to beat your friends’ high scores? Is it just to play around for fun for as long as you feel like it? That doesn’t sound like our hard definition of a game. What about other notable games like World of Warcraft? How do you beat WoW? Is it merely a question of defeating all the bosses? Is it getting your character up to the level cap? Can you only beat the game if you complete every achievement ever? Does that mean that World of Warcraft is a game that only one person has ever won? While there is certainly some logic in each of these definitions, none of them really seems to encompass what WoW is all about. Some of the main motivations for playing the game are social interaction with other players and self-expression through creating and advancing an avatar. Does that make World of Warcraft a sandboxy non-game like Minecraft or even…Second Life?

Any kind of definition for games that potentially leaves out such noteworthy titles as Tetris and World of Warcraft should immediately raise a few red flags.  While you could argue for a more precise definition than Iwata’s, adding more and more levels of nuance is unlikely to unmuddy the waters.  Can you draw a clear line between a very linear game and an interactive story?  Can you identify the key differences between a game like Warcraft III and a non-game like SimAnt?  Can you say that a bunch of college guys tapping on plastic guitars are playing a game, while a bunch of grade school girls clapping their hands are not?

Names and categories are important for the way we create meaning, and in general, the term “game” is a fairly good category (the term “videogame” can be a bit more trouble, but that’s a different debate). I am clearly more interested in studying games than agriculture. I would even concede that certain things are more “gamey” than others. I also have no objections if developers want to call the things they create “software toys” or “non-games” if that is how they want to situate themselves in relation to the existing games ecosystem. Where I take issue is when one group sees it as proper and necessary to deny the use of a word for some object to which it is certainly applicable. I consider this an example of what Donna Haraway terms (quoting Watson-Verran) “a hardening of the categories.”1 Words and meanings are not static, nor are they apolitical. Telling someone that something is “not a game” is not a matter of fact. It is a value-laden assertion.

References

1. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. Donna Haraway.

Playing the Past

If you’ve followed my blog for any length of time, you’ve probably noticed that I am quite interested in history, and that probably about a third of my posts are inspired by Play the Past, a collaborative blog about videogames, history, and the places where they meet. Starting this week, I am now a regular contributor to Play the Past, which means you should be seeing my stuff there about once a month. I currently have a post up about Historical Wargaming and Asymmetrical Play, which manages to incorporate two of my obsessions, Axis & Allies and Scandinavian history, into a single piece. Now if I can just find a way to write a post about Saturday morning cartoons, Gruyère cheese, and dinosaurs, I’ll be in business.

Hopefully, having a deadline to post over there will help me be more regular about posting over here, as well.  As you may have noticed, I haven’t been posting very regularly at all since I began working on my Master’s Thesis, and pretty much stopped during the last semester of my Master’s program.  Now that I’ve finally managed to sort out that whole mess, I will hopefully not be quite so sick of writing about videogames and I might be able to squeeze out a post from time to time.

I’m very excited to be a part of Play the Past, which already includes a number of scholars whose work I’ve admired for some time.  A few posts have even been required reading for some of the classes I’ve taught over the last few years (they’re also usually a bit more accessible to undergrads than a journal article).  Hopefully, many more amazing posts and interesting conversations will come out of Play the Past in the future, and I’m very glad to be a part of it.