Much of the conversation revolved around the familiar problems of anonymity on the Internet, namely, that anonymity (or at least the perception of anonymity) shields people from accountability for their actions, removing the consequences for antisocial behavior. Interactions in virtual reality are still a form of Internet-based communication, but the immersive nature of VR provides a much more visceral experience. While many researchers at the summit were doing interesting work with these particular affordances of VR technology, the question underlying many of the presentations was how to prevent the toxic behaviors of the YouTube comments section from becoming a swarm of virtual characters yelling in your face in a much more literal sense.
Despite the varied approaches to thinking about identity, and particularly the nature of identity within a large network, I was somewhat surprised that all of these interpretations of identity relied heavily upon the assumption of an essential Cartesian self—the authentic “true self” that lies at the center of our being, beneath the pretense and masks that we present to other people. Several of the speakers, particularly Peter Rubin of Wired, noted that identities, particularly when mediated through something like the Internet, are plastic. Yet when returning to the practical concerns of preventing abuse, the conversation always returned to figuring out how VR could get at its users’ true, authentic self.
There were, of course, many differing opinions on how technology should try to get at its users’ authentic self. Rubin suggested that anonymity on the early Internet gave people the confidence and safety to express their inner selves, with chatrooms taking on the role of confessionals. This vulnerability led to deep friendships that formed incredibly fast, though it was often difficult to translate these relationships into more meaningful contexts, as meeting your Internet friends in real life was usually awkward. Face-to-face relationships, by contrast, develop at a slower pace. He argued that when we meet people in person, relationships form slowly as others get to know our outer selves while we keep our true selves private until we feel safe. For Rubin, VR has the potential to be a middle ground—affording safety in the fact that you can always take the headset off, while also allowing for more intimate, embodied, and authentic interactions.
In stark contrast to Rubin’s view on the vulnerability that is possible with the safety of anonymity, Michael Eisenberg reiterated the more common perspective on anonymity on the Internet, namely that shielding people from social consequences brings out antisocial and unethical behaviors. He illustrated his point by referencing the story of the Ring of Gyges from Plato’s Republic, wherein a just man and an unjust man are each given a ring of invisibility. The question posed is this: with no societal consequences for wrongdoing, why would the just man act any differently from the unjust one? For Eisenberg, the lack of social context brings out the worst in humans. The way to get at a person’s authentic self, he argued, was through the social governance that comes with eye contact, touch, and instant feedback and accountability. Rules create positive identities (and for Eisenberg, authentic identities are inherently positive), while a lack of rules creates destruction.
Identity and authenticity take on an interesting context in VR, though a headset is certainly not required to play around with such concepts. The interactive nature of videogames inherently intertwines them with such questions through the relationship between the player and the player character. It’s perhaps not surprising that some of the games that deal with the question of identity most directly are also games that experiment with game systems dealing with ethics. Both Ultima IV and Uplink have unusual character creation systems designed to encourage the player to identify much more closely with the player character. In Ultima IV, the player’s character is created by presenting her with a series of ethical dilemmas during a fortune telling. By navigating scenarios where different values are pitted against each other, the game attempts to determine which value is most central to the player’s identity. In Uplink, creating a new character is depicted as creating a new account for a remotely accessed computer system. Although the game is technically set in the future (the distant year 2010!), it suggests that the player is actually logging into a remote server and from there hacking real computers around the world. In both cases, the games try to make the player feel like she is playing as herself1. While these games are certainly excellent vehicles for ethical reflection in relation to one’s identity, they don’t provide us with much of a roadmap for where we want technologies like VR to go. Will strapping on a pair of goggles put us in a confessional booth, or will it hand us the magical ring we read about in Plato?
Of course, to be fair, the argument in Republic that a just man is only just because society is watching and judging him is not made by Plato himself. That particular argument is made by Glaucon, who is essentially playing devil’s advocate as a foil for Socrates, whose position is that justice in itself is profitable for whoever practices it. Socrates eventually refutes Glaucon’s argument, maintaining that justice is good for your mind whether or not you receive external rewards. Furthermore, he argues that as a matter of practicality, no one can completely shield themselves from society, so justice is ultimately always more profitable than injustice2. Thus, if we attempt to draw a conclusion from Plato’s perspective, a truly just man would act justly under conditions of anonymity and visibility alike. This doesn’t give us much insight into the question of authenticity.
Just a few weeks after the Virtual Identity Conference, I had the serendipitous opportunity to speak with Rachel Dubrofsky, who has done extensive work on authenticity in the media. Not surprisingly, ideas about anonymity and surveillance come up a lot in media depictions of authenticity. The setting of the film The Hunger Games, for example, is one of ubiquitous surveillance. Katniss, the main character, is contrasted with the other participants in the games by being her true, authentic self on camera, while the others put on a performance in an attempt to gain advantage over each other:
The magic of a scripted film is that it can fictionally represent life without cameras, in this case enabling the articulation of Katniss’s consistent behavior across disparate spaces in ways a [Reality TV] show cannot: we see Katniss at home with her family and friends, in The Capitol, then in the surveiled space of The Games. The film format and the foregrounding of the contrived context of surveillance of The Games show Katniss seamlessly performing authenticity by behaving instinctively and without artifice, exhibiting consistency across disparate spaces. Carried over into The Hunger Games, the ethic of authenticity—verified through surveillance, central to the [Reality TV] genre—suggests a transmedia trope of likeability, deservingness, and heroism in U.S. culture that exceeds the bounds of the [Reality TV] genre.3
This privileging of the non-performative “true” self, while deeply ingrained in Western culture, has a dubious effect on the message of the film. Katniss chooses to do many extraordinary things—supporting her family through poaching, defying the oppressive government, and ultimately outwitting corrupt officials. As Dubrofsky argues, however, the traits that come out naturally and are therefore signaled by the film as the most authentic for the character tend to be more traditionally feminine traits, such as caring for children and the injured3. In light of such issues, is authenticity really the trait we should be valuing the most? What do we even mean by authenticity?
In a special issue of the Review of General Psychology that came out just over a month ago, Roy Baumeister surveyed authenticity research to see whether the idea of a true self was even a useful concept4. The term “authenticity” tends to be used in very different ways in the academic literature, and in even more ways when you explore how the average person understands it. One of the more troubling implications of his research, however, is that people associate the idea of authenticity less with their true inner nature and more with what society regards as good:
One of the most irksome findings for authenticity researchers was that American research participants, including introverts, generally reported feeling more authentic when acting extroverted than introverted. America is an extroverted society, but still, it is disturbing that even introverts felt more authentic when acting extroverted…If authenticity means acting in accordance with one’s true self, then introverts should by definition feel more authentic when acting in introverted manner….To spell it out, the troubling implication is that people associate authenticity with doing what society regards as good rather than with what their own true inner nature dictates4.
Based on Baumeister’s research, the question of how we uncover a person’s true self is missing the point because what we think of as our “true self” doesn’t actually exist. Indeed, our true self isn’t even necessarily our best self as much as it is the version of ourselves that we think others want to see. Consequently, if our goal is to come up with practical guidelines on how to shape VR into a more useful and robust system of communication, we need to have a different paradigm.
While the assumption of an essential, stable, yet unverifiable internal self permeated much of the discussion at the Virtual Identity Summit, there were a few speakers who were able to talk about identity in original and productive ways. Philip Rosedale, the creator of Second Life (who had a lot of interesting things to say about the future of VR), talked about identity in terms of individual pieces of identifying information that a person owns, such as a driver’s license or an email address. He argued that when looking at identity as a network of discrete facts, the key challenge of VR is giving users the ability to control their level of disclosure in different situations, as no one wants to be a walking LinkedIn profile all the time. He also moved to disentangle the question of identity from that of trust. As a matter of practicality, no one is going to have a single virtual identity. Most people on the Internet already don’t and, at least to a certain extent, we don’t even really have that in real life. People might have virtual identities for work, school, and family, in much the same way that people have multiple email or Twitter accounts. If, however, VR becomes as important and pervasive in our society as Rosedale imagines, we need to manage it in a very different way. Imagine trying to hold an actual town hall meeting with a member of Congress if half the people in the room were the VR equivalent of newly formed Twitter accounts. Rosedale argued that the solution to this problem was for every identity to be associated with a measure of trust, though he warned against centralized systems of surveillance like U.S. credit scores or China’s social credit system. His proposal was for a system that was open, distributed, and flexible—for example, giving someone the ability to filter out all but the identities of people trusted by their first-order friends.
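To make the filtering idea concrete, here is a minimal sketch of what such a first-order-friend trust filter might look like. This is my own illustration, not anything Rosedale presented: the function name, the toy trust graph, and all the identities in it are invented, and a real distributed system would obviously involve far more than a dictionary lookup.

```python
def trusted_by_friends(me, trust_edges):
    """Return the set of identities trusted by at least one of my direct friends.

    trust_edges maps each identity to the set of identities it trusts;
    an identity's "friends" are simply the identities it trusts directly.
    """
    friends = trust_edges.get(me, set())
    visible = set()
    for friend in friends:
        # Union in everyone this friend vouches for.
        visible |= trust_edges.get(friend, set())
    return visible - {me}

# Hypothetical toy trust graph. "mallory" is a freshly created,
# unconnected account -- the VR equivalent of a brand-new Twitter handle.
trust = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "eve"},
    "mallory": set(),
}

print(trusted_by_friends("alice", trust))  # {'dave', 'eve'} -- mallory is filtered out
```

The point of the sketch is that the filter runs entirely on data the user and her friends already hold, with no central authority assigning scores, which is what distinguishes this approach from a credit-score-style system.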
My favorite take on dealing with practical questions of identity and authenticity in VR was probably that of Brittan Heller from Harvard, who spoke specifically about the question of harassment. Rather than talking about how to make VR a safe and useful space by finding a way for the technology to access users’ true, authentic selves, she focused on behaviors, not individual speakers. She noted that on Twitter, for example, 80 percent of the people who break the rules are doing so for the first time and could easily be guided away from such behaviors in the future, often just by informing them of the rules (because chances are they haven’t read the fifty-page-long terms of service). The vast majority of abuse comes from the other 20 percent, who are persistent abusers, and within that group an even smaller segment has an outsized level of influence due to the architecture of the system. Leading up to the 2016 presidential elections, her group found that the millions of anti-Semitic tweets directed at journalists, which had a reach of 10 billion accounts, were coming from only 1,600 users.
One of the interesting points that Heller made was about using social connections, engaging bystanders, and applying positive peer pressure to establish group norms. I think in many ways this addresses the points that both Rubin and Eisenberg were trying to make with their differing perspectives on anonymity, vulnerability, visibility, and accountability. In Heller’s model, however, the goal is not to use social connections to either bring out or suppress some innate character attribute of the individual. Instead, the building of social connections is the goal in itself. While she mentioned that people have created APIs and plugins that contextualize users’ actions within broader social conditions (by, for example, stopping and explaining the historical significance of a slur before someone sends it), she encouraged developers to rely on humans rather than machines. This point goes both for abusers, who often acknowledge in retrospect that they didn’t realize the impact their behaviors had on an actual person, and for victims, who often feel further dehumanized when their appeals for help are endlessly routed through automated systems without anyone hearing them.
The idea of social connections as the basic unit for building communities in VR was probably the idea that resonated most with me from the whole conference. While I think it’s fair to say that all of the presentations were quite thought provoking, lofty and abstract goals of finding self-actualization, building cyber-utopias, and transcending geographical barriers are better at providing inspiration than they are at providing actionable guidelines. Indeed, while I’ve seen a lot of ambitious demos of VR and AR technologies over the years, I’d never really considered getting my own system until I saw Ron Millar demonstrating VRChat at the summit. Compared to most other VR games out there on Steam and other platforms, VRChat is a pretty simple concept: it is literally just a program for setting up chat rooms in VR. Nevertheless, just watching a bunch of people who had met through the program hanging out, talking, listening to music, and showing off their custom shaders was really compelling. Even though most of them had never met in person, and their avatars ranged from furries to dragons to abstract geometric shapes, many of these avatars had begun to crystallize into more permanent, stable identities (Millar lamented that he had been using a default testing avatar for so long that no one recognizes him unless he’s wearing it), with varying levels of trust and respect within the community. Though this is far from Rosedale’s blockchain-driven public trust system, I think it lends credence to his vision.
Even with all the diverse perspectives that were shared at the summit, it does feel like we’ve only begun to scratch the surface on questions of identity in VR, both from a technological and societal standpoint. There are a lot of competing discourses about how authenticity and privacy should be handled as VR technology matures, and I hope people will continue to ask these questions proactively going forward.
References
1. Designing Ethical Systems for Videogames. Peter Christiansen.
2. Ringing the Changes on Gyges: Philosophy and the Formation of Fiction in Plato’s Republic. Andrew Laird. Journal of Hellenic Studies.
3. The Hunger Games: Performing Not-Performing to Authenticate Femininity and Whiteness. Rachel Dubrofsky and Emily Ryalls. Critical Studies in Media Communication.
4. Stalking the True Self Through the Jungles of Authenticity: Problems, Contradictions, Inconsistencies, Disturbing Findings—and a Possible Way Forward. Roy Baumeister. Review of General Psychology.