How to Govern the Metaverse

To build healthy communities in virtual reality, we must move beyond automated penalties toward proactive forms of governance. Games can show us how.

It was 2016, and Jordan Belamire was excited to experience QuiVr, a new fantastical virtual reality game, for the first time. With her husband and brother-in-law looking on, she put on a VR headset and became immersed in a snowy landscape. Represented by a disembodied set of floating hands along with a quiver, bow, and hood, Belamire was now tasked with taking up her weapons to fight mesmerizing hordes of glowing monsters.

But her excitement quickly turned sour. Upon entering online multiplayer mode and using voice chat, another player in the virtual world began to make rubbing, grabbing, and pinching gestures at her avatar. Despite her protests, this behavior continued until Belamire took the headset off and quit the game.

My colleagues and I analyzed responses to Belamire’s subsequent account of her “first virtual reality groping” and observed a clear lack of consensus around harmful behavior in virtual spaces. Though many expressed disgust at this player’s actions and empathized with Belamire’s description of her experience as “real” and “violating,” other respondents were less sympathetic—after all, they argued, no physical contact occurred, and she always had the option to exit the game.

Incidents of unwanted sexual interactions are by no means rare in existing social VR spaces and other virtual worlds, and plenty of other troubling virtual behaviors (like the theft of virtual items) have become all too common. All these incidents leave us uncertain about where “virtual” ends and “reality” begins, challenging us to figure out how to avoid importing real-world problems into the virtual world and how to govern when injustice happens in the digital realm.

Now, with Facebook heralding the coming metaverse and proposing to move our work and social interactions into VR, the importance of dealing with harmful behaviors in these spaces comes into even sharper focus. Researchers and designers of virtual worlds are increasingly setting their sights on more proactive methods of virtual governance that not only deal with acts like virtual groping once they occur, but discourage such acts in the first place while encouraging more positive behaviors too.

These designers are not starting entirely from scratch. Multiplayer digital gaming—which has a long history of managing large and sometimes toxic communities—offers a wealth of ideas that are key to understanding what it means to cultivate responsible and thriving VR spaces through proactive means. By showing us how we can harness the power of virtual communities and implement inclusive design practices, multiplayer games help pave the way for a better future in VR.

The laws of the real world—at least in their current state—are not well-placed to solve the real wrongs that occur in fast-paced digital environments. My own research on ethics and multiplayer games revealed that players can be resistant to “outside interference” in virtual affairs. And there are practical problems, too: In fluid, globalized online communities, it’s difficult to know how to adequately identify suspects and determine jurisdiction.

And certainly, technology can’t solve all of our problems. As researchers, designers, and critics pointed out at the 2021 Game Developers Conference, combating harassment in virtual worlds requires deeper structural changes across both our physical and digital lives. But if doing nothing is not an option, and existing real-world laws can be inappropriate or ineffective, then in the meantime we must turn to technology-based tools to proactively manage VR communities.

Right now, one of the most common forms of governance in virtual worlds is a reactive and punitive form of moderation based on reporting users who may then be warned, suspended, or banned. Given the sheer size of virtual communities, these processes are often automated: for instance, an AI might process reports and implement the removal of users or content, or removals may occur after a certain number of reports against a particular user are received.
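To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of report-threshold automation described above. The names and thresholds are hypothetical, and real platforms layer AI triage and human review on top, but the core logic is often this blunt: count reports, apply escalating penalties.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical thresholds for illustration; real systems tune these and
# combine them with AI triage and human review.
WARN_THRESHOLD = 3
SUSPEND_THRESHOLD = 5
BAN_THRESHOLD = 10

@dataclass
class ReportQueue:
    counts: defaultdict = field(default_factory=lambda: defaultdict(int))

    def file_report(self, reported_user: str) -> str:
        """Record one report and return the automated action, if any."""
        self.counts[reported_user] += 1
        n = self.counts[reported_user]
        if n >= BAN_THRESHOLD:
            return "ban"
        if n >= SUSPEND_THRESHOLD:
            return "suspend"
        if n >= WARN_THRESHOLD:
            return "warn"
        return "none"

queue = ReportQueue()
for _ in range(5):
    action = queue.file_report("player_42")
print(action)  # "suspend": five reports, no context, no appeal
```

Even this toy version hints at the problem described below: a coordinated group can push an innocent user past the threshold, while a genuinely harmful user who is rarely reported sails under it.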

While these kinds of responses can be effective in the short term and demonstrate clear consequences for disruptive behavior, they have distinct problems. Because they are reactive, they do little to prevent problematic behaviors or to support and empower marginalized users. Automation helps in managing huge numbers of users and huge volumes of material, but it also produces false positives and false negatives, all while raising further concerns about bias, privacy, and surveillance.

As an alternative, some multiplayer games have experimented with democratic self-governance. Perhaps most famously, Riot Games implemented a Tribunal system that allowed players to review reports against other players and vote on their punishments in the multiplayer game League of Legends. A lack of accuracy and efficiency saw it shelved a few years later, but a similar system known as Overwatch continues to live on in Valve’s CS:GO and Dota 2. Forms of self-governance in VR are also on Facebook’s radar: A recent paper by researchers working with Oculus VR suggests that the company is interested in promoting community-driven moderation initiatives across individual VR applications as a “potential remedy” to the challenges of top-down governance.
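By way of rough illustration, a Tribunal-style flow differs from the automated pipeline sketched earlier in that nothing happens until enough community reviewers have weighed in and a clear majority agrees. The sketch below is a simplified assumption in Python, not Riot’s or Valve’s actual implementation; the quorum and majority values are invented.

```python
from dataclasses import dataclass

@dataclass
class TribunalCase:
    """A reported incident put before community reviewers (all numbers hypothetical)."""
    reported_user: str
    votes_punish: int = 0
    votes_pardon: int = 0
    quorum: int = 20            # minimum number of reviewers before any verdict
    supermajority: float = 0.7  # share of "punish" votes required to act

    def cast_vote(self, punish: bool) -> None:
        if punish:
            self.votes_punish += 1
        else:
            self.votes_pardon += 1

    def verdict(self) -> str:
        total = self.votes_punish + self.votes_pardon
        if total < self.quorum:
            return "pending"    # keep collecting community judgments
        if self.votes_punish / total >= self.supermajority:
            return "punish"     # handed to staff or automation to apply a penalty
        return "pardon"
```

The appeal of this structure is that judgment comes from peers rather than an opaque classifier; the cost, as the next paragraph notes, is that those peers are doing unpaid and often unpleasant work.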

These kinds of systems are valuable because they allow virtual citizens to play a role in the governance of their own societies. However, co-opting members of the community to do difficult, time-consuming, and emotionally laborious moderation work for free is not exactly an ethical business model. And if—or when—toxic hate groups flourish, it is difficult to pinpoint who should be responsible for dealing with them.

One way of addressing these obstacles is to hire community managers (CMs). Commonly employed by gaming and social VR companies to manage virtual communities, CMs are visible people who can help facilitate more proactive and democratic decisionmaking processes while keeping both users and developers of VR accountable. CMs can remind players of codes of conduct and can sometimes warn, suspend, or ban users; they can also bring player concerns back to the development team. CMs may have a place in the metaverse too, but only if we figure out how to treat them properly.

Often the first port of call for gaming communities, CMs are the (virtually) smiling faces that welcome new players, generate hype around a game, and convey messages between developers and players. But it would be a mistake to think their role is merely to market a product: As they guide users through a membership life cycle from wide-eyed visitors to respected elders, they also help set good examples for positive behavior, reinforce codes of conduct, and set the right tone for a community—the same way a community elder might do in the physical world.

Assigning community managers in VR spaces adds empathy and the all-important “human touch” to the governance process. By boosting a sense of belonging, responsibility, and human presence, CMs can—at least in theory—help minimize problematic behaviors brought about through anonymity and automation.

Unfortunately, CMs are currently incredibly undervalued, undertrained, and underpaid, and they frequently face a barrage of death threats, rape threats, and other forms of abuse from the very users they are hired to care for. If community managers are to play a role in governing the virtual worlds of the metaverse, we must ensure that this essential work is better supported and compensated. An overworked and uninformed CM is likely to do (and come to) more harm than good.

Although best practices are still being worked out, the Fair Play Alliance—a coalition of gaming companies that aims to foster healthy gaming communities—has shared a framework for disruption and harms in gaming that offers advice on managing communities alongside the development of penalty and reporting systems. Combined with adequate pay, evidence-informed training, and in-house emotional support, these kinds of resources will help put CMs in a much better position to serve virtual communities sustainably.

VR spaces are, at their core, designed spaces. As such, the mechanics of the digital environment are also central to governance. Over a decade ago, Nick Yee, social scientist and cofounder of the gaming research company Quantic Foundry, argued that a multiplayer game’s framework of rules and coded design—its “social architecture”—can shape the interactions we have in virtual worlds. And if we can design virtual worlds to enable antagonistic interactions, we can design them to facilitate prosocial ones too.

Such design choices can be quite subtle and unexpected. Yee noted that in the multiplayer game EverQuest, players who died in the game lost their loot and had to travel back to the site of their demise to retrieve it. This design feature helped facilitate altruistic behaviors, Yee suggested, because players had to ask one another for help in retrieving their lost items. In less playful VR spaces, one way of channeling this effect (along with more protective efforts) could be to encourage users to ask others for help with virtual tasks such as onboarding, moving through or altering the environment, or acquiring avatar flair, giving users opportunities to act on their more positive values.

To some extent, we’ve already started to see how the unique affordances of VR can benefit users through design in other ways. In response to Belamire’s account, the QuiVr developers implemented a power gesture: a hand motion that, much like a “superpower,” turns on a personal bubble that mutes offending avatars in the immediate vicinity and makes them disappear from the user’s view (and vice versa) until the user chooses to turn it off. While largely symbolic, this gesture shows how crucial it is for developers to take these issues seriously. Having control over one’s personal space matters in the virtually embodied realm of VR, and simple, intuitive hand gestures that let us immediately control who or what we see could empower users in any VR space in a way that is simply impossible in the physical world.
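The underlying mechanic is straightforward to sketch. The following Python snippet is an illustrative assumption rather than QuiVr’s actual code: when the gesture is recognized, any avatar within a chosen radius is mutually muted and hidden, and the effect persists until the user toggles the bubble off.

```python
from dataclasses import dataclass, field

BUBBLE_RADIUS = 2.0  # meters; a hypothetical value

@dataclass
class Avatar:
    name: str
    position: tuple    # (x, y, z) in world coordinates
    bubble_active: bool = False
    blocked: set = field(default_factory=set)  # avatars this one can neither see nor hear

def distance(a: Avatar, b: Avatar) -> float:
    return sum((p - q) ** 2 for p, q in zip(a.position, b.position)) ** 0.5

def on_power_gesture(user: Avatar, others: list) -> None:
    """Called when hand tracking recognizes the 'power gesture' (assumed hook)."""
    user.bubble_active = not user.bubble_active
    for other in others:
        if user.bubble_active and distance(user, other) <= BUBBLE_RADIUS:
            # Mute and hide in both directions until the user turns the bubble off.
            user.blocked.add(other.name)
            other.blocked.add(user.name)
        elif not user.bubble_active:
            user.blocked.discard(other.name)
            other.blocked.discard(user.name)
```

What matters in the design is less the geometry than the fact that the control sits in the harassed user’s own hands, instantly and without filing a report.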

To be sure, some proactive design approaches that work in games may not work in more serious VR spaces. For instance, encouraging players to endorse each other for good teamwork may reduce toxicity in games like Overwatch, but endorsing your colleagues in a VR work environment may feel like kindergarten at best or insidiously dystopian at worst (think MeowMeowBeenz).

Yet there’s still room for unconventional approaches to be further explored in VR: what if, for instance, we sentenced offending avatars to perform virtual community service, or undergo virtual mentoring or counseling programs? The idea of issuing virtual-world responses inspired by real-world penalties may sound absurd, but such approaches are not totally unheard of. The gaming platform Steam publicly labels the profiles of players who have been banned for cheating, and in 2015 the president of Daybreak Game Company invited cheaters to publicly apologize for their actions in order to be un-banned from the game H1Z1. As in the real world, virtual public shaming and incarceration present particular ethical concerns. But with careful scrutiny, more rehabilitative and restorative responses to virtual transgressions could have a meaningful place in the governance of the metaverse too.

As we explore how to govern users in VR, it is necessary to address the vital question of who is likely to be left out of these spaces. Existing biases have a nasty way of sneaking into our technological designs, resulting in virtual worlds that are hostile or inaccessible to particular groups. The physical requirements of VR can make it difficult for people with certain disabilities (such as visual impairments) to participate. Avatars are an anchor through which a person connects with and navigates a virtual space, but research in gaming reveals they are often designed in ways that misrepresent and exclude people of color. And as Belamire’s experience shows us, interactions in VR can be particularly harmful to women in ways that discourage them from participating at all.

Those who are left out of virtual worlds are often not so coincidentally underrepresented in virtual-world research and design teams as well. It is therefore imperative for us to acknowledge the barriers to inclusion that people face and promote diverse voices early in the development process. Positively, there have been increasing efforts to build more inclusive games that are also being translated to VR—The AbleGamers Charity, for instance, works with the gaming industry to make games more accessible to people with disabilities. These kinds of focused efforts are essential in helping us to avoid the reinforcement of existing divides and inequities in the VR metaverse.

Despite these challenges, it is imperative that we embrace—and demand a commitment to—our shared social responsibility to nurture flourishing and vibrant VR communities. A balanced approach to restrictions and penalties has an important role to play, as may real-world law. But we must be wary of relying too heavily on automated moderation, suspensions, and bans, which make up only one part of building healthy virtual worlds. “Out of sight, out of mind” is simply not a good enough maxim to live our virtual lives by. As our rich history of managing communities in multiplayer games shows, virtual governance can (and must) be much more than that.

By continuing to draw on our rich experiences in multiplayer gaming to explore the community-driven, inclusive, and empowering potentials of VR, we can help build digital communities we actually want to be a part of. The quality of our virtually-real lives depends on it.

