On a day like any other on the Internet, a few women woke up to the news that they were on sale. It is important to note that these were Muslim women. Still more important is the fact that these were outspoken Muslim women – activists, journalists, lawyers. It happened again, six months later. This time, the culprits were caught, but that was that. No larger conversation followed about systemic harm and its ubiquity in the virtual world.
The big question of our times is thus: what does it mean to be a person online?
It appears that harassing or harming someone online is made easier by the fact that no real body is involved. Seeking justice and accountability, however, is difficult precisely because – again – a “real” body was not involved. The second question, therefore, is this: does being a person with rights necessarily entail having a body, or being embodied, at all times?
Take online money fraud, by contrast. When theft takes place in the virtual realm, “real” money is lost, and the full force of the law descends upon the criminal behind the crime. In the larger scheme of things, money is more embodied online than people are. People are disembodied images, texts, pixels on a screen.
Tech overlords define for us what is and isn’t real, and credit wins over consciousness in this battle. It’s an à la carte morality: where there is harm, no body was involved; hence there is no harm, or no real harm-doer. Problem solved.
Amid all this, Facebook – now Meta – announced its plan to build a “metaverse.” Microsoft is on board with the idea too, as is Disney. A corporate trifecta could soon define the terms of our reality online. It would be easy to dismiss this as yet another techno gimmick that’s largely irrelevant to most of the world. But a closer look at what the metaverse really is gives us pause. It is virtual reality, where “avatars” of ourselves can occupy digital space, have interactions, work, and play as “us,” albeit in hyperreal simulation. It would, to put it simply, blur the boundaries between real and virtual more than ever before in our history. One needn’t be able to afford the Oculus headset required to access this virtual reality; many would argue that we already live in virtual reality. The metaverse is just another update to the user experience. But at what cost?
Gamers have navigated virtual reality for a while now, and have already experienced the pitfalls of occupying space in a largely unregulated environment. Take this user’s experience in 2016, when she was groped while playing a game: “His floating hand approached my body, and he started to virtually rub my chest. “Stop!” I cried… This goaded him on, and even when I turned away from him, he chased me around, making grabbing and pinching motions near my chest. Emboldened, he even shoved his hand toward my virtual crotch and began rubbing.”
Message boards would later hotly debate whether this was “real” sexual harassment. And yet, years later, a chillingly similar incident took place in Horizon Worlds, Meta’s newly launched VR social media platform.
“At the end of the day, the nature of virtual-reality spaces is such that it is designed to trick the user into thinking they are physically in a certain space, that their every bodily action is occurring in a 3D environment… It’s part of the reason why emotional reactions can be stronger in that space, and why VR triggers the same internal nervous system and psychological responses,” Katherine Cross, who researches online harassment at the University of Washington, told MIT Technology Review.
“We will effectively transition from people seeing us as primarily being a social media company to being a metaverse company,” Zuckerberg said a few months ago, announcing Facebook’s rebrand and pivot towards VR. But this was against the backdrop of alarming revelations on Facebook’s role in everything from misinformation, to fuelling hate speech in India, to genocide in Myanmar. If a platform whose interface is largely textual could do this, what can we expect from an “evolution” into its next form?
There is reason to worry, because the same companies that already profit from extracting data on social media are building the metaverse under the guise of a better digital experience. “What is the underlying economic model for AR and VR models? These companies will suck more and more user data, and they will build more incentives for people to spend more time in these environments,” notes Apar Gupta, director of the Internet Freedom Foundation. A look at the patents Meta is applying for is a good indication of this. It means that the polarizing behaviors enabled on social media could be incentivized in VR too, making it that much more unsafe for vulnerable people. Because when we step into virtual reality, we step into a world whose rules are defined from the top down.
“We barely have adequate protections or frameworks to protect people from harm, particularly marginalized groups within digital spaces as we know it… you then have something like the metaverse where the problem will become deeply amplified,” says Urvashi Aneja, founder of the Digital Futures Collective. Aneja explains that the problem is multifold. First, jurisdiction issues will complicate any frameworks of protection from hate speech, crimes, or other forms of violence in VR. “The way these things get prioritized is secondary to the engagement metric. What we would need… is ground rules and policies that are contextual to wherever the tech is being implemented,” says Divyansha Sehgal, from the Centre for Internet and Society.
The second is accountability. “It becomes very hard to kind of distinguish between who is human, who is bot, what is real, what is synthetic media,” Aneja adds.
Facebook has already furthered community harm in ways that the law has not yet caught up with. And yet, the emphasis is on censorship and content moderation – where liability is thin, and the problem can “go away” with just a click of a button.
Except, it’s not quite so simple. Accountability gets harder to define in any virtual setting. If so much harm can be done with mere images, as revenge porn and the “Sulli Deals” incidents have shown, how much more can be done to three-dimensional avatars of sentient people? There is an “update” in how embodied we can be online, with no corresponding update in how the law understands this new reality. Will our avatars have human rights? Will avatars that harm others be liable to the same punishments as they would offline? We still do not recognize harm to communities as a unique category of harm on social media. If we did, the “Sulli Deals” incident would have been dealt with as systematic disenfranchisement rather than individualized harassment.
The problem is that currently, there is confusion over what bodily integrity means in VR. The first articulation of this problem came in 1993, after an incident of violent sexual assault in a text-based virtual community. Journalist Julian Dibbell, in his essay “A Rape in Cyberspace,” notes that the facts of what happened get complicated “… for the simple reason that every set of facts in virtual reality (or VR, as the locals abbreviate it) is shadowed by a second, complicating set: the “real-life” facts. And while a certain tension invariably buzzes in the gap between the hard, prosaic RL facts and their more fluid, dreamy VR counterparts, the dissonance… is striking.”
“New technology is always received with a breathless and excited promise of techno-utopia, that then devolves into techno-dystopia because we aren’t paying attention to how the virtual world maps and mirrors social reality,” notes Isha Bhallamudi, gender and tech researcher from the University of California, Irvine.
The platforms are designed for upper-caste, white people – we don’t even have the answers for the analog or “text” version of this problem right now.
Bhallamudi notes how the current model of platforms is to put the onus on users to report unwanted activity and keep themselves safe – a burden likely to be exacerbated in VR. Moderators, for instance, are touted as “democratic intermediaries” in a relatively unregulated forum of discussion. But who moderates the moderators? As Sehgal points out, platforms like Discord and Steam have been known to host alt-right and Nazi content and discussions, raising questions about what safe spaces really mean in a context where real-life marginalization is easily transposed onto the virtual, but real-life protections are not.
Anonymity is another aspect of the problem of harm online that makes the question of accountability difficult to answer. “It allows no stakes in a conversation and makes the discourse a little worse. Anonymity can be taken to the next level in the metaverse,” Sehgal adds. And even if people can rely on anonymity to keep themselves safe, it once again puts the onus of safety on users themselves.
“We need to start thinking about digital spaces as extensions of our analog spaces, and the same rights and duties and responsibilities that apply in digital spaces must carry forth to analog spaces,” Aneja says. This is particularly true for India, she notes, where the transition to “digital” is pushed by the government as part of the country’s growth narrative. Not participating in the digital is becoming less optional.
In other words, the distinction between “real” and “virtual” is no longer quite so stark – indeed, it never has been. Since the advent of the pandemic, much of our lives are lived online. Still, experts note that we are very far from instituting even the most basic protections in the digital spaces we already have. We are further still from asking what new kinds of vulnerability the metaverse could bring about, let alone building frameworks that would apply to them.
Employees at Facebook are aware of this too.
“There should be transparency,” says Shweta Mohandas, from the Centre for Internet and Society. “The platform should clearly notify the user about their rights in the virtual world, the duties of the platform, the code of conduct and reporting mechanism, as well as state clearly how the users’ data and interactions will be processed.”
However, this would also mean reclaiming ownership of virtual space. As of right now, the metaverse is less democratic than real life in that it is governed by content moderation over any kind of cooperation or open communication. The adjudicators of harm are still the platforms, with very little room for a dialogic process of any kind when harm takes place. As media studies professor and video game designer Ian Bogost noted, “The metaverse was never a fantasy about virtual reality, but just one about power.”
“It is a near certainty that it will become mainstream,” says Gupta. The issue isn’t limited to the presence or absence of laws; the much larger problem is power itself. “Indian society is inherently tribalistic, patriarchal, caste-driven, with huge economic inequalities… This has manifested itself very clearly online.”
And it will continue to do so, in ever-expanding proportions. “[T]he metaverse itself is a place that is addictive, violent, and an enabler of our worst impulses…[it is an] odd vision built from a compendium of juvenile fantasies, perceived market opportunities, and overt dystopias,” writes Brian Merchant for Vice.
Yet the metaverse holds two promises simultaneously: it would make things much more real while, at the same time, providing an escape from the “real” real. If this is all confusing, it is because it is meant to be: developers define virtual reality, itself an oxymoronic phrase, nebulously, to drive curiosity and excitement about progress for the sake of progress.
Maya Indira Ganesh from the Cambridge Digital Humanities Lab argues that claims about the metaverse being “more real” are to be questioned. “The things that have happened to people on social media in the past 10 years are all real.”
Which is to say, we already live in virtual reality: “A place where there is a certain kind of projection or experience or existence that is separated from what you consider material — whether it’s your body or space or objects. To people 100 years ago, our lives are already the stuff of science fiction,” Ganesh says. Storytelling, cinema, and even the radio broadcasting speeches of monarchs and leaders were all very real to us, even if they were virtually shared. There is a tantalizing prospect of fantasy involving real psyches, making virtual reality a new beast to reckon with. And yet, our laws hardly reflect this.
Ganesh points to a way forward from this mess: regulate corporations, not content. She also cites regulation from the ground-up, rather than a top-down approach, as a strategy that could challenge who sets the rules of the new worlds we inhabit (and indeed, already do inhabit).
If we take our experiences, our minds, and our thoughts seriously, if we consider things we cannot touch and feel to be as “real” as those that we can, we might be better positioned to advocate for our rights as digital citizens rather than consumers. Many have begun this fight already – the question is, is anybody listening?