Are Safety Concerns of the Metaverse Real?
“Technology trust is a good thing, but control is a better one.” – Stephane Nappo
On October 28, 2021, Facebook CEO Mark Zuckerberg rebranded the umbrella company as ‘Meta’ and introduced the world to a new term: the ‘Metaverse’. Since then, the Metaverse has dominated everyday conversations, mostly among the younger generation. For the uninitiated, the Metaverse is a fully immersive digital universe built on a combination of technologies such as virtual reality, augmented reality, blockchain and 3D modelling. The term ‘Metaverse’ was first coined almost 30 years ago in Neal Stephenson’s 1992 science fiction novel Snow Crash, and now it is not just a reality but also a very big part of Web 3.0. The Metaverse is believed to help everyone connect better, letting them do things they had only dreamt of and making everything from working to learning more enjoyable. But, like any other technological innovation, this fascinating new technology also has grey areas that need to be addressed before it goes mainstream, chief among them: safety.
Harassment in virtual spaces is not new, especially for women and minorities. And even though the metaverse is just taking off, internet safety experts are already seeing signs of trouble. Tiffany Xingyu Wang, Chief Strategy and Marketing Officer at Spectrum Labs, predicts that the personal attacks and harassment that 41% of US internet users have experienced online will only get worse in virtual worlds.
Researchers from the Center for Countering Digital Hate (CCDH), a non-profit organization dedicated to analyzing and disrupting online hate and harassment, spent around 12 hours inside a virtual world to assess its privacy and safety, and the results were shocking. They recorded an infringement every SEVEN minutes, including instances of sexual content, racism, abuse, hate, homophobia and misogyny! But it does not end there. Many of these incidents occurred in the presence of minors, even though, in theory, anyone under 18 is not allowed in that space. In practice, however, very young kids are among the early adopters of this technology, which raises a grave concern.
In another horrifying incident, three to four male avatars in the metaverse verbally and sexually harassed a British woman, Nina Jane Patel, within 60 seconds of her joining the digital world, and even took photos of the incident. It is worth noting that Nina is an expert in this technology herself: she is the co-founder and Vice President of Metaverse Research at Kabuni, an immersive technology company based in the U.K. Yet she had this horrible experience before she could even put safety barriers in place.
Meta itself is aware that these virtual worlds can create a ‘toxic environment’ for women and minorities, and it has promised to protect these spaces from the malignancies that have plagued Facebook.
“An incident is just the tip of the iceberg, a sign of a much larger problem below the surface” – Dan Brown
Addressing Virtual Harassment
Going back to Nina’s case, reactions from people around the world ranged from supportive to dismissive of her experience: “don’t be stupid, it wasn’t real”, “a pathetic cry for attention”, “avatars don’t have lower bodies to assault”, “don’t choose a female avatar, it’s a simple fix.” We must remember that harassment is harassment, be it verbal, physical or even virtual. All forms of harassment are physiologically as well as psychologically disturbing and need to be addressed.
Virtual reality has essentially been designed to simulate real life in such a way that our mind and body cannot differentiate virtual experiences from real ones. In fact, Mary Anne Franks, President of the Cyber Civil Rights Initiative, noted in her paper on virtual and augmented reality that research indicates abuse in VR is “far more traumatic than in other digital worlds.” VR can trigger nervous-system and psychological responses, making emotional reactions stronger in these spaces. In the metaverse especially, several powerful features and concepts contribute to this realism:
- Immersion: The user feels that they are in another environment altogether.
- Active Presence: The technology incorporates specific personification features to give users the sense of ‘being there’.
- Embodiment: This contributes to the user’s feeling that their digital avatar is their physical body.
- Proteus Effect: The tendency of people to be affected by their virtual representations (avatars). Typically, humans shift their behaviour in accordance with their digital representations.
Although Meta has already launched its virtual social platform, Horizon Worlds, the company has not shared much about how it plans to enforce its safety protocols in VR. For now, Meta mostly relies on user blocks, mutes, and reports to flag Community Standards violations in VR, effectively shifting the onus of safety onto users themselves.
Horizon Worlds also currently has a feature called ‘Safe Zone’, which lets people activate a bubble around their avatar if they feel threatened. Users must complete an onboarding process before joining the virtual platform, which teaches them how to launch a Safe Zone, and regular reminders about the feature appear on screens and posters within Horizon Worlds. But the reality is that a person going through a shocking experience will often freeze and may not be able to immediately think about activating a Safe Zone or, even worse, may not be able to access it easily.
“We want everyone in Horizon Worlds to have a positive experience with safety tools that are easy to find—and it’s never a user’s fault if they don’t use all the features we offer,” Meta spokesperson Kristina Milian said. “We will continue to improve our UI and to better understand how people use our tools so that users are able to report things easily and reliably. Our goal is to make Horizon Worlds safe, and we are committed to doing that work.”
When companies try to address online harassment, their solution is generally to outsource it to the user, which is unfair and mostly doesn’t work. Some ideas that could make safety easy and accessible are:
- Having a universal signal, like a hand gesture, in virtual reality that could indicate that something is not right.
- Having an automatic personal distance unless two people mutually agree to be closer.
- Having training sessions to explicitly lay out norms mirroring those that already exist in the real world.
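To make the second idea concrete, here is a minimal sketch of how an automatic personal-distance rule might work. This is a hypothetical illustration, not Meta's actual implementation; the radius value, function names and consent flag are all assumptions for the example.

```python
import math

# Assumed default personal-space boundary, in metres (hypothetical value).
PERSONAL_SPACE_RADIUS = 1.2

def distance(a, b):
    """Euclidean distance between two (x, y, z) avatar positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def may_approach(pos_a, pos_b, mutual_consent=False):
    """Return True if avatar B may occupy its current position relative to A.

    Without mutual consent, B must stay outside A's personal-space bubble;
    the platform would otherwise push B back or stop its movement.
    """
    if mutual_consent:
        return True
    return distance(pos_a, pos_b) >= PERSONAL_SPACE_RADIUS

# A stranger 0.5 m away is blocked; the same position is allowed with consent.
print(may_approach((0, 0, 0), (0.5, 0, 0)))
print(may_approach((0, 0, 0), (0.5, 0, 0), mutual_consent=True))
```

The key design choice is that the safe default requires no action from the potential victim: closeness is opt-in by both parties, rather than something a user must react to under stress.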
Importance of Developing Policies and Setting Rules
The tech industry should see the growing number of virtual worlds as a chance to get it right this time. Privacy and safety should be built into the metaverse from the design stage, before the product even exists, and used as a differentiator to attract audiences. It is very important for tech companies to avoid repeating the errors made in Web 2.0.
There is a pressing need for every corporation, individual or organization running an immersive online community to prioritize writing solid policies and enforcing rules of behavior that protect residents against harassment and abuse. These protocols must also spell out consequences for people who break the rules. Indeed, there should be a shared code of conduct to regulate content on such platforms and establish trust and safety. Community policies should be viewed as more than just an insurance policy or an unavoidable cost center: they can be a reason why people join those communities, and they will also increase customer retention and reduce acquisition costs.
One major challenge in enforcing a shared code of conduct is: who is in charge here? There is no clear enforcing entity for such virtual worlds. Ahmer Inam, Chief AI Officer at Pactera Edge, sees potential in learning from the concerns about nuclear power that led to international treaties governing that technology, with trustworthiness as a central organizing principle. Similarly, a partnership between public and private entities could expand ethical AI charters to cover the metaverse and develop a shared set of rules for any metaverse world.
The metaverse will soon become a common social place because of the great experiences it has to offer, but for now it is in its initial phase and has many safety issues. We cannot expect a platform to be perfect immediately, but it is essential to take a zero-tolerance approach to harassment so we do not repeat the mistakes of the internet.
Until we see big changes, the metaverse will remain a problematic space. Therefore, it’s time to shake the system to shape the future starting now!