Conversation article: Protecting children in the metaverse – it’s easy to blame big tech but we all have a role to play

Professor Andy Phippen writes for The Conversation about child safety in virtual spaces…


Andy Phippen, Bournemouth University

In a recent BBC news investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist insults and a rape threat. The app in question, VRChat, is an interactive platform where users can create “rooms” within which people interact (in the form of avatars). The reporter saw avatars simulating sex, and was propositioned by numerous men.

The results of this investigation have led to warnings from child safety charities including the National Society for the Prevention of Cruelty to Children (NSPCC) about the dangers children face in the metaverse. The metaverse refers to a network of VR worlds which Meta (formerly Facebook) has positioned as a future version of the internet, eventually allowing us to engage across education, work and social contexts.

The NSPCC appears to put the blame and the responsibility on technology companies, arguing they need to do more to keep children safe in these online spaces. While I agree platforms could be doing more, they can’t tackle this problem alone.

Reading about the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding would be – to use the NSPCC’s words – “shocked” by the reporter’s experiences. Ten years ago, well before we’d heard the word “metaverse”, similar stories emerged around platforms including Club Penguin and Habbo Hotel.

These avatar-based platforms, where users interact in virtual spaces via a text-based chat function, were actually designed for children. In both cases, adult investigators posing as children were exposed to sexually explicit interactions.

The demands that companies do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technology, emerging risks and moral panic. Yet nothing changes.

It’s a tricky area

We’ve seen demands for companies to put age verification measures in place to prevent young people accessing inappropriate services. This has included proposals for social platforms to require verification that the user is aged 13 or above, or for pornography websites to require proof that the user is over 18.

If age verification were easy, it would have been widely adopted by now. If anyone can think of a way for all 13-year-olds to prove their age online reliably, without data privacy concerns, and in a way that’s easy for platforms to implement, there are many tech companies that would like to talk to them.

Similarly, policing the communication that occurs on these platforms won’t be achieved through an algorithm. Artificial intelligence is nowhere near clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive. And while there might be some scope for human moderation, monitoring all real-time online spaces would be impossibly resource-intensive.

The reality is that platforms already provide a lot of tools to tackle harassment and abuse. The trouble is few people are aware of them, believe they will work, or want to use them. VRChat, for example, provides tools for blocking abusive users, and the means to report them, which might ultimately result in the user having their account removed.

People will access the metaverse through technology like VR headsets.
wavebreakmedia/Shutterstock

We cannot all sit back and shout, “my child has been upset by something online, who is going to stop this from happening?”. We need to shift our focus from the notion of “evil big tech”, which really isn’t helpful, to looking at the role other stakeholders could play too.

If parents are going to buy their children VR headsets, they need to have a look at safety features. It’s often possible to monitor activity by having the young person cast what is on their headset onto the family TV or another screen. Parents could also check out the apps and games young people are interacting with prior to allowing their children to use them.

What young people think

I’ve spent the last two decades researching online safeguarding – discussing concerns around online harms with young people, and working with a variety of stakeholders on how we might better help young people. I rarely hear young people themselves demand that the government bring big tech companies to heel.

They do, however, regularly call for better education and support from adults in tackling the potential online harms they might face. For example, young people tell us they want discussion in the classroom with informed teachers who can manage the debates that arise, and of whom they can ask questions without being told “don’t ask questions like that”.

However, without national coordination, I can sympathise with any teacher who does not wish to risk complaints from outraged parents as a result of holding discussions on such sensitive topics.

I note the UK government’s Online Safety Bill, the legislation that policymakers claim will prevent online harms, contains just two mentions of the word “education” in 145 pages.

We all have a role to play in supporting young people as they navigate online spaces. Prevention has been the key message for 15 years, but this approach isn’t working. Young people are calling for education, delivered by people who understand the issues. This is not something that can be achieved by the platforms alone.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Facebook goes ‘Meta’: what does it mean in practice?

Dr Carina E I Westling and Dr Hongchuan Yu write about Facebook’s recent announcement and what the ‘Metaverse’ will entail… 

Facebook’s recent announcement about its new focus creates more questions than clarity. The new brand name suggests a desire for greater confluence which, in line with the legacy business model inherited from Facebook, means closer meshing with the totality of our personal and professional lives. How this will manifest is yet to be seen, but the devil will be in the detail, as Meta will create both challenges and opportunities for innovation in design and policy. With close attention to emerging technologies and the policy frameworks that support their implementation, researchers and educators at the Faculty of Media and Communication at Bournemouth University are collaborating with industry advisors to make our new programmes, BA Immersive Media and BSc Virtual and Augmented Reality, crucibles for responsible creative development.

The ‘embodied internet’ is an oxymoron, but virtual reality (VR) and, to some extent, augmented (AR) and mixed reality (collectively extended reality, or XR) technologies seek to produce an approximation of physical experience. With 5G, many scenarios of real-time interaction based on cloud computing can be fulfilled. This offers new possibilities to the creative industries through VR/AR/XR technologies, ostensibly to realise the ‘metaverse’: the convergence of our physical and digital lives. However, you cannot accelerate connectivity without proportionate risks of exposure.

Effective storytelling will need to be ethical 

Our research and teaching programmes are geared towards development of the human skills that drive excellent storytelling in and beyond games and experience design, and we are keenly aware of the changing policy landscape that is sure to follow in the wake of interactive VR/AR/XR. Since 2015-2016, the management of, and risks associated with, the type of personal data that is the bread and butter of all free-at-the-point-of-use, audience-facing digital platforms have been a top priority, and the opening up of VR/AR/XR technologies to real-time interactivity will raise the stakes further.

Meta’s vision, or rather proposition, for a technologically convergent interactive and social space is broad, meaning that audiences will comprise naïve users in everyday situations as well as seasoned users in professional situations, and every type of audience in between. At scale, services, social spaces and interactive storytelling designed for this virtual milieu will present new challenges to research and development, including known and ‘unknown unknown’ problems with data management and security. Delivery of complex interactive media environments with default open web connectivity will create a host of new attack surfaces for cybercriminals and digital mavericks. Public appetite for more exposure – particularly of children and vulnerable adult populations – to malign actors is about as great as their trust in the brand that Meta seeks to leave behind.

Broad adoption, sustainable development and effective storytelling in this domain will require that research, design and production are framed in a clear commitment to ethical principles and mitigation of risks to privacy and data security. Early publicity materials indicate an awareness of this, but Zuckerberg and co. still have to regain the trust of peers and public. That is not to say Meta is doomed to join Second Life: its reception in the industry press may have been on the chilly side, but the rebrand presents an opportunity to be more than a clean slate. We will need to see unflinching recognition of past errors and genuine steps taken to integrate data security with appropriate risk modelling and attention to scaling effects. But if Meta walks the walk, it may come to play a part in, and perhaps even lead, the ‘coming of age’ of social media.

High stakes 

As Meta, Facebook is planning to spend at least $10 billion on metaverse-related projects this year. Bloomberg Intelligence further predicts that “the global metaverse revenue opportunity could approach $800 billion in 2024”. Whether we greet such developments with enthusiasm or trepidation, it is clear that social media will see a step change, even if we cannot be certain of its nature.

Early VR technology was derived from computer graphics and relied on specialist hardware to deliver expert applications such as surgery training and planning, high-end games and flight simulators. In addition to 5G, recent advances in computer vision and machine learning (sometimes called AI), applied to VR/AR/XR technologies, may help realise their broad adoption, which is the Meta vision for a 3D, virtual, social space where you might share, in real time, experiences that aren’t feasible in the physical world.

Technology marketing has not always delivered on its promises, but innovation has created real change, and content producers will need to be aware of developments in this domain. As Cathy Hackl says: “If the internet and social media changed your business or changed the way you interact with people, then you should be paying attention to what Web 3.0 and the metaverse will do, because it will change those things as well.” We might speculate about effects on how we tell stories and socialise remotely, but we will almost certainly see this type of platform used as a productivity tool, made more relevant by imperatives to reduce travel and carbon footprints.

As with most predictions, the actuality is likely to be more prosaic than any utopias or dystopias we conjure up, but probably not unimportant. In the past decade and a half, social media have become a critical concern with real-world impacts. It will be interesting to see if Meta can finally shed Facebook’s unfortunate association with FaceMash, Zuckerberg’s jockish student experiment. Growing up is overdue.

By Dr Carina E I Westling and Dr Hongchuan Yu, Bournemouth University

This piece was originally published on BU’s LinkedIn page