Bournemouth University PhD student Md. Shafkat Hossain has been invited to attend the international Safety 2024 conference in India in September. The 15th World Conference on Injury Prevention & Safety Promotion (Safety 2024) will be held 2-4 September at the Taj Palace in New Delhi. This global event will focus worldwide attention on safety and injury prevention, gathering international experts in the field around a united goal: “Building a safer future for all: Equitable and sustainable strategies for injury and violence prevention”.
Shafkat will be presenting his PhD work to date under the title ‘Using Human-Centred Design (HCD) to develop community-led interventions to prevent drowning among children under the age of 2 in rural Bangladesh’. He has also been selected by Bloomberg Philanthropies as one of its Emerging Leaders in Drowning Prevention. The programme is designed to create a cohort of younger leaders to join national and international efforts to raise awareness, strengthen solutions, and build political commitment towards drowning prevention. Hosted by the Global Health Advocacy Incubator, it provides a unique opportunity for people like Shafkat to develop leadership skills in drowning prevention and to be part of a global community working to reduce drowning deaths. This first group of Emerging Leaders includes people from Bangladesh, Ghana, India, Uganda, the United States and Vietnam.
Shafkat’s PhD study is part of the interdisciplinary Sonamoni study. Sonamoni is coordinated by BU in collaboration with Centre for Injury Prevention and Research, Bangladesh (CIPRB), the University of the West of England, Bristol, the University of Southampton, Design Without Borders (DWB) in Uganda, and the Royal National Lifeboat Institution (RNLI). We are working to reduce drownings among newly-mobile children, generally under two years old. This £1.6m project has been made possible thanks to a grant from the National Institute for Health and Care Research (NIHR) through their Research and Innovation for Global Health Transformation programme. For more information about our ongoing research in Bangladesh, please visit the NIHR website.
In a recent BBC news investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist insults and a rape threat. The app in question, VRChat, is an interactive platform where users can create “rooms” within which people interact (in the form of avatars). The reporter saw avatars simulating sex, and was propositioned by numerous men.
The results of this investigation have led to warnings from child safety charities including the National Society for the Prevention of Cruelty to Children (NSPCC) about the dangers children face in the metaverse. The metaverse refers to a network of VR worlds which Meta (formerly Facebook) has positioned as a future version of the internet, eventually allowing us to engage across education, work and social contexts.
The NSPCC appears to put the blame and the responsibility on technology companies, arguing they need to do more to safeguard children’s safety in these online spaces. While I agree platforms could be doing more, they can’t tackle this problem alone.
Reading about the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding would be – to use the NSPCC’s words – “shocked” by the reporter’s experiences. Ten years ago, well before we’d heard the word “metaverse”, similar stories emerged around platforms including Club Penguin and Habbo Hotel.
These avatar-based platforms, where users interact in virtual spaces via a text-based chat function, were actually designed for children. In both cases, adults posing as children in order to investigate were exposed to sexually explicit interactions.
The demands that companies do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technology, emerging risks and moral panic. Yet nothing changes.
It’s a tricky area
We’ve seen demands for companies to put age verification measures in place to prevent young people accessing inappropriate services. This has included proposals for social platforms to require verification that the user is aged 13 or above, or for pornography websites to require proof that the user is over 18.
If age verification were easy, it would have been widely adopted by now. If anyone can think of a way for all 13-year-olds to prove their age online reliably, without data privacy concerns, and in a way that’s easy for platforms to implement, there are many tech companies that would like to talk to them.
Similarly, policing the communication that occurs on these platforms won’t be achieved through an algorithm. Artificial intelligence is nowhere near clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive. And while there might be some scope for human moderation, monitoring all real-time online spaces would be impossibly resource-intensive.
The reality is that platforms already provide a lot of tools to tackle harassment and abuse. The trouble is few people are aware of them, believe they will work, or want to use them. VRChat, for example, provides tools for blocking abusive users, and the means to report them, which might ultimately result in the user having their account removed.
We cannot all sit back and shout, “my child has been upset by something online, who is going to stop this from happening?”. We need to shift our focus from the notion of “evil big tech”, which really isn’t helpful, to looking at the role other stakeholders could play too.
If parents are going to buy their children VR headsets, they need to have a look at safety features. It’s often possible to monitor activity by having the young person cast what is on their headset onto the family TV or another screen. Parents could also check out the apps and games young people are interacting with prior to allowing their children to use them.
What young people think
I’ve spent the last two decades researching online safeguarding – discussing concerns around online harms with young people, and working with a variety of stakeholders on how we might better help them. I rarely hear young people themselves demand that the government bring big tech companies to heel.
They do, however, regularly call for better education and support from adults in tackling the potential online harms they might face. For example, young people tell us they want discussion in the classroom with informed teachers who can manage the debates that arise, and whom they can ask questions without being told “don’t ask questions like that”.
However, without national coordination, I can sympathise with any teacher who does not wish to risk complaints from, for example, outraged parents as a result of holding a discussion on such sensitive topics.
I note the UK government’s Online Safety Bill, the legislation that policymakers claim will prevent online harms, contains just two mentions of the word “education” in 145 pages.
We all have a role to play in supporting young people as they navigate online spaces. Prevention has been the key message for 15 years, but this approach isn’t working. Young people are calling for education, delivered by people who understand the issues. This is not something that can be achieved by the platforms alone.