
Conversation article: Should you give your child a ‘dumb’ phone? They aren’t the answer to fears over kids’ social media use

Professor Andy Phippen writes for The Conversation about children’s use of smartphones and technology, and why giving them ‘dumb’ phones with minimal features might not help…

Should you give your child a ‘dumb’ phone? They aren’t the answer to fears over kids’ social media use

dodotone/Shutterstock

Andy Phippen, Bournemouth University

Parents concerned about the possible dangers smartphone use might have for their children are turning to “dumb phones”. These are the brick-shaped or flip phones today’s parents might have had themselves as teenagers, only capable of making calls or sending text messages and lacking access to social media apps.

Phones available include a remake of the 1990s classic Nokia 3210, and new designs such as the recently released Barbie flip phone.

But handing children a “dumb phone” seems to be as much an exercise in nostalgia as a protective measure. Ultimately, young people will end up using smartphones in their social and working lives, and these devices have many useful features. It makes sense for them to learn to use smartphones with the support of the adults around them, in a nurturing environment.

Unhappiness among children and teenagers is often seen as being related to smartphone or social media use. Social psychologist Jonathan Haidt’s 2024 book The Anxious Generation suggests that there is a link between the rise in the use of smartphones by young people and an increase in youth mental health issues.

However, it is very difficult to demonstrate a causal link between a specific aspect of modern life and a specific public health concern, as responses to Haidt’s book point out.

Yes, young people use smartphones more than previous generations. But they are also growing up in a world experiencing a global pandemic, visible climate change and international conflict. They’re being told they will never have a job because AI will be doing it instead, and that they’ll never own a house because of price inflation.

It is very difficult in these social contexts to isolate one factor and claim this is what is causing a rise in mental health issues among young people.

Large and rigorous peer-reviewed studies have been conducted to explore the correlations between digital technology and children’s mental health. They rarely return a clear link. Some show positive correlations – use of digital tech leading to outcomes such as happiness, being treated with respect and positive learning experiences.

Children will end up using smartphones in their adult lives.
Daniel Hoz/Shutterstock

This doesn’t mean that we can say that smartphones are definitely a bad thing, or – conversely – that they have no negative impact on children. It just means that claims of causation are difficult to prove and irresponsible to make.

I have spent 20 years talking to young people about their use of digital technology. There are certainly risks and concerns. However, there are also many positive uses of this technology which, with the right guard rails, can enhance a child’s life.

While young people talk about concerns around popularity and “fear of missing out”, they also see value in accessible communication with friends and family, which is especially important for those who might live in isolated locations or have physical restrictions. And many say the main reason they would not raise concerns is fear that the adults around them might “freak out” and take their device from them.

Checks and balances

Hearing that seven-year-olds own and use smartphones sounds worrisome. But there is a difference between, for example, a child using their phone under their parents’ supervision to keep in touch with grandparents who live in another part of the country, and an unsupervised child interacting with strangers on social media.

Giving a child a smartphone does not mean allowing them complete freedom to use it however, and as often as, they like.

It is perfectly within a parent’s power to restrict the types of apps that are installed, monitor screen time and install software to make sure a young person’s interactions are healthy – as well as to talk to their child about social media use. Or, perhaps more simply, parents can set house rules so that a child can only use their smartphone for a certain amount of supervised time.

As their child gets older, parents can relax the restrictions and afford them greater privacy and responsibility in how they use the phone. Parents can still make sure their child knows that if something upsetting does occur, they can ask for help.

I have a friend and colleague who is fond of comparing learning to use technology with learning to ride a bike.

Do we give a child a bike on their seventh birthday, put them at the top of a hill, and tell them to figure it out for themselves? No, we help them learn, with safeguards in place, until they develop competence while also understanding the risks. The approach should be the same with digital technology.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Why bans on smartphones for teenagers could do more harm than good

Professor Andy Phippen writes for The Conversation about growing calls to stop young people having access to smartphones or social media…

Why bans on smartphones or social media for teenagers could do more harm than good

Jacob Lund/Shutterstock

Andy Phippen, Bournemouth University

There are growing calls for young people under the age of 16 to be banned from having smartphones or access to social media. The Smartphone Free Childhood WhatsApp group aims to normalise young people not having smartphones until “at least” 14 years old. Esther Ghey, mother of the murdered teenager Brianna Ghey, is campaigning for a ban on social media apps for under-16s.

The concerns centre on the sort of content that young people can access (which can be harmful and illegal) and how interactions on these devices could lead to upsetting experiences.

However, as an expert in young people’s use of digital media, I am not convinced that bans at an arbitrary age will make young people safer or happier – or that they are supported by evidence around young people’s use of digital technology.

In general, most young people have a positive relationship with digital technology. I worked with South West Grid for Learning, a charity specialising in education around online harm, to produce a report in 2018 based upon a survey of over 8,000 young people. The results showed that just over two thirds of the respondents had never experienced anything upsetting online.

Large-scale research on the relationship between social media and emotional wellbeing concluded there is little evidence that social media leads to psychological harm.

Sadly, there are times when young people do experience upsetting digital content or harm as a result of interactions online. However, they may also experience upsetting or harmful experiences on the football pitch, at a birthday party or playing Pokémon card games with their peers.

It would be more unusual (although not entirely unheard of) for adults to be making calls to ban children from activities like these. Instead, our default position is “if you are upset by something that has happened, talk to an adult”. Yet when it comes to digital technology, there seems to be a constant return to calls for bans.

We know from attempts to prevent other social harms, such as underage sex or access to drugs and alcohol, that bans do not eliminate these behaviours. We also know that bans mean young people will not trust adults’ reactions if they are upset by something and want to seek help.

Teenagers need to know they can talk to adults about their lives online.
Studio Romantic/Shutterstock

I recall delivering an assembly to a group of year six children (aged ten and 11) one Safer Internet Day a few years ago. A boy in the audience told me he had a YouTube channel where he shared video game walkthroughs with his friends.

I asked if he’d ever received nasty comments on his channel, and whether he’d talked to any staff at his school about them. He said he had received nasty comments, but would never tell a teacher because “they’ll tell me off for having a YouTube channel”.

The headteacher confirmed this after the assembly, saying that the school told young people not to do things on YouTube because it was dangerous. I suggested that supporting what was generally a positive experience might make the boy more confident to talk about negative comments – but I was met with confusion and the repeated insistence that “they shouldn’t be on there”.

Need for trust

Young people tell us that two particularly important things they need in tackling upsetting experiences online are effective education, and adults they can trust to talk to in the confidence that they will receive support. A 15-year-old experiencing abuse as a result of social media interactions would be unlikely to disclose it if they knew the first response would be: “You shouldn’t be on there, it’s your own fault.”

There is sufficient research to suggest that banning under-16s from having mobile phones and using social media would not be successful. Research into widespread youth access to pornography from the Children’s Commissioner for England, for instance, illustrates the failure of years of attempts to stop children accessing this content, despite the legal age to view pornography being 18.

The prevalence of hand-me-down phones and the second-hand market make it extremely difficult to be confident that every mobile phone contract accurately reflects the age of the user. It is a significant enough challenge for retailers selling alcohol to verify age face to face.

The Online Safety Act is bringing in online age verification systems for access to adult content. But it would seem, from guidance by the communications regulator Ofcom, that the goal is for platforms to demonstrate a duty of care, rather than to provide a perfect solution. And we know that age assurance (using algorithms to estimate someone’s age) is less accurate for under-13s than for older age groups.

By putting up barriers and bans, we erode trust between those who could be harmed and those who can help them. While these suggestions come with the best of intentions, sadly they are doomed to fail. What we should be calling for instead is better understanding from adults, and better education for young people.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: When to give your child their first mobile phone and how to keep them safe

Professor Andy Phippen answers some key questions for The Conversation about giving children mobile devices, based on his research into young people and the internet…

When to give your child their first mobile phone – and how to keep them safe

Pressmaster/Shutterstock

Andy Phippen, Bournemouth University

I spend my career researching young people and the internet: what they do online, what they think about it and how their views differ from those of their parents.

I often get questions from parents about their children’s internet use. One of the most common is when to get their child a mobile phone, as well as how to keep them safe when they have one. Here are my answers to some key questions.

How old should my child be when they get their first phone?

I’m afraid I often disappoint parents in my answer to this question by not giving them a definite number. But the key here is what your child is going to use the phone for – and what age might be suitable for that individual child.

According to a 2023 report by UK communications regulator Ofcom, 20% of three-year-olds now own a mobile phone. But this phone may just be used for taking pictures, playing simple games and making supervised video calls with family.

The more pertinent question is when children should have their own fully connected phone, which they can use unsupervised to contact others online.

When a child is of primary school age, it’s highly likely that they will be used to adult supervision in most aspects of their life. They will be at school, at home, with friends and trusted adults, or with other family members.

Their need to contact a distant adult may not be that great – but you will want to think about what the specific needs of your own child might be.

Typically, the transition from primary to secondary school is when children might be more distant from home, or involved in school activities or socialising with friends, where being able to contact home becomes more important. I have spoken to plenty of young people who talk about starting secondary school as the point where they first had their own phone.

How do I make sure they use a phone safely?

First of all, it’s important that if your child is going online – at whatever age and regardless of the device they’re using – you have a conversation with them about online safety.

Parents have a role to play in educating their children and making them aware of the risks that come with being online, as well as being mindful that most online experiences are not harmful.

I have carried out extensive research with young people on online harms. As part of this research, my colleagues and I developed a number of resources for parents, put together with the help of over 1,000 young people.

What these young people say most is that they want to know who to turn to when they need help. They want to be confident they will receive support, not a telling-off or confiscation of their phone. This means that a key first step is to reassure your child that they can come to you with any problems they encounter, and that you will help them without judgment.

It’s also important to discuss with your child what they can and can’t do with their device. This could mean, for instance, setting ground rules about which apps they can have installed on their phone, and when they should stop using their phone at the end of the day.

You should also explore the privacy settings for the apps that your child uses, in order to ensure that they cannot be contacted by strangers or access inappropriate content. The NSPCC has resources for parents on how to use privacy settings.

Should I check my child’s phone?

Sometimes parents ask me about whether they should be able to check a child’s device – either by physically looking at the phone or by using “safetytech”, software on another device that can access the communications on the child’s phone.

Open conversations about phone use are key.
Khorzhevska/Shutterstock

I believe it’s important that this is also something you discuss with your child. Trust is important to ensure that your child comes to you with any online issues, so if you want to monitor their phone, talk to them about it rather than doing so covertly.

Accessing a child’s device seems like reasonable parental supervision when they are of primary school age, in much the same way as a parent would check with another child’s parents before agreeing to let them visit their home.

However, as your child gets older, they might not want their parent to see all of their messages and online interactions. The UN Convention on the Rights of the Child clearly states that a child does have a right to privacy.

Should I track my child’s location through their phone?

I have spoken to some families that track each other’s devices in an open and transparent manner, and this is a decision for the family. However, I have also spoken to children who find it very creepy that a teenage friend is tracked by their parents.

The question here is whether parents are reassuring themselves that their child is safe – or whether they want to know what they are doing without them knowing. I had a particularly memorable conversation with someone who told me their friend was extremely upset because their daughter had changed device and so they could no longer track her. When I asked how old the daughter was, they said she was 22.

It’s also worth considering whether tech like this actually provides false reassurance. It may allow parents to know where their child is, but not necessarily whether they are safe.

As with monitoring a child’s phone, it is worth reflecting upon whether a surveillance approach creates the ideal conditions for them to come to you with problems, or whether this is more likely to be fostered by open conversations and an environment of mutual trust.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: ChatGPT isn’t the death of homework – just an opportunity for schools to do things differently

Professor Andy Phippen writes for The Conversation about how education can adapt to AI technology…

ChatGPT isn’t the death of homework – just an opportunity for schools to do things differently

Daisy Daisy/Shutterstock

Andy Phippen, Bournemouth University

ChatGPT, the artificial intelligence (AI) platform launched by research company OpenAI, can write an essay in response to a short prompt. It can solve mathematical equations – and show its working.

ChatGPT is a generative AI system: an algorithm that can generate new content from existing bodies of documents, images or audio when prompted with a description or question. It’s unsurprising that concerns have emerged about young people using ChatGPT and similar technology as a shortcut when doing their homework.

But banning students from using ChatGPT, or expecting teachers to scour homework for its use, would be shortsighted. Education has adapted to – and embraced – online technology for decades. The approach to generative AI should be no different.

The UK government has launched a consultation on the use of generative AI in education, following the publication of initial guidance on how schools might make best use of this technology.

In general, the advice is progressive and acknowledges the potential benefits of using these tools. It suggests that AI tools may have value in reducing teacher workload when producing teaching resources, in marking, and in administrative tasks. But the guidance also states:

Schools and colleges may wish to review homework policies, to consider the approach to homework and other forms of unsupervised study as necessary to account for the availability of generative AI.

While little practical advice is offered on how to do this, the suggestion is that schools and colleges should consider the potential for cheating when students are using these tools.

Nothing new

Past research on student cheating suggested that students’ techniques were sophisticated and that they felt remorseful only if caught. They cheated because it was easy, especially with new online technologies.

But this research wasn’t investigating students’ use of ChatGPT or any kind of generative AI. It was conducted over 20 years ago, part of a body of literature that emerged at the turn of the century around the potential harm newly emerging internet search engines could do to student writing, homework and assessment.

We can look at past research to track the entry of new technologies into the classroom – and to infer the varying concerns about their use. In the 1990s, research explored the impact word processors might have on child literacy. It found that students writing on computers were more collaborative and focused on the task. In the 1970s, there were questions on the effect electronic calculators might have on children’s maths abilities.

In 2023, it would seem ludicrous to state that a child could not use a calculator, word processor or search engine in a homework task or piece of coursework. But the suspicion of new technology remains. It clouds the reality that emerging digital tools can be effective in supporting learning and developing crucial critical thinking and life skills.

Get on board

Punitive approaches and threats of detection make the use of such tools covert. A far more progressive position would be for teachers to embrace these technologies, learn how they work, and make this part of teaching on digital literacy, misinformation and critical thinking. This, in my experience, is what young people want from education on digital technology.

Young people should learn how to use these online tools.
Ground Picture/Shutterstock

Children should learn the difference between acknowledging the use of these tools and claiming the work as their own. They should also learn when – and when not – to trust the information provided to them on the internet.

The educational charity SWGfL, of which I am a trustee, has recently launched an AI hub which provides further guidance on how to use these new tools in school settings. The charity also runs Project Evolve, a toolkit containing a large number of teaching resources around managing online information, which will help in these classroom discussions.

I expect to see generative AI tools being merged, eventually, into mainstream learning. Saying “do not use search engines” for an assignment is now ridiculous. The same might be said in the future about prohibitions on using generative AI.

Perhaps the homework that teachers set will be different. But as with search engines, word processors and calculators, schools are not going to be able to ignore the rapid advance of generative AI. It is far better to embrace and adapt to change than to resist (and fail to stop) it.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Children have been interacting in the metaverse for years – what parents need to know about keeping them safe

Professor Andy Phippen writes for The Conversation about the virtual worlds children access, and how parents can support their children in using them safely…

Children have been interacting in the metaverse for years – what parents need to know about keeping them safe

Frame Stock Footage/Shutterstock

Andy Phippen, Bournemouth University

The metaverse sounds like it could be a scary place. Recent headlines have highlighted the dangers to children of the metaverse – a generic term for the range of online virtual worlds, developed by different tech companies, in which users can interact. Children’s charities have raised concerns about its potential for harm.

Recently, Meta – Facebook’s parent company – announced that teenagers would be able to use its VR Horizon Worlds app in North America. In this online environment, users are represented by avatars and spend time in virtual worlds, making use of virtual reality (VR) headsets. Some politicians in the US have already voiced their unease. It is certainly possible that Meta could extend this access to teens elsewhere in the world.

It would be no surprise if parents were concerned about this technology and how it might affect their children. In fact, children are already online in the metaverse – and there are steps parents can take to understand this technology, the risks it may pose, and what they can do.

Avatars and online games

Perhaps the most famous current interactive world aimed at children is Roblox, an online platform that allows users to create avatars, play games, make their own games, and interact with others. Young people play games developed by other users – the most popular is currently Adopt Me!, in which players adopt animals and live with them in a virtual world.

This mix of gameplay, interaction with others and opportunity for creativity is a large part of why Roblox is so popular. While it can be played using VR headsets, the vast majority of interaction takes place on more traditional devices such as phones, tablets and laptops.

Another emerging platform, Zepeto, has a similar model of allowing users to create environments, access “worlds” developed by others, and chat with others within these environments. Some young people will interact solely with their own group of friends in a specific world; other worlds will allow interaction with people they don’t know.

However, there is a rich history of platforms that could be considered, in modern terminology, to be “metaverses”. One is Minecraft, perhaps the most popular platform before Roblox. Launched in 2011, Minecraft is a block-building game which also allows for interaction with other users.

Before Minecraft, there were other platforms such as multiplayer online games Club Penguin (launched 2005) and Moshi Monsters (launched 2008) which, while smaller in scope, still allowed young people to engage with others on online platforms with avatars they created. These games also attracted moral panics at the time.

While new terms such as the metaverse and unfamiliar technology like VR headsets might make us fear these things are new, as with most things in the digital world, they are simply progressions of what has come before.

And on the whole, the risks remain similar. Headsets in VR-based worlds do present new challenges in terms of how immersive the experience is, and how we might monitor what a young person is doing. But otherwise, there is little new in the risks associated with these platforms, which are still based around interactions with others. Children may be exposed to upsetting or harmful language, or they may find themselves interacting with someone who is not who they claim to be.

Parental knowledge

In my work with colleagues on online harms, we often talk about mitigating risk through knowledge. It is important for parents to have conversations with their children, understand the platforms they are using, and research the tools these platforms provide to help reduce the potential risks.

Most provide parental controls and tools to block and report abusive users. Roblox offers a wide range of tools for parents, ranging from being able to restrict who their children play with to monitoring a child’s interactions in a game. Zepeto has similar services.

As a parent, understanding these tools, how to set them up and how to use them is one of the best ways of reducing the risk of upset or harm to your child in these environments.

However, perhaps the most important thing is for parents to make sure their children are comfortable telling them about issues they may have online. If your child is worried or upset by what has happened on one of these platforms, they need to know they can tell you about it without fear of being told off, and that you can help.

It is also best to have regular conversations rather than confrontations. Ask your child’s opinion or thoughts on news stories about the metaverse. If they know you are approachable and understanding about their online lives, they are more likely to talk about them.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Why children don’t talk to adults about the problems they encounter online

Professor Andy Phippen writes for The Conversation about young people’s social media use and wellbeing…

Why children don’t talk to adults about the problems they encounter online

iSOMBOON/Shutterstock

Andy Phippen, Bournemouth University

“I don’t listen to adults when it comes to this sort of thing”, a 17-year-old told me.

We were discussing how digital technology affects his life, as part of a long-term project in the west of England that I carried out with colleagues to explore young people’s mental health – including the impact of digital technology on their emotional wellbeing.

There is a widespread perception that being online is bad for young people’s mental health. But when we began the project, we quickly realised that there was very little evidence to back this up. The few in-depth studies of social media use and children’s mental health find that impacts are small and that it is difficult to draw clear conclusions.

We wanted to find out if and how young people’s wellbeing was actually being affected in order to produce resources to help them. We talked to around 1,000 young people as part of our project. What we found was that there was a disconnect between what young people were worried about when it came to their online lives, and the worries their parents and other adults had.

One of the things young people told us was that adults tended to talk down to them about online harms, and had a tendency to “freak out” about these issues. Young people told us that adults’ views about online harms rarely reflected their own. They felt frustrated that they were being told what was harmful, rather than being asked what their experiences were.

Common concerns

The concerns the young people told us they had included bullying and other forms of online conflict. They were afraid of missing out on both online group interactions and real-life experiences others were showing in their social media posts. They worried that their posts were not getting as many likes as someone else’s.

But these concerns are rarely reflected in the media presentation of the harsher side of online harms, which has a tendency to focus on the criminal side of online abuse, such as grooming and the prevalence of online pornography. It also tends to describe social media use in language similar to that used to talk about addiction.

It is no surprise, therefore, that parents might approach conversations with young people with excessive concern and an assumption their children are being approached by predators or are accessing harmful or illegal content.

Young people and their parents’ concerns about online safety may not match up.
George Rudy/Shutterstock

We have run a survey with young people for several years on their online experiences. Our latest analysis was based on 8,223 responses. One of the questions we ask is: “Have you ever been upset by something that has happened online?”. While there are differences between age groups, we found the percentage of those young people who say “yes” is around 30%. Or, to put it another way, more than two-thirds of the young people surveyed had never had an upsetting experience online.

Meanwhile, the online experiences reported by the 30% who said they had been upset often didn’t tally with the extreme cases reported in the media. Our analysis of responses showed that this upset is far more likely to come from abusive comments by peers and news stories about current affairs.

This disconnect means that young people are reluctant to talk to adults about their concerns. They are afraid of being told off, that the adult will overreact, or that talking to an adult might make the issue worse. The adults they might turn to need to make it clear this won’t happen and that they can help.

How to help

There are three things that young people have consistently told us, over the duration of the project and in our previous work, that adults can do to help: listen, understand and don’t judge.

Conversations are important, as is showing an interest in young people’s online lives. However, those conversations do not have to be confrontational. If a media story about young people and online harms causes parents concern or alarm, the conversation does not have to start with: “Do you do this?” This can result in a defensive response and the conversation being shut down. It would be far better to introduce the topic with: “Have you seen this story? What do you think of this?”

Working in partnership with others, such as schools, is also important. If a parent has concerns, having a conversation with tutors can be a useful way of supporting the young person. The tutor might also be aware that the young person is not acting like themselves, or might have noticed changes in group dynamics among their peer group.

But, even if they are not aware of anything, raising concerns with them – and discussing from where those concerns arise – will mean both parents and school are focused in the same direction. It is important that young people receive both consistent messages and support. And schools will also be able to link up with other support services if they are needed.

Ultimately, we want young people to feel confident that they can ask for help and receive it. This is particularly important, because if they do not feel they can ask for help, it is far less likely the issue they are facing will be resolved – and there is a chance things might become worse without support.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Protecting children in the metaverse – it’s easy to blame big tech but we all have a role to play

Professor Andy Phippen writes for The Conversation about child safety in virtual spaces…

Protecting children in the metaverse: it’s easy to blame big tech, but we all have a role to play

Newman Studio/Shutterstock

Andy Phippen, Bournemouth University

In a recent BBC news investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist insults and a rape threat. The app in question, VRChat, is an interactive platform where users can create “rooms” within which people interact (in the form of avatars). The reporter saw avatars simulating sex, and was propositioned by numerous men.

The results of this investigation have led to warnings from child safety charities including the National Society for the Prevention of Cruelty to Children (NSPCC) about the dangers children face in the metaverse. The metaverse refers to a network of VR worlds which Meta (formerly Facebook) has positioned as a future version of the internet, eventually allowing us to engage across education, work and social contexts.

The NSPCC appears to put the blame and the responsibility on technology companies, arguing they need to do more to safeguard children’s safety in these online spaces. While I agree platforms could be doing more, they can’t tackle this problem alone.

Reading about the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding would be – to use the NSPCC’s words – “shocked” by the reporter’s experiences. Ten years ago, well before we’d heard the word “metaverse”, similar stories emerged around platforms including Club Penguin and Habbo Hotel.

These avatar-based platforms, where users interact in virtual spaces via a text-based chat function, were actually designed for children. In both cases, adults posing as children in order to investigate were exposed to sexually explicit interactions.

The demands that companies do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technology, emerging risks and moral panic. Yet nothing changes.

It’s a tricky area

We’ve seen demands for companies to put age verification measures in place to prevent young people accessing inappropriate services. This has included proposals for social platforms to require verification that the user is aged 13 or above, or for pornography websites to require proof that the user is over 18.

If age verification was easy, it would have been widely adopted by now. If anyone can think of a way that all 13-year-olds can prove their age online reliably, without data privacy concerns, and in a way that’s easy for platforms to implement, there are many tech companies that would like to talk to them.

In terms of policing the communication that occurs on these platforms, similarly, this won’t be achieved through an algorithm. Artificial intelligence is nowhere near clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive. And while there might be some scope for human moderation, monitoring of all real-time online spaces would be impossibly resource-intensive.

The reality is that platforms already provide a lot of tools to tackle harassment and abuse. The trouble is few people are aware of them, believe they will work, or want to use them. VRChat, for example, provides tools for blocking abusive users, and the means to report them, which might ultimately result in the user having their account removed.

People will access the metaverse through technology like VR headsets.
wavebreakmedia/Shutterstock

We cannot all sit back and shout, “my child has been upset by something online, who is going to stop this from happening?”. We need to shift our focus from the notion of “evil big tech”, which really isn’t helpful, to looking at the role other stakeholders could play too.

If parents are going to buy their children VR headsets, they need to have a look at safety features. It’s often possible to monitor activity by having the young person cast what is on their headset onto the family TV or another screen. Parents could also check out the apps and games young people are interacting with prior to allowing their children to use them.

What young people think

I’ve spent the last two decades researching online safeguarding – discussing concerns around online harms with young people, and working with a variety of stakeholders on how we might better help young people. I rarely hear young people themselves demand that the government bring big tech companies to heel.

They do, however, regularly call for better education and support from adults in tackling the potential online harms they might face. For example, young people tell us they want discussion in the classroom with informed teachers who can manage the debates that arise, and to whom they can ask questions without being told “don’t ask questions like that”.

However, without national coordination, I can sympathise with any teacher who does not wish to risk complaints from outraged parents as a result of holding discussions on such sensitive topics.

I note the UK government’s Online Safety Bill, the legislation that policymakers claim will prevent online harms, contains just two mentions of the word “education” in 145 pages.

We all have a role to play in supporting young people as they navigate online spaces. Prevention has been the key message for 15 years, but this approach isn’t working. Young people are calling for education, delivered by people who understand the issues. This is not something that can be achieved by the platforms alone.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.