
Conversation article: ChatGPT isn’t the death of homework – just an opportunity for schools to do things differently

Professor Andy Phippen writes for The Conversation about how education can adapt to AI technology…

ChatGPT isn’t the death of homework – just an opportunity for schools to do things differently


Andy Phippen, Bournemouth University

ChatGPT, the artificial intelligence (AI) platform launched by research company OpenAI, can write an essay in response to a short prompt. It can solve mathematical problems – and show its working.

ChatGPT is a generative AI system: an algorithm that can generate new content from existing bodies of documents, images or audio when prompted with a description or question. It’s unsurprising that concerns have emerged about young people using ChatGPT and similar technology as a shortcut when doing their homework.

But banning students from using ChatGPT, or expecting teachers to scour homework for its use, would be shortsighted. Education has adapted to – and embraced – online technology for decades. The approach to generative AI should be no different.

The UK government has launched a consultation on the use of generative AI in education, following the publication of initial guidance on how schools might make best use of this technology.

In general, the advice is progressive and acknowledges the potential benefits of using these tools. It suggests that AI tools may have value in reducing teacher workload when producing teaching resources, marking, and in administrative tasks. But the guidance also states:

Schools and colleges may wish to review homework policies, to consider the approach to homework and other forms of unsupervised study as necessary to account for the availability of generative AI.

While little practical advice is offered on how to do this, the suggestion is that schools and colleges should consider the potential for cheating when students are using these tools.

Nothing new

Past research on student cheating suggested that students’ techniques were sophisticated and that they felt remorseful only if caught. They cheated because it was easy, especially with new online technologies.

But this research wasn’t investigating students’ use of ChatGPT or any kind of generative AI. It was conducted over 20 years ago, as part of a body of literature that emerged at the turn of the century around the potential harm newly emerging internet search engines could do to student writing, homework and assessment.

We can look at past research to track the entry of new technologies into the classroom – and to infer the varying concerns about their use. In the 1990s, research explored the impact word processors might have on child literacy. It found that students writing on computers were more collaborative and focused on the task. In the 1970s, there were questions on the effect electronic calculators might have on children’s maths abilities.

In 2023, it would seem ludicrous to state that a child could not use a calculator, word processor or search engine in a homework task or piece of coursework. But the suspicion of new technology remains. It clouds the reality that emerging digital tools can be effective in supporting learning and developing crucial critical thinking and life skills.

Get on board

Punitive approaches and threats of detection make the use of such tools covert. A far more progressive position would be for teachers to embrace these technologies, learn how they work, and make this part of teaching on digital literacy, misinformation and critical thinking. This, in my experience, is what young people want from education on digital technology.

Young people should learn how to use these online tools. Ground Picture/Shutterstock

Children should learn the difference between acknowledging the use of these tools and claiming the work as their own. They should also learn how to judge whether or not to trust the information provided to them on the internet.

The educational charity SWGfL, of which I am a trustee, has recently launched an AI hub which provides further guidance on how to use these new tools in school settings. The charity also runs Project Evolve, a toolkit containing a large number of teaching resources around managing online information, which will help in these classroom discussions.

I expect to see generative AI tools eventually integrated into mainstream learning. Saying “do not use search engines” for an assignment is now ridiculous. The same might be said in the future about prohibitions on using generative AI.

Perhaps the homework that teachers set will be different. But as with search engines, word processors and calculators, schools are not going to be able to ignore their rapid advance. It is far better to embrace and adapt to change than to resist (and fail to stop) it.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Children have been interacting in the metaverse for years – what parents need to know about keeping them safe

Professor Andy Phippen writes for The Conversation about the virtual worlds children access, and how parents can support their children in using them safely.

Children have been interacting in the metaverse for years – what parents need to know about keeping them safe


Andy Phippen, Bournemouth University

The metaverse sounds like it could be a scary place. Recent headlines have highlighted the dangers to children of the metaverse – a generic term for the range of online virtual worlds, developed by different tech companies, in which users can interact. Children’s charities have raised concerns about its potential for harm.

Recently, Meta – Facebook’s parent company – announced that teenagers would be able to use its virtual reality (VR) app Horizon Worlds in North America. In this online environment, users are represented by avatars and spend time in virtual worlds, making use of VR headsets. Some politicians in the US have already voiced their unease. It is certainly possible that Meta could extend this access to teens elsewhere in the world.

It would be no surprise if parents were concerned about this technology and how it might affect their children. In fact, children are already online in the metaverse – and there are steps parents can take to understand this technology, the risks it may pose, and what they can do.

Avatars and online games

Perhaps the most famous current interactive world aimed at children is Roblox, an online platform that allows users to create avatars, play games, make their own games, and interact with others. Young people play games developed by other users – the most popular is currently Adopt Me!, in which players adopt animals and live with them in a virtual world.

This mix of gameplay, interaction with others and opportunity for creativity is one of the reasons Roblox is so popular. While it can be played using VR headsets, the vast majority of interaction takes place on more traditional devices such as phones, tablets and laptops.

Another emerging platform, Zepeto, has a similar model of allowing users to create environments, access “worlds” developed by others, and chat with others within these environments. Some young people will interact solely with their own group of friends in a specific world; other worlds will allow interaction with people they don’t know.

However, there is a rich history of platforms that could be considered, in modern terminology, to be “metaverses”. One is Minecraft, perhaps the most popular platform before Roblox. Launched in 2011, Minecraft is a block-building game which also allows for interaction with other users.

Before Minecraft, there were other platforms such as multiplayer online games Club Penguin (launched 2005) and Moshi Monsters (launched 2008) which, while smaller in scope, still allowed young people to engage with others on online platforms with avatars they created. These games also attracted moral panics at the time.

While new terms such as the metaverse and unfamiliar technology like VR headsets might make these platforms seem entirely new, as with most things in the digital world, they are simply progressions of what has come before.

And on the whole, the risks remain similar. Headsets in VR-based worlds do present new challenges in terms of how immersive the experience is, and how we might monitor what a young person is doing. But otherwise, there is little new in the risks associated with these platforms, which are still based around interactions with others. Children may be exposed to upsetting or harmful language, or they may find themselves interacting with someone who is not who they claim to be.

Parental knowledge

In my work with colleagues on online harms, we often talk about mitigating risk through knowledge. It is important for parents to have conversations with their children, understand the platforms they are using, and research the tools these platforms provide to help reduce the potential risks.

Most platforms provide parental controls and tools to block and report abusive users. Roblox offers a wide range of tools for parents, ranging from restricting who their children play with to monitoring a child’s interactions in a game. Zepeto has similar services.

As a parent, understanding these tools, how to set them up and how to use them is one of the best ways of reducing the risk of upset or harm to your child in these environments.

However, perhaps the most important thing is for parents to make sure their children are comfortable telling them about issues they may have online. If your child is worried or upset by what has happened on one of these platforms, they need to know they can tell you about it without fear of being told off, and that you can help.

It is also best to have regular conversations rather than confrontations. Ask your child’s opinion or thoughts on news stories about the metaverse. If they know you are approachable and understanding about their online lives, they are more likely to talk about them.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Why children don’t talk to adults about the problems they encounter online

Professor Andy Phippen writes for The Conversation about young people’s social media use and wellbeing…

Why children don’t talk to adults about the problems they encounter online


Andy Phippen, Bournemouth University

“I don’t listen to adults when it comes to this sort of thing”, a 17-year-old told me.

We were discussing how digital technology affects his life, as part of a long-term project in the west of England that I carried out with colleagues to explore young people’s mental health – including the impact of digital technology on their emotional wellbeing.

There is a widespread perception that being online is bad for young people’s mental health. But when we began the project, we quickly realised that there was very little evidence to back this up. The few in-depth studies of social media use and children’s mental health find that the effects are small, and that it is difficult to draw clear conclusions.

We wanted to find out if and how young people’s wellbeing was actually being affected in order to produce resources to help them. We talked to around 1,000 young people as part of our project. What we found was that there was a disconnect between what young people were worried about when it came to their online lives, and the worries their parents and other adults had.

One of the things young people told us was that adults tended to talk down to them about online harms, and had a tendency to “freak out” about these issues. Young people told us that adults’ views about online harms rarely reflected their own. They felt frustrated that they were being told what was harmful, rather than being asked what their experiences were.

Common concerns

The concerns the young people told us they had included bullying and other forms of online conflict. They were afraid of missing out on both online group interactions and real-life experiences others were showing in their social media posts. They worried that their posts were not getting as many likes as someone else’s.

But these concerns are rarely reflected in media coverage of online harms, which tends to focus on the criminal side of online abuse, such as grooming and the prevalence of online pornography. It also tends to describe social media use in language similar to that used to talk about addiction.

It is no surprise, therefore, that parents might approach conversations with young people with excessive concern and an assumption their children are being approached by predators or are accessing harmful or illegal content.

Young people and their parents’ concerns about online safety may not match up. George Rudy/Shutterstock

We have run a survey with young people for several years on their online experiences. Our latest analysis was based on 8,223 responses. One of the questions we ask is: “Have you ever been upset by something that has happened online?”. While there are differences between age groups, we found the percentage of those young people who say “yes” is around 30%. Or, to put it another way, more than two-thirds of the young people surveyed had never had an upsetting experience online.

Meanwhile, the experiences described by the 30% who said they had been upset often didn’t tally with the extreme cases reported in the media. Our analysis of responses showed that this upset is far more likely to come from abusive comments by peers and news stories about current affairs.

This disconnect means that young people are reluctant to talk to adults about their concerns. They are afraid of being told off, that the adult will overreact, or that talking to an adult might make the issue worse. The adults they might turn to need to make it clear this won’t happen and that they can help.

How to help

There are three things that young people have consistently told us over the duration of the project, and in our previous work, that adults can do to help. They are: listen and understand – don’t judge.

Conversations are important, as is showing an interest in young people’s online lives. However, those conversations do not have to be confrontational. If a media story about young people and online harms causes parents concern or alarm, the conversation does not have to start with: “Do you do this?” This can result in a defensive response and the conversation being shut down. It would be far better to introduce the topic with: “Have you seen this story? What do you think of this?”

Working in partnership with others, such as schools, is also important. If a parent has concerns, having a conversation with tutors can be a useful way of supporting the young person. The tutor might also be aware that the young person is not acting like themselves, or might have noticed changes in group dynamics among their peer group.

But, even if they are not aware of anything, raising concerns with them – and discussing from where those concerns arise – will mean both parents and school are focused in the same direction. It is important that young people receive both consistent messages and support. And schools will also be able to link up with other support services if they are needed.

Ultimately, we want young people to feel confident that they can ask for help and receive it. This is particularly important, because if they do not feel they can ask for help, it is far less likely the issue they are facing will be resolved – and there is a chance things might become worse without support.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Protecting children in the metaverse – it’s easy to blame big tech but we all have a role to play

Professor Andy Phippen writes for The Conversation about child safety in virtual spaces…

Protecting children in the metaverse: it’s easy to blame big tech, but we all have a role to play


Andy Phippen, Bournemouth University

In a recent BBC news investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist insults and a rape threat. The app in question, VRChat, is an interactive platform where users can create “rooms” within which people interact (in the form of avatars). The reporter saw avatars simulating sex, and was propositioned by numerous men.

The results of this investigation have led to warnings from child safety charities including the National Society for the Prevention of Cruelty to Children (NSPCC) about the dangers children face in the metaverse. The metaverse refers to a network of VR worlds which Meta (formerly Facebook) has positioned as a future version of the internet, eventually allowing us to engage across education, work and social contexts.

The NSPCC appears to put the blame and the responsibility on technology companies, arguing they need to do more to safeguard children’s safety in these online spaces. While I agree platforms could be doing more, they can’t tackle this problem alone.

Reading about the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding would be – to use the NSPCC’s words – “shocked” by the reporter’s experiences. Ten years ago, well before we’d heard the word “metaverse”, similar stories emerged around platforms including Club Penguin and Habbo Hotel.

These avatar-based platforms, where users interact in virtual spaces via a text-based chat function, were actually designed for children. In both cases, adults posing as children in order to investigate were exposed to sexually explicit interactions.

The demands that companies do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technology, emerging risks and moral panic. Yet nothing changes.

It’s a tricky area

We’ve seen demands for companies to put age verification measures in place to prevent young people accessing inappropriate services. This has included proposals for social platforms to require verification that the user is aged 13 or above, or for pornography websites to require proof that the user is over 18.

If age verification was easy, it would have been widely adopted by now. If anyone can think of a way that all 13-year-olds can prove their age online reliably, without data privacy concerns, and in a way that’s easy for platforms to implement, there are many tech companies that would like to talk to them.

Similarly, policing the communication that occurs on these platforms won’t be achieved through an algorithm. Artificial intelligence is nowhere near clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive. And while there might be some scope for human moderation, monitoring all real-time online spaces would be impossibly resource-intensive.

The reality is that platforms already provide a lot of tools to tackle harassment and abuse. The trouble is few people are aware of them, believe they will work, or want to use them. VRChat, for example, provides tools for blocking abusive users, and the means to report them, which might ultimately result in the user having their account removed.

People will access the metaverse through technology like VR headsets. wavebreakmedia/Shutterstock

We cannot all sit back and shout, “my child has been upset by something online, who is going to stop this from happening?”. We need to shift our focus from the notion of “evil big tech”, which really isn’t helpful, to looking at the role other stakeholders could play too.

If parents are going to buy their children VR headsets, they need to have a look at safety features. It’s often possible to monitor activity by having the young person cast what is on their headset onto the family TV or another screen. Parents could also check out the apps and games young people are interacting with prior to allowing their children to use them.

What young people think

I’ve spent the last two decades researching online safeguarding – discussing concerns around online harms with young people, and working with a variety of stakeholders on how we might better support them. I rarely hear young people themselves demand that the government bring big tech companies to heel.

They do, however, regularly call for better education and support from adults in tackling the potential online harms they might face. For example, young people tell us they want discussion in the classroom with informed teachers who can manage the debates that arise, and whom they can ask questions without being told “don’t ask questions like that”.

However, without national coordination, I can sympathise with any teacher who does not wish to risk complaints from, for example, outraged parents as a result of holding a discussion on such sensitive topics.

I note the UK government’s Online Safety Bill, the legislation that policymakers claim will prevent online harms, contains just two mentions of the word “education” in 145 pages.

We all have a role to play in supporting young people as they navigate online spaces. Prevention has been the key message for 15 years, but this approach isn’t working. Young people are calling for education, delivered by people who understand the issues. This is not something that can be achieved by the platforms alone.

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.