
Conversation article – Tackling in children’s rugby must be banned to curb dementia risks

Rugby World Cup winners have joined a chorus of voices calling for tackling in the sport to be reduced, in a bid to stem the growing number of brain injuries afflicting many of its former players.

When 42-year-old Rugby World Cup winner Steve Thompson announced that he could not remember the tournament because his brain had been left too damaged by his career, he highlighted that rugby, in its current state, is not fit for contemporary society.

Invented in the 1800s, when safety was far less of a concern, rugby has been resistant to change. But this week, Thompson and 80 other high-profile former rugby players announced that they are living with dementia, with many experiencing symptoms as early as their 40s.

Another former England player, Michael Lipman, said:

If I knew then what I know now, in terms of how I’m feeling, and what my wife and family go through on a daily basis, I definitely would have been a hell of a lot more careful.

Players are suing several governing bodies, including World Rugby and the Rugby Football Union. The lawsuit, which is in its infancy, will no doubt grow in the number of claimants.

World Rugby responded to the lawsuit with a statement, saying it “takes player safety very seriously and implements injury-prevention strategies based on the latest available knowledge, research and evidence”.

Professional rugby will have its reckoning in the courts. But if the impact of tackling on the brain is severe enough that devoted rugby heroes are suing their former employer, policies need to be drastically revised – and soon – particularly for children. The first thing the sport must do is protect young players by banning tackling for under-18s and transitioning to touch rugby.

In denial

Chronic traumatic encephalopathy (CTE) is not new. It was first described in boxers in the 1920s, when it was called “punch drunk syndrome”. But now research is proving what scientists, players and their families have long claimed – that repeated collisions cause permanent damage to the brain.

When England’s 1966 World Cup-winning football heroes began to be diagnosed with dementia, the football world took notice. Now it is England’s World Cup rugby heroes that are suffering – and suffering younger. The FA banned heading the ball in training for children up to the age of 12, and severely restricted it thereafter. It’s time for the rugby unions to react in the same way.


Read more: NFL concussion lawsuit payouts reveal how racial bias in science continues

But whereas football can remove heading from the game, rugby is predicated on collision. As one journalist noted, the only way to make rugby safe in its current format is to stop playing it. And earlier this year, researchers linked with England’s Rugby Football Union found that their sport exposes players to more head trauma than other sports.

The case of rugby is more reminiscent of what happened to the National Football League (NFL) in the US after the discovery that players were at increased risk of long-term neurological conditions, particularly CTE. Scores of players sued the NFL and received a US$1bn payout.

The NFL has, for now, survived. World Rugby has insurance, so it might too. Yet the lawsuit is only one danger to the sport: fear over children playing the game will no doubt be rugby’s biggest threat.

No more half measures

Sporting bodies can no longer take half measures: policy must evolve to protect the huge numbers of children playing rugby. Children already receive legal protection from a long list of other known harms – smoking and alcohol use, for example.


Read more: Concussion can accelerate ageing of the brain – research from the rugby pitch

Both football and rugby are regularly played by children, particularly in school PE. But whereas children under 12 are not permitted to head the ball in football practice, they can tackle another player in rugby training. And while children over 12 are permitted to head the ball only five times a month in football, they can be tackled by a player twice their size as often as the PE teacher decides.

Experts are now calling for tackling to be removed from the sport for children – and curtailed in practice for adults. This means that children should play touch rugby until they are 18. They can then make an informed decision to transition to tackle rugby or continue with touch when they are old enough.

Research shows that touch rugby is rising in popularity and has better health outcomes for children. But calls for bans on tackling in compulsory school rugby have gone unheeded for many years.

History shows that industries respond to health crises when they are forced to do so – either through legal cases or government legislation. A key example is how the tobacco companies were forced to stop denying the harmful effects of smoking in the 1990s. Rugby is no different. Public pressure and court cases may drive change at some level but legislation is needed to protect players, particularly children.

In the US, the Concussion Legacy Foundation has launched the “tackling can wait” campaign for American Football. It’s time for the UK to follow and protect its children from brain injury by banning tackling in youth rugby. It will be for the courts and the players’ unions to determine how much tackling adults can do – but if they have any sense, they will heed the warnings of those World Cup heroes.

Eric Anderson, Professor of Masculinities, Sexualities and Sport, University of Winchester; Adam John White, Lecturer, Oxford Brookes University, and Keith Parry, Deputy Head Of Department in Department of Sport & Events Management, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article – Football and dementia: heading must be banned until the age of 18

Shutterstock/Wallenrock

Keith Parry, Bournemouth University; Eric Anderson, University of Winchester, and Howard Hurst, University of Central Lancashire

Alarm bells are ringing in sport about the risk of a group of chronic, neurodegenerative diseases commonly understood as dementia. An increasingly large body of evidence identifies small, repetitive collisions of the brain inside the skull as the cause of these diseases.

More high-profile players from England’s 1966 World Cup-winning squad are being diagnosed with dementia, and heading the football is to blame. It is now time for a blanket ban on heading until the age of 18; from then on it should be closely monitored and reduced.

It is not just the big collisions – the ones that end with players being carried off the pitch or taken to hospital for tests – that appear to be causing the problem. It is the small, daily collisions, the ones which happen routinely. Research has found that one particular form of dementia (known as chronic traumatic encephalopathy or CTE) seems to exist only among those who, as part of routine activities, incur these regular assaults to the brain.


This issue was touched upon in the inaccurately titled Will Smith movie Concussion (the disease stems from thousands of small hits, not one big one) and the Netflix documentary Killer Inside, about the NFL player Aaron Hernandez, who suffered from CTE. Indeed, recent research on American football has shown that 3.5 years of play doubles the chances of dementia.

This issue is now gaining attention in the UK, with research showing a shift in attitudes in rugby union, and within the “Beautiful Game” as well.

Repetitive impacts

Jeff Astle, a member of England’s 1970 World Cup squad, became the first British footballer confirmed to have died from CTE – classed as an industrial injury. Astle’s family had long claimed it was heading the ball that was to blame. But it was only when England’s 1966 World Cup-winning heroes began to be diagnosed with dementia that the football world really took notice.

This link cannot be dismissed as the result of the older, heavier balls that were replaced by lighter balls in recent years. That is a myth: both older and newer balls weigh 14-16oz. And while older balls got heavier when wet, they travelled more slowly and were less likely to be kicked to head height in games.

Recent studies show that heading the ball, even just 20 times in practice, causes immediate and measurable alterations to brain functioning. These results have been confirmed in other heading studies and are consistent with research on repetitive impacts that occur from other sports such as downhill mountain biking, resulting from riding over rough terrain.


Read more: Tour de France: does pro-cycling have a concussion problem?

More worryingly, in a large study of former professional footballers in Scotland, players were significantly more likely than matched controls both to be prescribed dementia medications and to die from dementia – with a 500% increase in Alzheimer’s.

These findings finally pressured the FA into changing the rules for youth football. In February 2020, the FA denied direct causation but followed what America had done five years earlier and changed its guidelines concerning heading the ball.

The current guidelines don’t stop children from heading the ball in matches, but they do forbid heading the ball as part of training until the age of 12 – when it is gradually introduced. These measures do not go far enough.

A new campaign called Enough is Enough, with an accompanying seven-point charter, was launched in November, calling for a radical intervention into heading in football. Former England captains Wayne Rooney and David Beckham have supported it, while 1966 legend Sir Geoff Hurst has also backed a ban on kids heading the ball.

And the players’ union, the PFA, has now called for heading in training by professional players to be reduced and monitored.

The demands in this charter will be costly, as they include aftercare for those with dementia and more expensive research into the issue. But the most significant demand is to protect professional players from dementia by severely limiting header training: no more than 20 headers in any training session, with a minimum of 48 hours between sessions involving heading.

These progressive policies should not be delayed by those in the sport, such as the medical head of world players’ union Fifpro, Dr Vincent Gouttebarge, who claimed that more research is required. Governing bodies can no longer take half measures or call for further discussion. This discussion has been taking place for 50 years.

Bring in the ban

Brain trauma in sport is not just a medical question – it is a public health crisis. If the evidence is strong enough that the PFA has advocated “urgent action” to reduce heading in training for adult athletes, then heading policies for children – in both training and matches – need to be drastically revised as a matter of urgency.

While media attention focuses largely on the tragedy of lost football heroes, this is a much larger problem for youth players. Less than 0.01% of the people who play football in this country play at the professional level – but almost half of all children aged 11-15 play the game.

If children are permitted to head the ball between the ages of 12 and 18, this means six years of damaging behaviour. Children are not able to make informed decisions and need to be protected. There is no logical reason for the ban on heading footballs in training to stop at the age of 12. Headers can wait until 18. The sport will survive just fine without them.

Keith Parry, Deputy Head Of Department in Department of Sport & Events Management, Bournemouth University; Eric Anderson, Professor of Masculinities, Sexualities and Sport, University of Winchester, and Howard Hurst, Senior lecturer in Sport, Exercise and Nutrition Sciences, University of Central Lancashire

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Resist the temptation to see Dominic Cummings as a svengali

For many cabinet ministers, Dominic Cummings’ departure from 10 Downing Street will be seen as an opportunity for a reset. A controversial figure from the start, the hope is that Prime Minister Boris Johnson will pursue a different style of government without the influence of his chief adviser.

Cummings raised eyebrows with his strong views on the need for civil service reform and his call for misfits and weirdos with odd skills to join the Downing Street team. His abrasiveness has caused no end of problems for Johnson. And his decision to break lockdown rules earlier this year, while the rest of the country stayed home, wreaked havoc on Johnson’s ability to enforce coronavirus restrictions. But we often slide into thinking of Cummings as a svengali and of Johnson as being in his thrall – as opposed to being his boss.

Describing Cummings in this way is part of a wider discourse regarding special advisers and spin doctors which has pervaded UK politics for some years. In the early days of Tony Blair’s New Labour government, Peter Mandelson, the architect of party reform, was characterised widely as a svengali.

The idea of the svengali comes from a character in George du Maurier’s 1894 novel Trilby. Despite being an antisemitic caricature, the term svengali is recognised by the Oxford English Dictionary as describing “a person who exercises a controlling or mesmeric influence on another, especially for a sinister purpose”.

Like the original fictional Svengali, Mandelson was characterised in cartoons as a spider. Journalist Quentin Letts described him as being “infamous as a dripper of poison, a man to fear, qualities which have caused division and loathing in his own party”.

Alastair Campbell, Blair’s spin doctor, was given similar attention. He was nicknamed the svengali of spin and described as the man whispering in the prime minister’s ear – the real deputy prime minister, despite being unelected and unaccountable.

Damian McBride, Gordon Brown’s director of communications, was exposed for planning an anti-Conservative smear campaign, and yet somehow managed to return to Downing Street as an adviser. Theresa May’s special advisers Nick Timothy and Fiona Hill were characterised as a “toxic clique” responsible both for division within the party and her disastrous performance in the 2017 general election.

When advisers fall, their every dark act is exposed and their demise celebrated. Meanwhile the political leaders are given a second chance. But is it fair to pin the failures of a government on an individual appointed by its leader?

In du Maurier’s novel Trilby, the title character is a naive half-Irish laundress in Paris searching for love. Svengali attempts to make her a star, and she falls under his spell, enthralled by the promise of fame and fortune. Under hypnosis, she is convinced she has talent, but as his influence wanes she finds herself exposed on stage. Svengali and Trilby both meet a tragic end, the latter dying clutching a picture of her erstwhile guru.

Poor, vulnerable Boris

Painting special advisers as svengalis allows the political leader to be portrayed as an innocent at the mercy of their gurus. It enables them to appear heroic when they are finally freed from their clutches. But this is essentially a piece of spin in itself. Political leaders from Blair to Johnson hire these figures because of their expertise and skills – and often because they have personal relationships with them. None of Mandelson, Campbell or Cummings is a hypnotist able to control the mind of his political master. They are appointed due to a shared worldview and, like any adviser, make convincing claims to have the qualities and expertise to help the leader meet their political goals.

While the individuals are often flawed, we should view them not as svengalis but as fall guys: the ones who take the blame when the flaws in the machine of government are exposed. Cummings’ exit may be a source of celebration, but will the next phase of the Johnson government really be more in touch with the people? Recent history suggests not. Blair post-Campbell, and May after the exit of Timothy and Hill, fared no better in the court of public opinion. Johnson, too, may struggle to find a new team to reset the image of his governing style.

Darren Lilleker, Professor of Political Communication, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: When did humans first go to war?

Cain and Abel.
Palma il Giovane

Martin Smith, Bournemouth University and John Stewart, Bournemouth University

When modern humans arrived in Europe around 40,000 years ago, they made a discovery that was to change the course of history.

The continent was already populated by our evolutionary cousins, the Neanderthals, which recent evidence suggests had their own relatively sophisticated culture and technology. But within a few thousand years the Neanderthals were gone, leaving our species to continue its spread to every corner of the globe.

Precisely how Neanderthals became extinct remains a subject of fierce debate among researchers. The two main explanations given in recent years have been competition with the recently arrived modern humans and global climate change.

The persistence of Neanderthal genetic material in all modern people outside of Africa shows the two species interacted and even had sex. But it’s possible that there were other kinds of interactions as well.

Some researchers have suggested that competition for resources such as prey and raw materials for stone tools may have taken place. Others have proposed that violent interactions and even warfare took place, and that this may have caused the Neanderthals’ demise.

This idea might seem compelling, given our species’ violent history of warfare. But proving the existence of early warfare is a problematic (although fascinating) area of research.

War or murder?

New studies keep moving the threshold at which there is evidence for human warfare progressively earlier. But finding such evidence is fraught with problems.

Only preserved bones with injuries from weapons can give us a secure indication of violence at a given time. But how do you separate examples of murder or a family feud from prehistoric “war”?

Preserved skeletons provide the best evidence of early warfare.
Thomas Quine/Wikimedia, CC BY

To an extent, this question has been resolved by several examples of mass killing, where whole communities were massacred and buried together at a number of European sites dating to the Neolithic period (about 12,000 to 6,000 years ago, when agriculture first emerged).

For a while, these discoveries appeared to have settled the question, suggesting that farming led to a population explosion and pressure for groups to fight. However, even earlier instances of group killing suggested by the bones of hunter-gatherers have re-opened the debate.

Defining warfare

A further challenge is that it is very difficult to arrive at a definition of war applicable to prehistoric societies, without becoming so broad and vague that it loses meaning. As social anthropologist Raymond Kelly argues, while group violence may take place among tribal societies, it is not always regarded as “war” by those involved.

For example, in the dispensation of justice for homicide, witchcraft or other perceived social deviance, the “perpetrator” might be attacked by a dozen others. However, in such societies acts of warfare also commonly involve a single individual being ambushed and killed by a coordinated group.

Both scenarios essentially look identical to an outside observer, yet one is regarded as an act of war while the other is not. In this sense, war is defined by its social context rather than simply by the numbers involved.

A key point is that a very particular kind of logic comes into play where any member of an opposing group is seen as representing their whole community, and so becomes a “valid target”. For example, one group might kill a member of another group in retribution for a raid that the victim wasn’t involved in.

In this sense, war is a state of mind involving abstract and lateral thinking as much as a set of physical behaviours. Such acts of war may then be perpetrated (usually by males) against women and children as well as men, and we have evidence of this behaviour among skeletons of early modern humans.

Fossil record

So what does all this mean for the question of whether modern humans and Neanderthals went to war?

There is no doubt that Neanderthals engaged in and were the recipients of acts of violence, with fossils showing repeated examples of blunt injuries, mostly to the head. But many of these predate the appearance of modern humans in Europe and so cannot have occurred during meetings between the two species.

Similarly, among the sparse fossil record of early anatomically modern humans, various examples of weapon injuries exist, but the majority date to thousands of years after the Neanderthals’ disappearance.

Where we do have evidence of violence towards Neanderthals, it is almost exclusively among male victims. This makes it more likely to represent competition between males than “warfare”.

While there is no doubt Neanderthals committed violent acts, the extent to which they were capable of conceptualising “war” as modern human cultures understand it is debatable. It is certainly possible that violent altercations took place when members of the small, scattered populations of these two species came into contact (although we have no conclusive evidence of this), but these cannot realistically be characterised as warfare.

Certainly, we can see a pattern of violence-related trauma in modern human skeletons from the Upper Palaeolithic period (50,000 to 12,000 years ago) that remains the same into the more recent Mesolithic and Neolithic times. However, it is not at all clear that Neanderthals follow this pattern.

Neanderthals probably struggled to survive in colder, more open habitats.
Pixabay

On the bigger question of whether modern humans were responsible for the extinction of Neanderthals, it’s worth noting that Neanderthals in many parts of Europe seem to have gone extinct before our species had arrived. This suggests modern humans can’t be completely to blame, whether through war or competition.

However, what was present throughout the period was dramatic and persistent climate change that appears to have reduced the Neanderthals’ preferred woodland habitats. Modern humans, although they had only recently left Africa, seem to have been more adaptable to different environments, and so better at dealing with the increasingly common colder, open habitats that may have challenged Neanderthals’ ability to survive.

So although the first modern Europeans may have been the first humans capable of organised warfare, we can’t say this behaviour was responsible or even necessary for the disappearance of Neanderthals. They may have simply been the victims of the natural evolution of our planet.

Martin Smith, Principal Academic In Forensic and Biological Anthropology, Bournemouth University and John Stewart, Associate Professor of Evolutionary Palaeoecology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Expanding marine protected areas by 5% could boost fish yields by 20% – but there’s a catch

Sweetlips shoal in the Raja Ampat marine protected area, Indonesia.
SergeUWPhoto/Shutterstock

Peter JS Jones, UCL and Rick Stafford, Bournemouth University

Marine protected areas, or MPAs as they’re more commonly called, are very simple. Areas of the sea are set aside where certain activities – usually fishing – are banned or restricted. Ideally, these MPAs might be placed around particularly vibrant habitats that support lots of different species, like seagrass beds or coral reefs. By preventing fishing gear such as towed seabed trawls from sweeping through these environments, the hope is that marine life will be allowed to recover.

When used well, they can be very effective. MPAs have been shown to increase the diversity of species and habitats, and even produce bigger fish within their bounds. A new study argues that by expanding the world’s MPAs by just 5%, we could boost future fish catches by at least 20%. This could generate an extra nine to 12 million tonnes of seafood per year, worth between US$15 and 19 billion. It would also significantly increase how much nutritious fish protein is available for a growing human population to eat.

So what’s the catch?

Spillover versus blowback

The scientific rationale is sound. We already know that MPAs can increase the numbers of fish living inside them, which grow to be bigger and lay more eggs. The larvae that hatch can help seed fish populations in the wider ocean as they drift outside the MPA, leading to bigger catches in the areas where fishing is still permitted. We know fish can swim large distances as adults too. While some find protection and breed inside MPAs, others will move into less crowded waters outside where they can then be caught. Together, these effects are known as the spillover benefits of MPAs.

The study is the first to predict, through mathematical modelling, that a modest increase in the size of the world’s MPAs could swell global seafood yields as a result of this spillover. But while the predictions sound good, we have to understand what pulling this off would entail.

The study maintains that the new MPAs would need to be carefully located to protect areas that are particularly productive. Locating MPAs in remote areas offshore, which are hard to access and typically unproductive, would bring much smaller benefits for marine life than smaller, inshore MPAs that local fishing vessels can reach. Just 20 large sites in the remote open ocean account for the majority of the world’s MPAs. As the low-hanging fruit of marine conservation, these MPAs are often placed where little fishing has occurred.

A minority of the world’s MPAs are strict no-take zones.
Marine Conservation Institute/Wikipedia, CC BY-SA

The MPAs themselves would also need to be highly protected, meaning no fishing. Only 2.4% of the world’s ocean area has this status. Increasing this by a further 5% would mean roughly trebling the coverage of highly protected MPAs, and that’s likely to provoke a great deal of resistance. Many fishers are sceptical that spillover can boost catches enough to compensate for losing the right to fish within MPAs and tend to oppose proposals to designate more of them.

People in the UK are often surprised to learn that fishing is allowed in most of the country’s MPAs. While 36% of the waters around the UK are covered by them, only 0.0024% ban fishing outright. Increasing the number and size of highly protected MPAs from just these four small sites to 5% of the UK’s sea area would represent more than a 2,000-fold increase. This would be strongly resisted by the fishing industry, snatching the wind from the sails of any political effort ambitious enough to attempt it.

Keeping fishers on board

Gaining the support of local fishers is crucial for ensuring fishing restrictions are successful. That support depends on fishers being able to influence decisions about MPAs, including where they’ll be located and what the degree of protection will be. Assuming that designing highly protected MPA networks is mostly a matter of modelling is a mistake, and implies that fishers currently operating in an area would have little say in whether their fishing grounds will close.

Ensuring fishers buy into a new MPA is crucial for its success.
Sutipond Somnam/Shutterstock

But this study is valuable. It provides further evidence for how MPAs can serve as important tools to conserve marine habitats, manage fisheries sustainably and make food supplies more secure. It’s important to stress the political challenges of implementing them, but most scientists agree that more MPAs are needed. Some scientists are pushing to protect 30% of the ocean by 2030.

As evidence of the benefits of MPAs continues to emerge, the people and organisations governing them at local, national and international scales need to learn and evolve. If we can start implementing some highly protected MPAs, we can gather more evidence of their spillover benefits. This could convince more fishers of their vital role in boosting catches, as well as keeping people fed and restoring ocean ecosystems.

Peter JS Jones, Reader in Environmental Governance, UCL and Rick Stafford, Professor of Marine Biology and Conservation, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Police forces must take firm and unified stance on tackling sexual abuse of position

Clickmanis/Shutterstock

Fay Sweeting, Bournemouth University

PC Stephen Mitchell of Northumbria Police was jailed for life in 2011 for two rapes, three indecent assaults and six counts of misconduct in a public office, having targeted some of society’s most vulnerable for his own sexual gratification. The case prompted an urgent review into the extent of police sexual misconduct and the quality of internal investigations. One of the recommendations required forces to publicly declare the outcomes of misconduct hearings.

A review of police sexual misconduct in the UK by HM Inspectorate of Constabulary revealed on average 218 cases a year between 2014 and 2016, or around one case per 1,000 officers. A follow-up report from last year shows 415 cases over the following three years, an average of around 138 a year.

But while such cases are still relatively rare, sexual misconduct is a serious matter with implications for the public’s view of, and trust in, the police as an institution. In many cases, the officers’ actions have the potential to re-victimise those who are already victims of domestic abuse or rape. Such abuse of position is also likely to be under-reported, with victims fearing they will not be believed.

Compared to other forms of police corruption, sexual crimes committed by serving officers are under-researched, with the majority of existing research focusing on the US and Canada. I am a police officer conducting PhD research on sexual misconduct among police officers and the barriers to reporting it. In a new paper, my colleagues and I sought to explore the situation in England and Wales by examining the outcomes of police disciplinary proceedings.

Analysing documents from 155 police misconduct hearings, we identified eight different behaviours:

  1. Voyeurism – for example using a police helicopter camera to observe women sunbathing topless in their private gardens.
  2. Sexual assaults, relationships or attempted sexual relationships with victims or other vulnerable persons. While the national figures show some 117 reports of sexual assaults by police officers, the disciplinary hearings we studied featured primarily cases of professional malpractice through consensual but inappropriate relationships that fell below the threshold of criminal behaviour.
  3. Sexual relationships with offenders. Similarly, while the data was heavily sanitised for publication, there were only a very small number of cases where assault was involved. In most cases, these were consensual relationships, albeit inappropriate ones.
  4. Sexual contact involving juveniles, including the making of or distribution of pornographic images of children.
  5. Behaviour towards police officers, including sexual assaults on colleagues and sexually inappropriate language and behaviour.
  6. Sex on duty, chiefly between colleagues or officers and their partners.
  7. Unwanted sexual approaches to members of the public – for example, pressuring a member of the public who is not a victim or witness for their phone number and then sending sexually inappropriate messages.
  8. Pornography, such as posting intimate images of former partners on revenge porn sites and, in one case, using a police camera to record a pornographic film.

It’s useful to see how the offences in England and Wales differ compared to the US and Canada. For example, US researcher Timothy Maher defines what he calls “sexual shakedowns”, a category of offence not recorded in the UK, where an officer demands a sexual service, for example in return for not making an arrest.

This is particularly prevalent in cases involving sex workers, and also other marginalised women such as those with low education levels, or those experiencing homelessness, drug and alcohol abuse or mental health issues. In a US study of women drawn from records of drug courts, 96% had sex with an officer on duty, 77% had repeated exchanges, 31% reported rape by an officer, and 54% were offered favours by officers in exchange for sex.

When US officers targeted offenders for sexual gain, it was often for the purpose of humiliation or dominance – an unnecessary strip search, for example. On the other hand, our research indicates the problem in the UK is more of officers targeting vulnerable victims or witnesses in order to initiate a sexual relationship.

Unhappy woman with face in hands
Women already suffering domestic violence are often among those with whom police officers have had inappropriate sexual relationships, which is considered an abuse of position.
Mark Nazh/Shutterstock

The most common sexual offences by officers

We found the most common type of sexual misconduct was officers having sexual relationships with witnesses or victims, accounting for nearly a third of all cases. Many of these victims had histories of domestic abuse, substance abuse or mental illness, making them highly vulnerable.

In general, the victims revealed many of the same risk factors as those found in people targeted by sex offenders. There are also similarities between the actions of these police officers and similar offences by prison officers or teachers, who are also more likely to select victims they believe are easily controllable and less likely to speak out.

The second most common type involved the way police officers treated their colleagues – most often a higher-ranking male officer towards a lower-ranking or less experienced female officer. Generally, higher ranking officers have less contact with the public and more contact with staff, which may at least partially explain this finding. But in the US and Canada this type of sexual misconduct is more likely to be directed towards a colleague of the same rank.

As in the US, we found that the vast majority of officers involved in sexual misconduct are male. For the handful of female officers in our sample, almost all were involved in sexual relationships with offenders. Hearing documents do not provide in-depth information, and in media coverage – such as that of PC Tara Woodley, who helped her sex offender partner evade police – it is harder to understand who held the power and control in these relationships.

Misconduct hearings, with variable results

The outcomes of sexual misconduct hearings differed, with officers more likely to be dismissed for having sex with victims in forces from the south of England than in the north, while officers having sex on duty were more likely to be dismissed in the Midlands. Officers above the rank of sergeant were more frequently dismissed than constables, suggesting there is less tolerance of misconduct for those of higher rank. Compare this to similar cases in the NHS, where nurses involved in sexual misconduct are more likely to be struck off than doctors.

Our findings suggest that police forces in England and Wales are taking sexual misconduct seriously, with 94% of all cases leading to formal disciplinary actions, and 70% leading to dismissal. But the variation of outcomes across the country is a concern, and there is evidence of misconduct hearing panels not following the College of Policing’s guidance, as seen in a recent case of racist comments by West Midlands police officers.

I believe that the majority of my colleagues uphold the moral and ethical values expected of them, but more needs to be done. The HM Inspectorate of Constabulary’s report from last year argues that police forces are not moving quickly enough to deal with the issue, citing lack of investment, training and poor record keeping. There can be no place in the police for those who would abuse their position.

Fay Sweeting, PhD Candidate in Forensic Psychology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: #Manifestation – some businesses use this new age spirituality to hold employees accountable

Diana Simumpande/Unsplash, FAL

Melissa Carr, Bournemouth University and Elisabeth Kelan, University of Essex

Manifestation is the latest viral trend on social media, presented as a way to create – or physically manifest – your own reality through carefully monitoring thoughts, beliefs and feelings.

The hashtag #manifest has over a billion views on TikTok alone. In these posts, advice is offered on manifestation as the mechanism to achieve the life you want, whether it is money, happiness, the body you desire, or exam grades. Techniques to manifest involve imagining something has already happened, visualising it, writing it down, and using positive language such as “I have” rather than “I want”. To be successful at manifestation, belief and positivity are key.

For those that believe, manifestation makes everything achievable, and social media users have plenty of advice about how to do this. Popular examples of these techniques include the 369 method where, by writing down a name three times, an intention six times, and an outcome nine times, it is possible to manifest someone back into your life.

This idea of manifestation is based on new age philosophy dating back to the early 19th century. Its influence is found beyond TikTok – it has entered many workplaces under the guise of self-help.

#Manifest the life you want

Manifestation draws on a long-favoured new age philosophy of universal inter-relatedness: the belief that everything in the universe is related in a network without a deity at the centre. This gives rise to the belief that with positive thoughts and visualisation, people can create their own reality through the laws of manifestation, where an external force – the universe – responds to these thoughts.

The idea is that if you are negative, you invite negativity into your life. But if you desire something, by writing it down or visualising it as if it has already happened, you can make these dreams a reality. As bestselling author Louise Hay explains: “I believe that everyone, myself included, is 100% responsible for everything in our lives … we create our experiences, our reality and everyone in it.”

Manifestation is more popularly referred to as the law of attraction, which gained a wider audience in the self-help book and associated film The Secret. Now, it has become part of a wider trend within organisations requiring people to see mental, physical and spiritual well-being as a prerequisite to successful leadership, whether through mindfulness, meditation or active visualisation.

Chip Wilson, founder of the aspirational yoga brand Lululemon, for example, has written that The Secret is “the fundamental law Lululemon was built on”. Employee training at the company incorporates aspects of the law of attraction, and its merchandise uses slogans promoting self-empowerment through yoga and spiritual enlightenment.

The movement of new age philosophies into business settings is something we have traced in our research.

Neoliberal spirituality

Network marketing organisations, sometimes referred to as direct sales or multi-level marketing, are companies where freelance distributors sell products directly to the consumer. The best known include Amway, Herbalife and Avon. We were interested in this form of organisation because such companies tend to be dominated by women, and the industry is notoriously precarious: most distributors fail to make a living wage. To be successful, they must both sell volume and recruit other distributors to their teams.

We have been researching one such network marketing company and found that the law of attraction was ingrained in its organisational culture. It was used at training events, where distributors were warned that negative thoughts would send out energy into the universe, subsequently attracting poor sales. It was also used by distributors who sold via social media platforms, where the law of attraction was explicitly mentioned: people shared how they had manifested sales, or new people into their lives whom they could sign up as distributors.

Distributors were told by their seniors that by being kind and grateful, the universe would reward them. Success was attributed to hard work combined with sending out the right type of energy as a frequency to attract back success. Any negative thoughts in the workplace were discouraged.

We see this as a form of neoliberal spirituality. Under neoliberalism, responsibility moves from the state to individuals, who are held responsible for their own success or failure. Under the law of attraction, individuals – or employees – are held solely responsible for the ability to manifest the future they want.

Read more: McMindfulness: Buddhism as sold to you by neoliberals

The message in the network marketing company was clear: if you aren’t achieving success, you are not manifesting hard enough. This obscures structural inequalities and, in the company we studied, the reality of precarious labour in network marketing.

Personal culpability

The law of attraction represents a powerful set of “rules” about how to behave and think. This operates as a form of self-surveillance and control, and shifts the blame for lack of financial success away from the employer and on to the employee. But suppressing negativity and being positive means that employees are not able to call out any realities and challenges of their work.

While the law of attraction can, on one level, be seen as a way to maintain wellbeing through encouraging positive thoughts, it also has a toxic side-effect of spiritual rules and self-blame.

The COVID-19 pandemic has created a sense of anxiety and instability. There has been a marked increase in mental health issues, particularly for generation Z and millennials.

For TikTok users, believing they can #manifest their goals represents a way to gain control. But if subscribers to this philosophy are unable to manifest their dreams, they fail both in their goals and in their spirituality, through being unable to harness the universal laws. These forms of spirituality are hard to challenge and, as we saw in our research, those who did try were labelled negative and toxic.

Melissa Carr, Senior Lecturer in Leadership Development, Bournemouth University and Elisabeth Kelan, Professor of Leadership and Organisation, University of Essex

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: What 16,000 missing coronavirus cases tell us about how the UK is handling the pandemic

The temporary loss of data relating to 16,000 positive cases of COVID-19 has raised serious concerns about the operation of the UK’s test and trace system. Public Health England, the government agency responsible, blamed a technical glitch and said cases were added to the system immediately after the problem was spotted. Despite this quick action, many thousands of people were affected because they were not warned about their contact with an infected person as soon as they could have been.

Most of us would agree that human life is sacred and that COVID-19 deaths should be minimised, if not eradicated. On this basis, we could argue that the Test and Trace system has, until now, shown some serious flaws. However, we are living in a time of great social, medical and personal uncertainty and this must be taken into account.

Across the globe, all governments have faced the same problem: no one is sure what we are dealing with in terms of severity, spread, impact, solutions, and a whole range of previously unencountered problems. In response, we can say that no government has the correct answer because everything is so uncertain.

A crowd of people enters Oxford Circus underground station next to a sign warning them to maintain a social distance of 2 metres.
16,000 cases of coronavirus were left out of official figures for England. Kirsty O’Connor/PA

Managing uncertainty

Uncertainty is a concept familiar to scientists and the medical profession, but less popular with governments and voters. At the beginning of the pandemic, everyone was uncertain of the total number of COVID-19 deaths that would happen across the globe. Even today, no one knows. So governments face the challenge of trying to make popular decisions when the true facts are not known. In turn, the desire for certainty affects policy decisions, which impacts voter opinions and election outcomes.

Systems like the UK’s Test and Trace programme are designed to reduce uncertainty by collecting more information, analysing the growing dataset and helping the government, the NHS and the public better understand the risks. When the system failed to include 16,000 known cases, an opportunity to reduce uncertainty was missed. If the affected individuals had been given the information that they had been in contact with an infected person, then they would have better information about their own probability of catching the virus.

COVID-19 has stimulated a range of different policy choices across the globe. At one extreme, New Zealand is pursuing a policy of complete certainty: the goal is zero COVID-19 cases. At the other extreme, Sweden’s lax approach leaves many citizens unsure if they will become ill. In between, UK policy is like a pendulum, swinging back and forth between more controls and more freedom, trying to respond to the inevitable balance between certainty and uncertainty that all countries face.

People on a street in Stockholm.
Sweden has taken a remarkably lax approach to the virus. Fredrik Sandberg/EPA

While the general perception is that governments are trying to fight the pandemic with differing degrees of success, from an analytic perspective the truth is different: politicians are focusing on trying to create policy certainties during a time of immeasurable medical uncertainty. In this situation, there will always be errors, mistakes, unforeseen consequences, bunglings, confusions and wrong steps.

Because no one really knows what will happen with COVID-19 in the coming months and years, the UK Test and Trace system was a political compromise. Like the idea of total eradication from a new vaccine, the system will never give the population complete certainty in terms of risks, cases and personal health status. COVID-19 is just too complex to be managed by an information system alone. But if reports are correct, analysts in the NHS should have known that their Test and Trace system was too data-rich to rely upon the manual use of Excel to record patient COVID-19 data. On this ground, the NHS has failed to manage its system properly.
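The reported cause of the glitch is worth sketching, because it illustrates the point about over-stretched tooling. According to contemporary news reports, case data passed through the legacy .xls spreadsheet format, which caps a worksheet at 65,536 rows, so records beyond the cap were silently dropped. Here is a minimal Python sketch of that failure mode (the row limit is real; the function and the data are hypothetical illustrations, not PHE’s actual pipeline):

```python
# Illustrative sketch of the reported failure mode: the legacy .xls format
# caps a worksheet at 65,536 rows, and anything beyond that is simply lost.
XLS_MAX_ROWS = 65_536

def truncate_like_xls(rows):
    """Mimic the silent loss: keep only what fits in one .xls sheet."""
    return rows[:XLS_MAX_ROWS]

# A hypothetical daily feed larger than the sheet limit:
daily_cases = [f"case-{i}" for i in range(80_000)]
recorded = truncate_like_xls(daily_cases)
lost = len(daily_cases) - len(recorded)
print(f"recorded {len(recorded)}, silently lost {lost}")
```

The danger is not the limit itself but the silence: nothing in such a workflow flags the discarded rows, so the missing cases only surface once the totals fail to add up.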

Worrying times

The Test and Trace system will never work with the certainty that politicians promise. The virus is too chaotic when it travels through society; managing health services effectively is notoriously difficult and information systems are famed for failing to deliver as promised. Instead, perhaps the message should be that the situation is complex and messy, but an imperfect system is better than nothing. Therefore, the 16,000 cases are not a failure of the system, but an expected uncertainty that is just a sign of worrying times.

Although we must be realistic about what is possible in the current pandemic, there are definite lessons to be learnt from the Test and Trace fiasco. First, the government should manage expectations and explain that systems fail, especially when they are new. Next, the government is clearly working outside its comfort zone in dealing with the pandemic, and urgent thought should be given to what policy is actually trying to achieve. Finally, it’s clear that the NHS doesn’t have enough people with the right analytical skills to run a modern health system in these troubling times.

Whilst there may be many different opinions on what to do next, there is one thing we must face: the uncertainties created by COVID-19 are real, and no policy, however well designed, will make them go away any time in the foreseeable future.

Professor of Health Economics, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article – Fossil footprints: the fascinating story behind the longest known prehistoric journey

A prehistoric woman with a child left behind the world’s longest trackway.
Author provided

Matthew Robert Bennett, Bournemouth University and Sally Christine Reynolds, Bournemouth University

Every parent knows the feeling. Your child is crying and wants to go home, you pick them up to comfort them and move faster, your arms tired with a long walk ahead – but you cannot stop now. Now add to this a slick mud surface and a range of hungry predators around you.

That is the story the longest trackway of fossil footprints in the world tells us. Our new discovery, published in Quaternary Science Reviews, comes from White Sands National Park in New Mexico, US, and was made by an international team working in collaboration with staff from the National Park Service.

The footprints were spotted in a dried-up lakebed known as a playa, which contains literally hundreds of thousands of footprints dating from sometime before about 13,000 years ago to the end of the last ice age (about 11,550 years ago).

Unlike many other known footprint trackways, this one is remarkable for its length – at least 1.5km – and straightness. This individual did not deviate from their course. But what is even more remarkable is that they followed their own trackway home again a few hours later.

Photo showing the footprints.
A section of the double trackway. Outward and homeward journeys following each other. Central Panel: Child tracks in the middle of nowhere. Left Panel: One of the tracks with little slippage.
M Bennett, Bournemouth University, Author provided

Each track tells a story: a slip here, a stretch there to avoid a puddle. The ground was wet and slick with mud and they were walking at speed, which would have been exhausting. We estimate that they were walking at over 1.7 metres per second – a comfortable walking speed is about 1.2 to 1.5 metres per second on a flat dry surface. The tracks are quite small and were most likely made by a woman, or possibly an adolescent male.
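As a back-of-the-envelope check on those figures (the distance and speeds come from the text above; the arithmetic is ours):

```python
# Time to cover the trackway at the estimated pace vs a comfortable pace.
distance_m = 1_500        # the trackway runs at least 1.5 km each way
speed_fast = 1.7          # estimated walking speed, metres per second
speed_comfort = 1.35      # midpoint of the 1.2-1.5 m/s comfortable range

time_fast = distance_m / speed_fast        # seconds for one leg
time_comfort = distance_m / speed_comfort

print(f"at 1.7 m/s: {time_fast / 60:.1f} min per leg")
print(f"at a comfortable pace: {time_comfort / 60:.1f} min per leg")
```

At the estimated pace, each leg of the journey would take roughly a quarter of an hour, sustained on wet mud while carrying a toddler.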

Mysterious journey

At several places on the outward journey there are a series of small child tracks, made as the carrier set a child down perhaps to adjust them from hip to hip, or for a moment of rest. Judging by the size of the child tracks, they were made by a toddler maybe around two years old or slightly younger. The child was carried outward, but not on the return.

We can see the evidence of the carry in the shape of the tracks. They are broader due to the load, more varied in morphology often with a characteristic “banana shape” – something that is caused by outward rotation of the foot.

Colour depth rendered 3D scans of some of the footprints. Note the distinctive curved shape which seems to be a feature of load carrying.
Bournemouth University, Author provided

The tracks of the homeward journey are less varied in shape and have a narrower form. We might even go as far as to tentatively suggest that the surface had probably dried a little between the two journeys.

Dangerous predators

The playa was home to many extinct ice age animals, perhaps hunted to extinction by humans, perhaps not. Tracks of these animals helped determine the age of the trackway.

We found the tracks of mammoths, giant sloths, sabre-toothed cats, dire wolves, bison and camels. We have produced footprint evidence in the past of how these animals may have been hunted. What’s more, research yet to be published tells of children playing in puddles formed in giant sloth tracks, jumping between mammoth tracks and of hunting and butchery.

Between the outward and return journeys, a sloth and a mammoth crossed the outward trackway. The footprints of the return journey in turn cross those animal tracks.

The sloth tracks show awareness of the human passage. As the animal approached the trackway, it appears to have reared up on its hind legs to catch the scent, pausing and turning, trampling the human tracks, before dropping to all fours and making off. It was aware of the danger.

In contrast, the mammoth tracks, at one site made by a large bull, cross the human trackway without deviation, most likely not having noticed the humans.

The trackway tells a remarkable story. What was this individual doing alone and with a child out on the playa, moving with haste? Clearly it speaks to social organisation: they knew their destination and were assured of a friendly reception. Was the child sick? Or was it being returned to its mother? Did a rainstorm come in quickly, catching a mother and child off guard? We have no way of knowing, and it is easy to give way to speculation for which we have little evidence.

What we can say is that the woman is likely to have been uncomfortable on that hostile landscape, but was prepared to make the journey anyway. So next time you are rushing around in the supermarket with a tired child in your arms, remember that even prehistoric parents shared these emotions.

Matthew Robert Bennett, Professor of Environmental and Geographical Sciences, Bournemouth University and Sally Christine Reynolds, Principal Academic in Hominin Palaeoecology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Five ways to increase protein intake as we age

Two older women eat lunch together.

Protein is an essential part of a healthy diet. It helps us build and maintain strong muscles and bones, helps us better recover from illness and injury, and reduces the likelihood of falls and fractures. But, as we age, many of us don’t get enough protein in our diet. This is partly because our appetites diminish naturally as we get older. Convenience, effort and value for money are also reasons that older adults may not get enough protein.

However, protein is extremely important as we age. This is because our bodies become less able to convert the protein we eat into muscle and other important biological factors that help us better recover from illness and injury – so we actually need to eat more protein as we get older.

Here are five tips to help you get enough protein in your diet as you age.

1. Add sauces and seasonings

Research shows that the taste and flavour of high-protein foods can encourage older adults to consume more of them. And taste and flavour are easily added with sauces and seasoning.

In studies where we have offered older adults a hot chicken meal either with or without sauce or seasoning, we find more chicken was eaten from the meals with sauce or seasoning compared to plain meals. Meals with sauces and seasonings were also rated as more pleasant and tastier than the plain meals.

Adding sauces and seasonings to meals can increase the consumption of high-protein foods. Participants also subsequently ate equal amounts of protein at the next meal following flavoured meals and plain meals, meaning that their protein intake was increased overall.

2. Add cheese, nuts or seeds

Some foods that add flavours are naturally high in protein themselves. Good examples are strong cheeses – like blue cheese – as well as nuts and seeds.

As well as protein, cheese is full of calcium and other micronutrients, including Vitamins A, D and B12, which also help maintain strong bones. Cheese can be easily added to soups, salads, pasta or mashed potatoes.

Nuts and seeds can be added to breakfast cereals, salads and desserts such as yoghurts, and can provide an interesting texture as well as added flavour. They are good sources of plant-based protein, are high in healthy fats, fibre and many vitamins and minerals, and can reduce the risk of many chronic conditions, such as cardiovascular disease and type 2 diabetes. Nuts and seeds may not be suitable for everyone, as they can be difficult to chew, but cheese is soft and full of flavour.

3. Eat eggs for breakfast

Breakfast meals tend to be low in protein – so eating eggs for breakfast is one way to boost protein intake.

Our recent study found egg intake could be increased by providing people with recipes and herb or spice seasoning packets that improved the taste and flavour of eggs. We gave participants recipes that used both familiar and exotic ingredients from a variety of countries, for dishes that required a range of preparation methods. Egg intake increased by 20% after 12 weeks, and was sustained for a further 12 weeks in those who had received the recipes.

Fried eggs in a pan.
Eggs for breakfast are an easy way to get more protein. Mary Volvach/ Shutterstock

Eggs are a nutritious source of protein. They are typically easy to prepare and chew, good value for money, and have a long shelf life. Egg dishes can also add taste and flavour to the diet. Eggs may not be suitable for everyone, including those with certain diagnosed conditions, but for most people egg consumption is considered safe.

4. Make it easy

Try to make cooking as quick and easy as possible. Many types of fish can be eaten directly from the pack or simply need heating, such as smoked mackerel or tinned sardines. Fish is also full of many vitamins and minerals, as well as omega-3 fatty acids (present in oily fish like salmon), which are good for heart health. To allow easier and quicker cooking, buy meat that is pre-cut, pre-prepared or pre-marinated, or fish that has been deboned and otherwise prepared, and then make use of your microwave. Fish can be cooked very easily and quickly in the microwave.

Beans, pulses and legumes are also easily bought in cans and ready-to-eat, and are all rich sources of protein for those who wish to consume a more plant-based diet. They also contain fibre and many vitamins and minerals, and can protect against many chronic conditions including cardiovascular disease, diabetes and some cancers.

5. Eat high-protein snacks

Many people reach for biscuits or a slice of cake at snack time, but next time try eating a high-protein snack instead. Many high-protein foods are already prepared and easy to consume. Some examples include yoghurts or dairy-based desserts, such as crème caramel or panna cotta. Yoghurts and other dairy-based desserts can offer many health benefits, including the improved bone mineral density necessary for strong bones. Nuts, crackers with cheese, peanut butter and hummus are also great choices.

Inadequate protein intake can result in poor health outcomes, including low muscle mass and function and decreased bone density and mass, leading to increased risk of falls, frailty and loss of mobility. To avoid these harms, researchers currently recommend that older adults consume 1.0-1.2g of protein per kilogram of bodyweight, compared with the general adult recommendation of 0.8g per kilogram.
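To make those recommendations concrete, here is a small worked example (the 70kg body weight is hypothetical; the g/kg figures are the ones quoted above):

```python
# Worked example of the protein recommendations, for a hypothetical weight.
def daily_protein_g(weight_kg, older_adult=True):
    """Return (low, high) daily protein targets in grams.

    Older adults: 1.0-1.2 g per kg of bodyweight;
    the general adult figure is 0.8 g per kg.
    """
    if older_adult:
        return (1.0 * weight_kg, 1.2 * weight_kg)
    return (0.8 * weight_kg, 0.8 * weight_kg)

low, high = daily_protein_g(70)  # a hypothetical 70 kg older adult
print(f"target: {low:.0f}-{high:.0f} g of protein per day")
```

So a 70kg older adult should aim for roughly 70-84g of protein a day, against about 56g under the general adult guideline.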


Professor of Psychology, Bournemouth University

Lecturer of Psychology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Nurses are on the coronavirus frontline, so why are they being left out of the response?

More than 600 nurses worldwide have died from COVID-19 during the pandemic. This should not be a surprise: we are the largest group of healthcare workers in the world, dedicated to preventing the spread of coronavirus, and we are also engaged in caring for those who are suffering.

But although we are on the frontline of this crisis, nurses are too often being left out of responses to the pandemic.

Uniquely at risk

In the UK and other countries with high rates of coronavirus deaths, there are increasing inequalities in health outcomes between different income groups. In England and Wales, mortality rates from COVID-19 in the most deprived areas are more than double those in the least deprived.

In general, the risk of ill health increases for people who live on a low income. Common health issues that affect these groups include high blood pressure, coronary heart disease, lung disease, type 2 diabetes and obesity. All of these put people at higher risk of becoming sicker and dying from COVID-19. Death rates are highest among people from Black, Asian and minority ethnic backgrounds.

These communities are also disproportionately represented among nursing staff, some of whom are living on the lowest wages.

Lacking equipment

Nurses working in hospitals, care homes and within communities are often put at greater risk from COVID-19 because they have not been given adequate personal protective equipment, or PPE.

A study of nearly 100,000 health workers in the UK and US found that people working on the frontline of the coronavirus pandemic were three times more likely to test positive for the disease than the general community. Health workers from a Black, Asian or minority ethnic background were found to be five times more likely to test positive than white people who did not work in healthcare. Workers who reported a lack of adequate PPE in their healthcare institutions were at greater risk still.

Another study, by the UK’s Royal College of Nursing, found that more than half of Black, Asian and minority ethnic respondents had felt pressure to work without the correct PPE, compared to just over a third of other respondents. These groups were also asked to reuse PPE more frequently than their white counterparts.

Denied a voice

It’s a painful irony that as nurses battle the coronavirus pandemic, 2020 is the World Health Organization’s Year of the Nurse and Midwife, which was supposed to raise the profile and perceptions of nurses globally.

But the response to the pandemic in the UK has starkly shown that our expertise and experience as a profession are not being called upon and our potential is not recognised. We are the biggest health workforce in the UK, working in hospitals, care homes and community settings to care for those with COVID-19 and help prevent its spread. Yet we have no representation on the official scientific advisory group (SAGE), which advises the government on its coronavirus response. Nor are we represented on the rival Independent SAGE group.

Our role in policy development and planning is negligible despite the invaluable insights our unique position in health systems gives us. Our lack of representation and reward means that we are also suffering from the impacts of inequalities along with those we care for.

Given the chance, nurses could help guide coronavirus policy in a number of ways. First, we can bear witness to the health impacts of COVID-19 on our local communities and staff, recording and researching inequity of access to services. Second, we can advise on how to provide prevention and treatment resources to those most at risk. Finally, we can set a positive example in terms of equality of opportunity, fair working conditions, protection from infection and pay. This could start with ensuring equal provision of PPE for all staff.

Nurses are at the forefront of trying to reduce existing health inequalities which are being made worse by COVID-19. We are also victims of those inequalities – a feminised, racialised workforce dealing with poor conditions and lacking a political voice. Care and prevention of disease are not perceived as being as important as finding a cure or a vaccine, but in the global recovery from COVID-19, all these elements are equally vital.

We have already lost too many colleagues in the fight against this disease. It’s time our work is recognised and we are given an official voice to help us all recover from the coronavirus pandemic.

Ann Hemingway, Professor of Public Health, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: How Airbnb got its IPO plans back on track

Airbnb is gearing up for its long-awaited IPO.
Shutterstock.com

Michael O’Regan, Bournemouth University

It’s been a rollercoaster year for Airbnb and its much-anticipated plans for an initial public offering, or IPO. The home-sharing platform had planned to file to go public back in March, but then coronavirus hit and its revenue nose-dived.

Now, it looks like plans are back on track. Airbnb confidentially filed its IPO paperwork with the Securities and Exchange Commission in mid-August. None of the financial specifics were revealed, but the company was valued at US$18 billion in its last funding round in April – a long way down from its previous 2017 valuation of US$31 billion.

Of course, as with the rest of the tourism industry, the coronavirus pandemic has had an enormous effect on Airbnb’s finances. New bookings stopped, cancellation rates soared, refunds to hosts and guests cost millions and revenue fell, even as cost-cutting measures like layoffs were implemented. To help mitigate this, the company was forced to raise US$2 billion in debt and equity securities in April 2020, on onerous terms.

So the decision to file its IPO paperwork and potentially list in 2020 was surprising to some. Critics point to the ongoing pandemic and the many issues it continues to throw up: the hosts and guests that have been angered by changing cancellation policies, new laws and regulations in cities seeking to reclaim housing for locals, as well as the falling revenue and ongoing losses. Others point to the lacklustre IPOs from sharing economy bedfellows Uber and Lyft in 2019, not to mention WeWork’s fall from grace.

Reasons to IPO

But there are lots of reasons to go public, including pressure from employees (shares held by early employees will expire this year). But another big motivation is the fact that Airbnb has rebounded better than its competitors from coronavirus. Booking rates were above expectations from June 2020 onwards and the Airbnb model could take advantage of changing host and tourist behaviour during the pandemic.

The company’s overheads are far lower than the hotel sector’s due to its limited fixed costs. It also took advantage of the rise in domestic staycations in rural locations across the globe, and the increased demand for countryside retreats where people could safely socially distance. Unlike hotels, short-term rentals tend to facilitate longer stays and can offer full-service amenities, living space and gardens. Research shows that the more spacious environments of short-term lets have been popular with holidaymakers and people wanting to work from home elsewhere.

Despite broad marketing cuts to reduce losses, Airbnb has strong brand recognition through past campaigns like “Don’t go there. Live there” that tapped into people’s desire to not just visit a place but have a more authentic experience of it. This helped it become the go-to platform for short-term rentals during the pandemic.

Hosts in rural areas also responded to the demand by listing their properties. Urban hosts, meanwhile, responded by switching their properties to private rental or by dramatically reducing prices.

Airbnb logo held by a hand in front of wooden hut in countryside.
Rural retreats have risen in popularity.
AlesiaKan / Shutterstock.com

While the broader tourism and hospitality sector is weak, perhaps Airbnb sees this stage of the pandemic as its time to shine and push ahead with its IPO. Plus, stock markets in the US are on a record high, fuelled by stimulus from Washington.

Questions remain

Questions remain for Airbnb, however. In particular, when will travel behaviour revert to business as usual, if ever? This will determine whether current bookings growth will lead to profitability.

Then there are the safety issues that have dogged the company for years and played a big role in Airbnb’s loss of profitability in 2019. It spent US$150 million on safety initiatives, including verifying the accuracy of listings, creating a 24/7 safety hotline and even tying employee bonuses to safety.

There is also the threat of more tax and regulation in major markets, which could emerge as authorities seek new revenue to pay for the effect of coronavirus on their economies. The basis of the favourable market conditions is also open to question, as there is concern that the current strength of the stock markets isn’t based on strong economic fundamentals and is a bubble waiting to burst.

Success in the tourism industry is never a given – something Airbnb, which totally disrupted the hotel industry itself, will be all too aware of. The company has more than 7 million listings, dwarfing the largest hotel chain, Wyndham Worldwide, which has 8,000 hotels. But rather than seeing this as a burden, Airbnb is capitalising on it.

But for all its market positioning as a different kind of travel provider – one that offers unique, authentic and personalised experiences – Airbnb still sits firmly within the tourism sector. Like its competitors, its success still depends on a post-pandemic travel rebound.

Michael O’Regan, Senior Lecturer in Events and Leisure, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Why the UK government is paying social media influencers to post about coronavirus

biD LTasgY/unsplash

Elvira Bolat, Bournemouth University

Social media influencers are often seen as lazy freelancers who make a living being paid to pretend they like products. But these “celebrities” are more than just marketing vehicles. If used properly, they can be effective agents of positive social change.

The UK government has taken a bold step by working with influencers to try to stop the spread of coronavirus. It has paid several social media influencers and reality TV stars to promote the NHS test and trace service – the system used, when someone tests positive for COVID-19, to work out who else might be at risk after coming into contact with them. The service relies on local public health teams contacting those who may be infected and asking them to self-isolate and get tested for the virus. To date, however, the service is failing to deliver. This is for many reasons, one of which is the public’s reluctance to share their contact details.

When the system failed to reach its target for the ninth week in a row, the government decided to change strategy. This is when it brought in social media players such as Love Island stars Shaughna Phillips, Josh Denzel and Chris Hughes. Phillips, who has 1.5 million followers on Instagram, posted a photo of herself with a friend, reminding her followers that “the best way for us all to get back to doing the things we love” is by getting tested for coronavirus. She added that the test and trace service is “totally free, quick and is vital to stop the spread of coronavirus” and told fans about her experience of using the testing service.

Phillips, just like other influencers involved in this campaign, was paid for her posts. While the government hasn’t revealed how much was spent on the campaign, it claims “over 7 million people have been reached” with the messages.

Typically, a mega influencer with more than a million followers will be paid around £10,000 per post, so of course there was debate about whether taxpayers’ money should be used in this way.

However, the right public health messaging doesn’t always reach young people. They are often less engaged with mainstream traditional communication channels such as TV, radio and press. Paying popular influencers to promote credible public health messaging is a genuine alternative if the government wants to reach young people.

Powerful but ordinary

The impact social media influencers have – on young people in particular – is beyond doubt. And their clout is particularly strong now that we’re spending more time at home online.

Of course, their power is most readily associated with commercial interests. The rise of the influencer has transformed the beauty and fashion industries beyond recognition. Finding the right star to endorse your product on their Instagram or TikTok feed can make or break a brand these days.

They achieve these results by presenting themselves as an approachable “friend” to their social media followers. They have a greater than average potential to influence others because they build a special, intimate bond with their followers by posting content very regularly and communicating with their audience directly. When a fan leaves a comment on an influencer’s post and receives a reply, they feel like they have a relationship with them, which reinforces the influencer’s ability to market products.

In our survey of 465 young people, we found that social media influencers’ content and their “authentic” behaviours are linked to consumers’ tendencies to buy products spontaneously without reflection.

Unlike traditional celebrities, who often keep their private lives behind closed doors, social media influencers discuss personal experiences, good or bad, with their followers. Followers see such sharing as more sincere and trustworthy than content coming from elsewhere.

Beyond these commercial activities, however, influencers have more recently been seen pushing followers to engage with social issues. Audiences are interested in influencers who engage in activism and who take a stand on issues. This has been particularly in evidence during the Black Lives Matter movement, when fans looked to social media stars for meaningful statements and positions and even demanded it of them when they were not forthcoming.

In our work around relationships between influencers and followers, we have found that many young people are interested in social media stars who seek to drive change rather than just sell products. This, combined with the personal approach, is what makes influencers an attractive prospect for a government trying to reach young people. If someone like Phillips talks about test and trace on Instagram, young people are likely to react and act.

The World Health Organization has been using influencer marketing techniques in its coronavirus messaging since April. It has gone a step further by using a CGI influencer called Knox Frost to “get accurate, vetted information about COVID-19 in front of millennials and Gen Z”. The computer-generated 20-year-old has been posting to just under a million Instagram followers about coronavirus safety and raising funds for the WHO.

In times when the economy is suffering, many might question why the UK government is paying social media stars to promote test and trace services. In reality, spending of this kind has enormous potential to deliver a positive impact. As our studies show, influencers are powerful in shaping the behaviour of their followers. Until now, this was mainly done in the commercial sphere to drive consumption, but now we are seeing more positive uses for their high profiles.

Elvira Bolat, Principal Academic in Marketing, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: The PPI scandal is far from over – here’s why

Shutterstock/kamui29

Julie Robson, Bournemouth University

The PPI scandal led to the largest consumer redress scheme in British history, with over £38 billion paid to claimants to date. The deadline for customers to submit their claims was set at midnight on August 29 2019. But, almost one year later, hundreds of thousands of registered claims remain outstanding. And to make matters worse for the banks, a swathe of new claims have started rolling in.

The Financial Conduct Authority (FCA) hoped the deadline would bring the scandal to an orderly conclusion and offer protection to consumers while helping to restore market integrity. The banks hoped it would enable them to draw a line under it and move on. But the situation seems to be getting worse.

The problem now comes in the form of unfair commission payments. PPI commission rates were deemed unfair for two main reasons: they were too high, or they were kept secret.

They were certainly high: on average, commission accounted for 67% of the PPI price, and in the most serious cases for 95% of the cost of a policy.

When secret, they were (obviously) undisclosed to the customer. That customer, had they been better informed, might have queried the value of their PPI policy – especially if they had known that the majority of the price was going not to the product provider (for example, the insurer underwriting the protection cover for the loan or credit card) but to the bank that sold them the policy.

Court judgements

Awareness of the unfair commission payments on PPI policies is not new. But recent court decisions mean that customers can potentially claw back all of the commission they have paid and claim after the 2019 deadline.

The issue first came to light in the November 2014 Supreme Court case, Plevin v Paragon Personal Finance Ltd, after which the FCA changed its guidance on what could be claimed as part of the PPI redress scheme. This change enabled customers to claim commission that accounted for over 50% of the price of the PPI policy and became known as the Plevin rule.

Payments to customers were, however, restricted to the portion of commission in excess of 50%. In other words, successful claimants only received part of the commission that had been paid to the banks.

A series of other court cases saw the position change again, as claimants were awarded the full commission where the bank failed to disclose large commission payments to the customer. As almost all PPI policies earned high commission rates, this change was significant and opened the floodgates to new claims.

Customers who have received a partial payment, have had their claims rejected or have not claimed so far can now claim, citing the unfair commission. Even customers who were not mis-sold PPI and were happy with their policy can potentially claim, as the high commission payments may not have been disclosed to them.

The potential for new PPI claims based on the unfair commission payments could not have come at a worse time for the banks as they are still facing a backlog of existing claims to process. A survey conducted in March this year found that 60% of PPI claimants had not heard from their bank about the progress of their claim and half of these had not even received an acknowledgement letter.

Banks were overwhelmed by the volume of claims. Although they would typically be expected to respond within eight weeks, the FCA managed expectations by predicting that most claims would be resolved by summer 2020.

Coronavirus disruption

But this deadline was set before COVID-19 disrupted the world, and it now appears unlikely to be met. Many customers remain frustrated that their cases have not been resolved, and the new unfair commission question further aggravates and complicates the issue.

The original PPI scandal severely damaged consumer trust in the banks, as a lack of integrity was at the heart of the case. PPI mis-selling was something that the banks could have controlled and was an intentional act, as the banks placed profits above customer welfare.

My own research has shown that when trust is damaged by a lack of integrity, it is difficult to restore. The banks needed to display clear evidence of an intention to get rid of negative influences.

For a start, all banks should have immediately apologised for the mis-selling. Some did, but only after they lost a high court case trying to overturn the FCA’s ruling on PPI mis-selling. The banks really needed to signal to employees the importance of a customer-centred culture and change employee incentive systems to align with long-term performance, rather than short-term profit.

Banks need to embed ethical values into their routine actions and decisions. So far, the evidence is that not all banks have bothered to take such steps.

Julie Robson, Associate Professor Marketing, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Going green dramatically benefits businesses

Yoyo Dy/Unsplash, FAL

Jagannadha Pawan Tamvada, University of Southampton and Mili Shrivastava, Bournemouth University

The onset of the pandemic and the ensuing lockdown have imperilled businesses worldwide. It will be tempting for firms to let any commitment to the environment take a back seat as they attempt to recover, especially as some governments reduce requirements and undermine environmental protection.

This is short-sighted: businesses do not have to sacrifice their environmental goals to protect their growth. Greening initiatives – offering green products or services, introducing green processes internally, hiring employees to promote sustainable practices, or going beyond compliance requirements – can actually help firms.

Using data on 9,236 small and medium businesses in 35 countries across Europe and the US, our research suggests that on average, businesses benefit from going green, although the type of greening that gives the most significant benefit may differ between firms.

Here are four main ways that greening can benefit businesses.

1. Innovative market niches

By offering new green products or services, a business is more likely to cater to an emerging trend or niche market, which can make it more competitive. Frugalpac, a UK-based company that makes paper-based packaging for liquids that cuts carbon footprints, received a £2 million investment during the pandemic – a time when most other companies were struggling for finance.

Having already seen widespread success with its recycled paper coffee cup, Frugalpac has found that its innovative paper wine bottle, also made from 94% recycled paper, has led to new opportunities and partnerships.

Companies focused on sustainability can rapidly expand by catering to new niche markets internationally. Consider d.light, a company that offers innovative lighting solutions for people who do not have access to electricity. It has transformed the lives of more than 100 million people across 70 countries through its green product offerings, while raising US$197 million (£150 million) in investment.

Earlier this year, the Danish energy supplier Ørsted, formerly known as Danish Oil and Natural Gas, was named the most sustainable company in the world. This success followed from its transformation to a green energy supplier – which went hand in hand with accelerated profits.

By catering to new niche markets using green products and services, these businesses have emerged as future leaders in their sectors. Of course, not all companies are suited to finding such niches. But sustainability can be promoted in other ways like green working practices and processes, for example.

2. Employee motivation

Job seekers are increasingly attracted to companies that care for the environment. The employees of firms that promote sustainability are more likely to believe that their employer will care for them, and are more satisfied with their jobs.

Such companies create a higher sense of personal and organisational purpose that makes work meaningful. A recent poll shows that millennials and Gen Z are more concerned about the environment than any previous generation. This means they prioritise employers who put sustainability at the forefront.

Millennials and Gen Z are more worried about the environment than any previous generation.
LinkedIn Sales Navigator/Unsplash, FAL

By some estimates, companies that follow green practices see a 16% boost in employee productivity. Although establishing a direct causal link can be difficult, some of the greenest companies, such as Cisco, Tarmac and Stantec, are also rated among the best by their employees.

3. More engagement

Greening initiatives signal to external stakeholders, such as investors and customers, that a business is committed to doing good. This can lead to increased investment, more customers and greater stakeholder loyalty. It is particularly pertinent in the aftermath of COVID-19, as there is heightened awareness of the need to protect the environment.

For example, highly sustainable companies benefit from superior stock market performance in the long run, according to research looking at American companies between 1993 and 2009. Investors are increasingly questioning firms on their commitment to sustainability, and expecting meaningful steps to integrate such issues into their investing criteria. This is reflected in the tenfold increase in global sustainable investment since 2004, reaching US$30.7 trillion by April 2019.

More recently, Polysolar, a company that makes glazed windows that generate electricity, has secured more than double the investment it sought on crowdfunding platform Crowdcube. And large companies such as Unilever have benefited from increased stakeholder engagement and loyalty by adopting greening practices and products, addressing a dark history of environmental exploitation.

4. Increased efficiency

Greening processes can result in efficiency gains by reducing energy costs, allowing businesses to secure green tax credits, improving operational efficiency, and embedding circular economy principles internally.

Such gains directly translate into commercial benefits. As many as 75% of UK businesses that invested in green technologies subsequently enjoyed commercial benefits, even if financial concerns pose barriers to making these green investments in the first place. For large companies such as Procter & Gamble, these gains can run into billions of pounds.

Conversely, where businesses harm the environment, they have to be prepared to incur significant costs. A prominent example is the Volkswagen emissions scandal, which has even adversely affected the performance of other German carmakers like BMW and Mercedes-Benz.

For all these reasons, the time is ripe for businesses to go green.

Jagannadha Pawan Tamvada, Associate Professor in Strategy and Innovation, University of Southampton and Mili Shrivastava, Senior Lecturer in Strategy, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: The hidden impact of coronavirus on Gypsy, Roma Travellers

Studio 2/Shutterstock

Vanessa Heaslip, Bournemouth University and Jonathan Parker, Bournemouth University

We know well by now that coronavirus does not affect everyone equally. In England and Wales, Black people are four times more likely to die from COVID-19 than white people, while people from a Bangladeshi background are twice as likely. Coronavirus has also had a disproportionate effect on people experiencing poverty.

It’s clear that this disease heightens existing inequalities. Some of the most marginalised people in the UK are Gypsy, Roma Travellers, yet they are often left out of research and outreach programmes.

We do not currently know the rates of death and severe illness among these communities. And without better data about their experiences of COVID-19, the true impacts of the pandemic on Gypsy, Roma Travellers could remain dangerously hidden.

Health inequalities

Gypsy, Roma Travellers are not a homogeneous group, but rather consist of different communities with diverse needs. Even within the same community group, there can be many varied experiences of living through the pandemic depending upon personal, social and environmental factors.

That said, research indicates that the continuing COVID-19 pandemic will be extremely challenging for many individuals within the disparate communities.

The last census in 2011 noted that 76% of Gypsy, Roma Travellers in England and Wales lived in houses or apartments. This offers the least challenging experience, as people have access to basic amenities such as electricity, gas, sanitation and water supplies.

Those living in caravans, however, are likely to experience more difficulties. A 2019 House of Commons briefing paper noted there were 22,662 Traveller caravans in England, of which 57% were on private sites, 29% on local authority sites and 14% on unauthorised sites. Those living on these sites face increased challenges during the pandemic, including accessing gas bottles, sewerage and fresh water. Those on unauthorised sites experience the most significant problems, especially in accessing suitable sanitation and waste disposal.

Discriminatory policies towards these communities have meant that sites, whether provided by a local authority or privately run, are more likely to be located close to motorways, major roads, railways, refuse tips, sewage works and industrial estates, all of which are damaging to the health of the people who live there. It is perhaps not surprising, therefore, that Gypsy, Roma Travellers have a worse health status than the wider community average, dying between seven and 20 years earlier than the rest of the population.

A review across five regions in England and Wales noted that 66% of Gypsy, Roma Travellers had bad, very bad or poor health. Poor air quality, proximity to industrial sites, asthma and repeated chest infections in children and older people were noted in around half of all interviews undertaken for the review. Health access is incredibly difficult for people in these communities, which means that such problems are often not picked up until much later in the illness trajectory, leading to poorly managed chronic conditions.

As COVID-19 is primarily a respiratory disease, this places Gypsy, Roma Travellers in a precarious position – many will meet the criteria for high or moderate risk.

The impact of social distancing

As well as physical health impacts, we also know that there are mental health consequences that come from the COVID-19 pandemic. These too are likely to disproportionately affect Gypsy, Roma Travellers.

These communities often have a very strong family culture, and many live in large, extended family groups. This culture is an important protective mechanism against the harsh stigma and discrimination they face in wider society.

A desire to roam and travel is also deeply embedded as a core part of the identity of Gypsy, Roma Travellers. The distancing measures enacted in response to coronavirus reduce social contact within communities as well as people’s ability to be nomadic and roam. Both of these factors have implications for the long-term mental health and well-being of people within these communities in which mental ill-health is on the increase.

A lack of data

As well as widespread stigma, a major difficulty in truly understanding the impact of coronavirus on Gypsy, Roma Traveller communities is a lack of systematic data collection.

While Gypsy, Roma Travellers were recognised as a distinct ethnic minority category in the last census, the NHS does not currently incorporate this category into its ethnicity data. As such, individuals are not identified in health services as originating from these communities. Nor are they included as a specific ethnicity in Public Health England’s reports on COVID-19 health disparities. Instead they are merged into the category of “any other white background”.

Unless this is addressed at a national level, the health impact of coronavirus on these marginalised communities will remain hidden.

Vanessa Heaslip, Principal Academic Nursing, Bournemouth University and Jonathan Parker, Professor of Society & Social Welfare and Director of the Centre for Social Work and Social Policy, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Link between autism and eating disorders may be due to an inability to identify emotions

Alexithymia is a personality trait characterised by an inability to identify and describe emotions.
Rawpixel.com/ Shutterstock

Rachel Moseley, Bournemouth University and Laura Renshaw-Vuillier, Bournemouth University

Eating disorders have the highest mortality rates of any mental illness. They don’t discriminate, affecting people of all ethnicities, sexualities, gender identities, ages and backgrounds. However, one group is disproportionately affected by these disorders: people on the autism spectrum.

Eating disorders in autistic people are poorly understood, but they tend to be more severe and long-lasting. The longer a person lives with their eating disorder, the harder it is to recover. This may partly explain why some studies suggest autistic people have a poorer prognosis in therapy.

Longer-lasting eating disorders are associated with a greater likelihood of death. The fact that autistic people are vulnerable to chronic eating disorders, alongside other mental illnesses, may be one reason why they die one to three decades earlier, on average, than non-autistic people.

So why are autistic people more vulnerable to eating disorders? A couple of reasons have been suggested.

Dieting

One general and major risk factor for developing an eating disorder is dieting. For people who might already be genetically vulnerable to eating disorders, dieting seems to kick-start something in the brain that can lead to the disorder developing.

While autistic people aren’t more likely to diet than the average person, certain features of autism – including attention to detail, determination and intense fixated interests – may make them better able to maintain the restrictions needed for long-term weight loss when they choose to diet.

The cognitive rigidity that we see in autistic people may also make it easy for them to get stuck in patterns of eating behaviour, while their preference for sameness may cause them to have a limited diet to begin with. For some autistic people, insensitivity to hunger, gastrointestinal problems and sensitivity to tastes, smells and textures make eating difficult anyway.

Paper bag with frowning face next to empty plate and cutlery.
Certain autism traits may already make eating difficult for some.
ChameleonsEye/ Shutterstock

Moreover, because autistic people are often bullied and socially isolated, dieting and weight loss may give them back a sense of control, predictability, reward and self-worth. Eating disorders may even numb feelings of anxiety and depression.

Alexithymia

A core feature of people with eating disorders is that they find it difficult to identify and cope with emotion. As autistic people struggle with emotions in similar ways, our research team wondered whether this might help explain why they are more likely to have eating disorders.

The personality trait characterised by an inability to identify and describe emotions is called alexithymia. Being alexithymic is like being emotionally colour-blind, and it ranges from subtle to severe. While one alexithymic person might find it hard to pinpoint what emotion they’re feeling, another might notice physical signs such as a racing heart and be able to identify they’re feeling angry or frightened.

Alexithymia is associated with many negative outcomes like suicide and self-injury. In part, this may be because people who cannot identify or express their emotions find it hard to soothe themselves or get support from others.

To see whether alexithymia might contribute to eating disorders in autism, we looked at eating-disorder symptoms and autistic traits in the general population. Autism is a spectrum disorder, so everyone has some level of autistic traits – having these traits does not mean a person is actually autistic. Nevertheless, they can tell us something about the nature of autism itself.

In two experiments with 421 participants, we found that higher autistic traits correlated with higher eating-disorder symptoms. We also found that higher levels of alexithymia wholly or partially explained this relationship. Our results suggest that having higher autistic traits alongside difficulties identifying and describing emotions may make a person more vulnerable to developing eating-disorder symptoms.

Interestingly, we found differences between male and female participants. While alexithymia was related to eating-disorder symptoms in women, there were no links between alexithymia and eating-disorder symptoms in men. Since the male group was small, however, we couldn’t be sure these findings would hold up in a bigger sample.

Next steps

This research can’t show conclusively that alexithymia causes eating disorder symptoms in people with autistic traits, or indeed autistic people. It might be that the relationships work backwards, and eating-disorder symptoms give rise to alexithymia and to autistic features.

However, first-person accounts from autistic people are consistent with the idea that alexithymia might play a role in their eating disorders. One participant even described how restricting her calorie intake reduced internal sensations that – unknown to her, being unable to identify them – caused her much anxiety.

If supported by further research, these findings have potential implications for treatment. Clinicians already know that therapies need to be tailored for autistic and non-autistic patients, but how best to achieve this is still uncertain. Preliminary research like this may offer some clues by highlighting alexithymia as a potential target. Alexithymia is currently not addressed by clinicians, either in autistic people or in those with eating disorders.

As there are many negative outcomes associated with being autistic – such as high suicide rates and greater risk of eating disorders – it will be important to explore how much alexithymia, not autism itself, actually contributes to these negative outcomes. Focused interventions to treat alexithymia might potentially reduce these risks.

Rachel Moseley, Senior Lecturer in Psychology, Bournemouth University and Laura Renshaw-Vuillier, Senior Lecturer, Psychology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Conversation article: Video games affect your moral development but only until you’re 18

Pixel-Shot/Shutterstock, Author provided

Sarah Hodge, Bournemouth University

Young people have probably spent much more of their time than usual playing video games over the last few months thanks to the coronavirus pandemic. One report from telecoms firm Verizon said online gaming use went up 75% in the first week of lockdown in the US.

What impact might this have on young people’s development? One area that people are often concerned about is the effect of video games, particularly violent ones, on moral reasoning. My colleagues and I recently published research that suggested games have no significant effect on the moral development of university-age students but can affect younger adolescents. This supports the use of an age-rating system for video game purchases.

Our sense of morality and the way we make moral decisions – our moral reasoning – develop as we grow up and become more aware of life in wider society. For example, our thoughts about right and wrong are initially based on what we think the punishments and/or rewards could be. This then develops into a greater understanding of the role of social factors and circumstances in moral decisions.

There is a long-standing debate around the effects of video games on moral development, particularly in young people, which typically focuses on whether violent content causes aggressive or violent behaviour.

Yet the moral dimension of video games is far more complex than just their representation of violence, as they often require players to make a range of moral choices. For example, players of the game BioShock have to choose whether to kill or rescue a little girl character known as a Little Sister.

A player with more mature moral reasoning may consider the wider social implications and consequences of this choice rather than just the punishment or rewards meted out by the game. For example, they may consider their own conscience and that they could feel bad about choosing to kill the little girl.

Video games’ effect on moral reasoning goes beyond how violent they are.
Sean Locke Photography/Shutterstock

We surveyed a group of 166 secondary school students aged 11-18 and a group of 135 university students aged 17-27 to assess their gaming habits and the development of their moral reasoning using what’s known as the sociomoral reflection measure. This involved asking participants 11 questions on topics such as the importance of keeping promises, telling the truth, obeying the law and preserving life. The results suggested a stark difference between the two groups.

Among secondary students, we found evidence that playing video games could have an effect on moral development. Whereas female adolescents usually have more developed moral reasoning, in this case we found that males, who were more likely to play video games for longer, actually had higher levels of reasoning. We also found that those who played a greater variety of genres of video games had more developed reasoning.

This suggests that playing video games could actually support moral development. But other factors, including feeling less engaged with and immersed in a game, playing games with more mature content, and specifically playing Call of Duty and Grand Theft Auto, were linked (albeit weakly) with less developed moral reasoning.

No effect after 18

Overall, the evidence suggested adolescent moral development could be affected in some way by playing video games. However, there was little to no relationship between the university students’ moral reasoning development and video game play. This echoes previous research that found playing violent video games between the ages of 14 and 17 made you more likely to do so in the future, but found no such relationship for 18- to 21-year-olds.

This might be explained by the fact that 18 is the age at which young people in many countries are deemed to have become adult, leading to many changes and new experiences in their lives, such as starting full-time work or higher education. This could help support their moral development such that video games are no longer likely to be influential, or at least that currently available video games are no longer challenging enough to affect people.

The implication is that age rating systems on video games, such as the PEGI and ESRB systems, are important because under-18s appear more susceptible to the moral effects of games. But our research also highlights that it is not just what teenagers play but how they play it that can make a difference. So engaging with games from a wide variety of genres could be as important for encouraging moral development as playing age-appropriate games.

Sarah Hodge, Lecturer in Psychology and Cyberpsychology, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.