
How farms can help improve the lives of disadvantaged young people


A visiting farmer tends the animals. Future Roots, Author provided

By Dr Sarah Hambidge (Post-Doctoral Researcher), Bournemouth University.

A couple of years ago, I met Adam (not his real name) at a farm in Dorset. Adam was 14 and had been excluded from mainstream education due to behavioural difficulties and a disruptive home life. He had consequently become involved in regular underage drinking and antisocial behaviour. Adam was being exploited and groomed as a drug runner for a London drug gang infiltrating rural areas. He told me that he had been given a knife by gang members and encouraged to use it to protect himself if necessary against rival gangs or local drug dealers.

The farm where I met him is not a typical farm but a social one, where farming practices and animal-assisted therapy are used therapeutically to provide health, social and educational care services for disadvantaged young people who have become disengaged from mainstream education. Stories such as Adam’s are growing increasingly familiar to staff at the farm he attended, who see other vulnerable young people referred to their service.

Learning new skills. Sarah Hambidge

Many of the young people living in rural Britain who are being exploited by these gangs are, like Adam, disengaged from mainstream education and at risk of becoming, or already are, NEET (not in education, employment or training). There are 808,000 young people (aged 16-24) in the UK who are NEET.

Being NEET has a long-term impact on a young person’s life, leaving them vulnerable to substance misuse, offending behaviour, physical and mental health problems, academic underachievement and reduced employment. These young people are subsequently regarded as a concern to the police, health, education and social care professionals.

Yet current interventions are failing to reduce the number of young people becoming NEET. These interventions typically focus on providing the young person with vocational education, despite the fact that the most common vocational qualifications in the UK have very little or no relevance to the labour market.

Interventions that offer a restorative approach, with therapeutic support and a focus on learning, however, are acknowledged to be more successful.

Farm animal therapy. Sarah Hambidge

A green future

Earlier this year, the government launched a 25-year environment plan. The plan acknowledged the importance of connecting children and young people to nature through learning, as well as the benefits of a physical, hands-on experience as a pathway to good health and well-being. The government has pledged £10m to support local strategies which use the natural environment and has further committed to a national expansion of social farming by 2022. This will treble the number of available places to 1.3m per year for children and adults in England.

On social farms, health, social or specialist educational care services for vulnerable people are delivered through structured programmes of farming-related activities. Social farming is established in numerous European countries. Norway currently operates 1,100 social farms, compared to 240 in the UK.

Taking a break on the farm. Sarah Hambidge

Young people participate in a variety of seasonal farming-related activities, including animal husbandry, crop and vegetable production and woodland management. Social farming has been found to have a positive impact on physical and mental health, while offering opportunities for transferable skills, personal development, social inclusion and rehabilitation.

Social farming

When I met Adam, I was in the midst of a research project evaluating whether a year-long farming intervention can prevent disengaged young people from low socioeconomic backgrounds from becoming NEET. Participants typically attend a four-hour session once a week at the farm.

Future Roots, the farm I researched, employs a mix of teachers, youth and social workers and therapists. It offers a different model of learning for those struggling in mainstream education. My research demonstrated that the use of the natural environment as a mechanism for change was effective in reducing the risk of becoming NEET.

The young people learn to care for a variety of animals. Sarah Hambidge

The young people I followed displayed a significant reduction in self-reported mental health risks and behavioural regulation difficulties; improved social relationships and coping; improved life and work skills; and re-engagement with learning. All of the young people were in employment or training six months after their time at the social farm finished.

Indeed, the social farm was the only place where Adam said he felt safe. He was able to develop a sense of belonging and trust which enabled him to talk about the difficulties he was experiencing in his life. Without the social farm intervention, staff said that Adam would likely have gone on to harm himself or others. The farmer refers to the changes seen in the young people as a “chrysalis butterfly effect”: the positive transformation seen in these young people as they turn their lives around to look to the future is truly inspiring.


Dr Sarah Hambidge, Postdoctoral Researcher, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Remembering Srebrenica, more than 20 years on

EPA/Jasmin Brutus

By Dr Melanie Klinkner, Bournemouth University and Giulia Levi, Bournemouth University.

One of the darkest hours in recent human history, the 1995 Srebrenica massacre, has plenty of unpleasant parallels in today’s world, from Syria to Myanmar. Twenty-three years after the massacre in and around the Bosnian enclave of Srebrenica, remembrance of what has been described as “scenes from hell, written on the darkest pages of human history” is as important as ever.

The events in and around Srebrenica between July 10-19 1995 are well known. In those few days, an estimated 8,000 Muslim Bosniaks were murdered by Bosnian Serb forces. Efforts to find, recover, identify and repatriate the victims’ remains are ongoing – and the task is a hugely complex one.

Every year at the Srebrenica-Potočari Memorial Centre and Cemetery, more victims are laid to rest. This year, 35 people have been identified and will be buried. Of the 430 Srebrenica-related sites where human remains have been recovered, 94 are graves and 336 are surface sites with human remains scattered on the ground. Pathologists and anthropologists examined more than 17,000 sets of human remains related to Srebrenica, resulting in around 7,000 identifications, most of them via DNA. To gather enough DNA to make those identifications, more than 20,000 DNA samples had to be collected.

Slow justice

It was only in autumn 2017 that Ratko Mladić, a former general of the Bosnian Serb forces, was convicted of the crimes that took place in Srebrenica – genocide and persecution, extermination, murder, and the inhumane act of forcible transfer. Mladić is one of relatively few defendants to have appeared before the International Criminal Tribunal for the Former Yugoslavia (ICTY) charged with genocide.

This is because for a conviction on the grounds of genocide, the prosecution has to prove a catalogue of things. To be convicted of the crime of genocide, the accused must have deliberately intended “to destroy, in whole or in part, a national, ethnic, racial or religious group as such”. Punishable under Article 4(3) of the ICTY Statute are also conspiracy to commit genocide, incitement to commit genocide, attempts to commit genocide and complicity in genocide. Two things have to be proven: the actus reus (the actual killings, serious bodily or mental harm and deliberate infliction of conditions designed to bring about the destruction of the group) and the mens rea (the specific intent to destroy the group).

Mladić’s 2017 conviction did not bring an end to all aspects of his case. In March 2018, both the defence and prosecution filed their notices of appeal. Though not in relation to Srebrenica, the prosecution submits that the trial chamber erred in two of its findings: first, that Bosnian Muslims in the areas of Foča, Kotor Varoš, Prijedor, Sanski Most and Vlasenica did not constitute a substantial part of the Bosnian Muslims of Bosnia and Herzegovina, and second, that Mladić (and others) did not intend to destroy those Bosnian Muslims. As a result, the proceedings are ongoing.

Bosnian Muslims carry coffins with remains of Srebrenica victims, 2017. EPA/Jasmin Brutus

During the 530 days of Mladić’s original trial, 377 witnesses appeared in court, some of them victims of war crimes. Victims often have many needs: to tell their stories, to contribute to public knowledge and accountability, to publicly denounce the wrongs that were committed against them and others, to bear witness on behalf of those who did not survive, and to receive reparations, public acknowledgement or apologies. They may wish to confront the accused, to find out the truth about what happened to their loved ones, to contribute to peace goals or to help prevent the perpetration of further abuse. Many risk their own personal safety to tell their stories, or those of victims who did not survive.

And yet, a recent report by international NGO Impunity Watch paints a bleak picture stating that “Western Balkan states have done very poorly when it comes to victim participation in [transitional justice] processes. Victims’ voices are marginalised and their rightful claims have been politicised by the different sides.”

Remembrance and responsibility

Impunity Watch describes a continuing “battleground of conflicting narratives, in which each side claims victimhood and blames the other for past abuses”. This does not bode well for the future.

The divisions in Bosnia are hard to ignore; Srebrenica’s Serb mayor, Mladen Grujičić, denies that the genocide occurred, as does Milorad Dodik, the leader of Bosnia’s Serb-led entity, Republika Srpska. Many Serbian nationalists regard Mladić as a war hero. To many people, his conviction is therefore effectively meaningless.

And yet, plenty of civil society activities, interventions and educational programmes have been devised. In Bosnia, Youth United in Peace and Youth Initiative for Human Rights, to name but two, offer young people the chance to hear different perspectives about the past through workshops and visits to commemorative places of all sides. Such projects try to counter ethnic segregation to offer shared space for dialogue.

In a speech to the United Nations in 1958, Eleanor Roosevelt famously said:

Where, after all, do universal human rights begin? In small places, close to home – so close and so small that they cannot be seen on any maps of the world. Yet they are the world of the individual person; the neighbourhood he lives in; the school or college he attends; the factory, farm, or office where he works.

Such are the places where every man, woman, and child seeks equal justice, equal opportunity, equal dignity without discrimination. Unless these rights have meaning there, they have little meaning anywhere. Without concerted citizen action to uphold them close to home, we shall look in vain for progress in the larger world.

All too often this is forgotten. But with stark societal divisions palpable in many parts of the world, we have to keep reminding ourselves that all others are above all else human beings. Only if we do that will the idea of human rights be meaningful.


Dr Melanie Klinkner, Principal Academic in International Law, Bournemouth University and Giulia Levi, PhD Candidate, Faculty of Health and Social Sciences, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Grand Challenges – Clean Growth and Future of Mobility

There are several initiatives to develop state-of-the-art low-carbon energy technologies to capture, generate and store energy from renewable sources. Non-renewable sources of energy, especially those derived from fossil fuels, are finite and contribute to greenhouse gas emissions; meanwhile, ozone depletion and global warming are on the rise.

There have been recent developments in tidal, wind, solar PV and solar thermal technologies, but challenges remain in terms of efficiency and the amount of useful energy that can be generated relative to global demand. In addition, dependency on rare earth materials persists, and the thermal efficiency of thermo-fluids (fluids used as a medium of heat energy transfer) has upper thresholds that affect the durability of systems. The costs of conventional energy materials such as cobalt and lithium carbonates have been rising sharply since 2015-16, and the thermal instability of lithium-ion batteries remains a significant issue.

At BU NanoCorr, Energy & Modelling (NCEM) Research Group we are developing novel solar thermal (low carbon) technologies incorporating nano enhanced thermofluids and storage materials.

Research and development in low-carbon technology at BU is focused on two main themes: clean growth and the future of mobility. For further details, and to take part in the discussion by providing your comments, please click on the link (it takes less than a minute to register).

World Cup online betting is the highest it’s ever been


The 2018 World Cup inspires new gamblers. Shutterstock

By Dr Raian Ali, Bournemouth University; Dr Emily Arden-Close, Bournemouth University; Dr John McAlaney, Bournemouth University, and Keith Phalp, Bournemouth University.

Sports betting is worth up to £625 billion per year, with 70% of that trade reckoned to come from football. During big sporting competitions, such as the World Cup, even more money is spent gambling than usual. Over the 2018 World Cup, bookmakers are estimated to make a profit of US$36.4 billion. And in the UK, the amount of money spent on gambling during the World Cup is expected to more than double, from £1 billion in 2014 to £2.5 billion this year.

Sports gambling is being driven by the unlimited availability of online betting and by the fact that no physical money is exchanged, which makes financial transactions seem less real. The vast amount of data that online gambling sites collect also enables them to personalise offers to individual gamblers. This data could instead be used to help people gamble responsibly, by warning users in real time when they are exhibiting problematic gambling behaviours.

For many people, gambling isn’t just a fun novelty every four years. About 430,000 citizens in the UK can be identified as problem gamblers. These individuals have lost hundreds of thousands of pounds online, which has impacted not only the gamblers but also their families.

High-profile but infrequent betting events such as the World Cup exacerbate the issues that problem gamblers face. Seeing others engage in betting, coupled with the advertisements from betting firms, leads problem gamblers to try to convince themselves that they do not have a problem. Environmental cues can also trigger the urge to gamble in those who have a gambling problem. So the intensive advertising used by betting firms during the World Cup, along with media coverage of the tournament in general, may further push problem gamblers towards making harmful decisions.

Watching your habit

Online gambling sites have an infinite memory for bets: when they were made, for how much, on what, and so on. This data is a rich source that websites use to tailor offers and marketing material to a gambler’s potential interests. But this personalisation exploits cognitive biases in gamblers and encourages them to increase risk-taking and, by extension, gambling.

There is only a fine line between the legitimate marketing and personalisation of content and offers on the one hand and exploitation and manipulation on the other. For example, the tracking of a gambler’s betting pattern means the gambler can be targeted with offers following heavy losses, encouraging them to chase losses even further.
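The loss-chasing pattern described here can be sketched as a simple heuristic: flag a bettor whose stakes jump sharply immediately after losses. The `Bet` record, thresholds and function names below are invented for illustration; real operators' detection models are proprietary and far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    stake: float   # amount wagered
    won: bool      # outcome of the bet

def shows_loss_chasing(bets, escalation=1.5, min_occurrences=2):
    """Flag a pattern where the stake rises sharply right after a loss.

    Purely illustrative: counts how often a losing bet is followed by a
    stake at least `escalation` times larger, and flags the history once
    that happens `min_occurrences` times.
    """
    occurrences = 0
    for prev, cur in zip(bets, bets[1:]):
        if not prev.won and cur.stake >= prev.stake * escalation:
            occurrences += 1
    return occurrences >= min_occurrences

# Stakes double after each loss: the classic chasing signature.
history = [Bet(10, False), Bet(20, False), Bet(40, False), Bet(5, True)]
print(shows_loss_chasing(history))  # True
```

The same signal that lets a bookmaker time an enticing offer after a losing streak could just as easily power a warning to the gambler instead.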

But this same data could also be used to support reductions in problem gambling, either led by gamblers themselves or with the support of a counsellor or software. Such transparency could enhance the image of the gambling industry and make responsible gambling a shared responsibility between gamblers and bookmakers.

A chance for change

In our EROGamb project, funded by GambleAware and Bournemouth University, we advocate a policy change where gambling sites provide gambling behavioural data to gamblers and their surrogates in real-time.

This data would provide an unprecedented opportunity to tackle problem gambling. For example, the data could lead to the app informing gamblers that they are exhibiting problematic gambling patterns. The real-time collection of information such as “the gambler has reached the monthly spending limit” could trigger a message visualising their past betting behaviour and a reminder of a commitment already made.
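A minimal sketch of the kind of real-time trigger described above, assuming a hypothetical monitor that a gambling site streams each bet into. The class, limit and message wording are invented for illustration and are not EROGamb's actual design.

```python
class SpendingMonitor:
    """Illustrative real-time spending-limit alert (hypothetical API)."""

    def __init__(self, monthly_limit):
        self.monthly_limit = monthly_limit
        self.spent = 0.0
        self.alerts = []

    def record_bet(self, amount):
        """Called once per bet as it happens; fires an alert on crossing the limit."""
        previous = self.spent
        self.spent += amount
        # Alert only at the moment the cumulative spend crosses the limit.
        if previous < self.monthly_limit <= self.spent:
            self.alerts.append(
                f"Limit reached: £{self.spent:.2f} of £{self.monthly_limit:.2f} this month"
            )

monitor = SpendingMonitor(monthly_limit=100)
for stake in (40, 35, 30):   # cumulative spend crosses the limit on the third bet
    monitor.record_bet(stake)
print(monitor.alerts)
```

In a real deployment, crossing the threshold would trigger the visualisation of past betting behaviour and the reminder of prior commitments that the article describes, rather than a simple text message.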






In our studies, digital addicts, including online gambling addicts, have indicated that having access to such data would act as a wake-up call, raising awareness. Digital media users, in general, like to be in control of their usage through labels and awareness tools.

Similar facilities have started to appear in mainstream digital media. On Google, for example, it is now possible to download your data, and on Facebook to download your profile and your history of interaction, though not yet as a real-time stream of data as actions happen.

How to retrieve and use gambling-related data to stay in control of gambling behaviour.
The EROGamb Project

Challenges

We understand the barriers to implementing this vision. Gambling operators may not have such data readily available and may even rely on third parties to offer certain games. Some also fear that gamblers might share the data with competitor gambling sites, giving away information about marketing practices. But the General Data Protection Regulation (GDPR) right to data portability holds that gamblers must not be prevented from accessing and sharing their data.

Given the advantages, and the increasing demand for transparency, this could eventually become recommended practice: a way to demonstrate corporate social responsibility and inspire public and client trust in the gambling industry. We are preparing a charter for the gambling industry to commit to this approach.

The rise of online gambling, combined with the record amount of money being spent on gambling at this year’s World Cup makes this the perfect time to discuss what we can do to prevent and combat gambling addiction. Simply by using data to help people be better aware of their gambling habits, rather than hooking them back into their next bet, gambling sites could make a massive difference.



This article was originally published on The Conversation. Read the original article.

Why this football tournament should be called the men’s World Cup

By Dr Jayne Caudwell, Bournemouth University

The globalisation of football means it can now be found in most parts of the world. It is celebrated as the national sport in many countries. But we forget that “football” actually means “men’s football”. It’s the same with other popular sports: our habit is to refer to basketball and women’s basketball, cricket and women’s cricket, ice hockey and women’s ice hockey. This naming establishes men’s football as the dominant, universal and natural norm, while women’s football becomes the “other” version.

If we want a level football playing field, then “football” should be redefined by changing our reference to tournaments, championships and leagues to “men’s football” if that is what is being played. It’s time we started referring to the men’s football World Cup, just as we refer to the women’s football World Cup.

Women and girls have long been treated as second-class citizens in the many worlds of football, including playing, officiating, governing and spectating. And indeed, in the build up to the 2018 men’s World Cup, there was much discussion about racism and homophobia – but practically none about football, gender, sexism and misogyny.

The histories of the development of football in most countries around the world show that women and girls have been denied access to pitches, equipment, coaches, training, stadiums and financial support. These material opportunities are important because they enable and validate participation – and full football citizenship.

Finland takes on Austria in a qualifier for the 2019 Women’s World Cup. EPA

Media sport pages cover men’s sport. During the football season, the coverage is dominated by stories of men’s football. Women footballers seem to not exist. The sport press obliterates them.

But women and girls are playing, officiating, spectating and commentating on the game in ever increasing numbers around the world. The England women’s team outperforms the men’s team on the European and world stage. They are currently ranked ten places higher, in second position. And yet, the gender pay gap in football is atrocious.

Ignoring sexism

While Russia, as host of the men’s football World Cup 2018, has been criticised for its poor record in dealing with homophobic and racist abuse, nothing has been said about gender-based abuse or discrimination.

Instead, ahead of the men’s World Cup, Russian MPs have been arguing over whether Russian women should or should not have sex with visiting (presumably male) football fans. The UK Foreign Office released advice on race and LGBT concerns, but there is nothing on how sexist chanting can make men’s football a hostile environment for women. You only need to look at the sexism experienced by the doctor Eva Carneiro and the assistant referee Helen Byrne in the men’s Premier League to see how this plays out.

What’s more, many of the concerns about homophobia and racism at the men’s World Cup stem from wider cultural issues in Russia. The same problems are evident with sexism and misogyny, yet they are curiously absent from the discussion when it comes to football. Cultural problems that affect men extend into the sporting arena, but not those that affect women.

In 2017, the Russian parliament passed legislation loosening laws on domestic violence. Russian women who support the #MeToo movement have come up against draconian assembly laws that say only one person is permitted to make a public protest.

There are no campaigns in international men’s football that aim to stop sexism, or call for anti-sexism and an end to gender-based violence.

Meanwhile, the women and girls who have fought hard to play football often encounter negative responses from the general public and from the media. Sport sociologists have found that sportswomen are trivialised, sexualised and experience symbolic annihilation – they simply don’t exist in images of the sport. A recent poster depicting Iranian fans is a prime example. Not a single female face features.

Women’s and girls’ sporting achievements are reduced as a result of ridicule. Their bodies are considered sexual objects rather than for playing sport. Former FIFA president Sepp Blatter’s comment that women should play in tighter shorts to attract more fans to the game is a classic example of this. More recently, feminist author Laura Bates challenged FIFA for describing player Alex Morgan as “easy on the eye and good looks to match” as well as the FA for tweeting about “lionesses go back to being mothers, partners and daughters” after playing in the women’s World Cup.

It’s easy to imagine that this men’s World Cup in Russia will continue to disregard gender, sexism and misogyny. And yet, sport, specifically football, has potential to incite change, and reform.

Renaming to men’s football is an easy and simple step in the direction towards equality. We may as well start with the men’s World Cup 2018.


Jayne Caudwell, Associate Professor Leisure Cultures, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Victorian pleasure piers are unique to Britain, but they are under threat


Edmond Holland/Shutterstock.com

 

By Dr Anya Chapman, Bournemouth University.

A stroll along a pier remains the most popular activity for visitors to the British seaside, with 70% of them enjoying a walk over the waves.

For many, the seaside pier is perhaps the most iconic symbol of the British seaside holiday and the epitome of excursions to the coast. Piers have always provided holidaymakers with entertainment, from the grand pavilions and theatres of the Victorian era, to the amusement arcades of the 1980s. For two centuries, piers have been the place to see and be seen at the seaside.

Victorian pleasure piers are unique to the UK, but they are under threat: in the early 20th century nearly 100 piers graced the UK coastline, but almost half of these have now gone.

By their very nature, seaside piers are risky structures. When piers were constructed, British seaside resorts were at the height of their popularity. The Victorians wanted to demonstrate engineering prowess and their ability to master the force of the sea. Some lasted longer than others, with Aldeburgh pier in Suffolk lasting just less than a decade before it was swept away by a drifting vessel. At the other end of the spectrum is the Isle of Wight’s Ryde pier, which at over 200 years is the oldest pleasure pier in the UK.

Yet the longevity of such piers presents them with new risks: fire, maintenance issues, rising costs, and climate change. Piers face an uncertain future. The National Piers Society estimates that 20% of today’s piers are at risk of being lost.

Piers at risk

Over the last 40 years, many notable piers have succumbed to time and tide. Perhaps the most iconic of these losses is Brighton West Pier, which has suffered multiple storms and fires since closure in 1975, leaving an isolated skeleton as a haunting reminder. Now there is growing recognition that seaside piers are vital to coastal communities in terms of resort identity, heritage, employment, community pride, and tourism. In fact, the UK government now offers funding to enable the revival of piers and other seaside heritage.

Brighton West Pier. National Piers Society

Despite the sea change in the perceived importance of seaside piers, many remain derelict and in a state of decay. One such pier is Weston-Super-Mare’s Birnbeck Pier, on the west coast, which has been closed for over three decades. Birnbeck Pier is unusual in that it is the only pier which links to an island, but as time has passed, parts of the structure have crumbled into the sea. Despite the endeavours of the local community and groups such as The Birnbeck Regeneration Trust, the owner of the pier refuses to sell or regenerate the pier.

This is in stark contrast to nearby Clevedon Pier, which was deemed “the most beautiful pier in England” by the poet Sir John Betjeman. After partial collapse and the subsequent closure of the pier in 1970, there were calls for its demolition. Instead, Clevedon Pier was saved and reopened in 1998, and is now the UK’s only Grade I listed seaside pier. Today it stands as a testament to the Clevedon Pier Heritage Trust, which continues to develop the pier with a new visitor centre, wedding venue and conferencing space. Recently, the pier gained a new group of fans when it featured as a backdrop to a One Direction music video.

Thriving piers

Despite their advancing years, since the turn of the 21st century many piers have found a new lease of life. The high-profile regeneration of Hastings Pier, led by a local community trust and backed by Heritage Lottery Funding, has spearheaded the revitalisation of many seaside piers (although the pier, controversially, was recently sold to a commercial investor). Nevertheless, a number of coastal communities have successfully regenerated their piers through the formation of pier trusts, including those at Swanage and Herne Bay. Other seaside towns are being even more ambitious and hoping to rebuild their piers or to build brand new piers.

Swanage Pier. National Piers Society

Local authorities within seaside resorts are also promoting their piers as flagship tourist attractions and investing in their refurbishment and new facilities. Southport Pier, which narrowly escaped demolition during the 1990s, is now at the heart of the resort’s development strategy and is currently undergoing a £2.9m refurbishment which includes the addition of new catering and retail facilities.

The piers that are thriving in the 21st century are those that provide a unique selling point. Bournemouth Pier now features the only pier-to-beach zip line, and its former theatre now houses adrenaline-packed activities such as climbing walls, an aerial assault course, and a vertical drop slide. In Folkestone, the Harbour Arm, which was redeveloped as a pleasure pier in 2016, provides a range of pop-up bars and restaurants and its very own champagne bar. Weston’s Grand Pier offers family fun with a modern twist and even boasts an indoor suspended go-kart track. Southwold Pier boasts a novelty automaton arcade.

Weston-Super Mare Grand Pier. National Piers Society

By staying tuned to modern desires as well as a sense of nostalgia, piers will continue to adapt to changing tastes and provide entertainment and pleasure for seaside visitors.

But perhaps the biggest threat they face today is climate change, and the attendant rising sea levels and increasingly frequent storm surges. Cromer, Saltburn, and Blackpool North Pier have all recently been significantly damaged by storms. The World Monuments Fund has recognised the threat of extreme weather events to seaside piers by adding Blackpool’s three piers to their 2018 Watch List. With seaside piers regaining their popularity, their next big challenge will literally be finding a way to weather the storm.


Anya Chapman, Senior Lecturer in Tourism Management, Bournemouth University

This article was originally published on The Conversation. Read the original article.

The secret information hidden in your hair


Shutterstock

 

By Dr Richard Paul, Bournemouth University.

Your hair can say a lot about you. It doesn’t just give people clues about your personality or your taste in music. It can also record evidence of how much you drink, whether you smoke or take drugs, and perhaps even how stressed you are. My colleagues and I research how hair can be used to provide more accurate testing for these attributes. And a recent court case shows how far the technology has come.

In 2008, a mother who had been struggling with alcohol abuse was asked by a UK court judging a child custody case to abstain from drinking for one year. To assess whether she managed to do this, scientists used a hair analysis that can detect long-term drug or alcohol abuse (or abstinence) over a period of many months, from just one test.

This case turned out to be a landmark moment for toxicological hair analysis. The labs analysing the mother’s hair suggested that she may have been drinking during the time she was supposed to be abstinent. The case ended up in the High Court, where the scientific principles underlying hair testing and, crucially, the way the results are reported were thoroughly debated. The judge was critical of the interpretation of the hair analysis data and disagreed with the scientists, ruling that there was no evidence to support drinking during the defined time-period.

Fast forward to 2017 and hair analysis featured in the High Court again. Yet this time the reliability of hair testing was confirmed. A lot changed in the intervening years between these cases. Technology advanced but, importantly, so did our understanding of what hair analysis data actually means.

The traditional samples for drug and alcohol testing are blood and urine. These provide evidence for cases where we require an indication of exposure to drugs and alcohol in a very recent time frame. These samples have what is referred to as a “window of detection”. This is a timeframe over which that sample can demonstrate exposure to drugs or alcohol. The window of detection for blood is often measured in hours, and urine can show evidence over a few days, possibly a few weeks.

By contrast, hair can show a retrospective history of your drug or alcohol consumption (or abstinence) over many months. This level of information makes hair testing invaluable in a wide variety of legal scenarios. If you need to screen potential employees for a safety-critical role, you can use a hair test to check they are not regular drug users. What if you’re concerned your drink was spiked at a party, but too much time has passed for any drug to still be found in your blood or urine? The drugs can remain trapped in your hair, which gives you a longer window of detection and allows scientists to find traces of the drug long after the actual crime event.

Ready for my close up. Shutterstock

My research group is investigating factors that affect the hair concentration of certain chemicals produced when the body processes alcohol (metabolites). This sort of work is important to give confidence to the results of hair testing when presented in court. We need the utmost confidence in the data when a court judgment may have life-changing consequences.

We recently showed that hair sprays and waxes can greatly increase the level of alcohol metabolites found in hair, giving a false positive result in an alcohol test. In one of our experiments, a volunteer who was strictly teetotal tested negative for fatty acid ethyl esters (metabolites of alcohol) in head hair untreated with hair spray, but tested positive after application of hair spray. Not just a little positive either. The volunteer tested significantly over the threshold for chronic excessive alcohol consumption after using hair spray.

This may sound alarming for a test that is used in court, but now that scientists are aware of these limitations, procedures can be put in place to mitigate against them and guidance can be updated. Ethyl glucuronide (a different alcohol metabolite) is not affected by hair sprays and waxes and so is a better target to test when someone uses cosmetic products.

Other ways of testing

Hair is not the only alternative to blood and urine testing. I’m currently investigating whether fingernails might be a better sample to test in cases where we need to prove abstinence from alcohol. It has been shown that fingernails may incorporate significantly more ethyl glucuronide (an alcohol metabolite) than hair samples. This means fingernails may be more sensitive than hair and could be better at distinguishing low levels of drinking and complete abstinence.

Toxicological hair analysis is not about catching criminals. It’s not about penalty or punishment. It’s about helping people. Results from hair testing can help support people struggling with addiction. In the future I hope we will also be using hair analysis as a diagnostic tool in healthcare.

The research I’m conducting at the moment is evaluating the potential for hair to be used as a diagnostic marker of chronic stress. Stress can lead to very serious healthcare issues. We are examining the stress hormone cortisol to see if we can identify people at risk from future healthcare issues from the concentration of this hormone in hair.

If successful, this work will take hair analysis into a new realm. I’d like to see a future where hair testing is used for a national screening programme for older adults who are most at risk from chronic stress. This could allow scientists to target interventions to lower stress at people who need them the most, which could significantly improve the health and well-being of older people in particular.


Richard Paul, Principal Academic in Biological Chemistry, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Digital addiction: how technology keeps us hooked


There are a number of reasons why you can’t get away from your screen. shutterstock

By Dr Raian Ali, Bournemouth University; Dr Emily Arden-Close, Bournemouth University, and Dr John McAlaney, Bournemouth University

The World Health Organisation is to include “gaming disorder”, the inability to stop gaming, in the International Classification of Diseases. By doing so, the WHO is recognising the serious and growing problem of digital addiction. The problem has also been acknowledged by Google, which recently announced that it will begin focusing on “Digital Well-being”.

Although there is a growing recognition of the problem, users are still not aware of exactly how digital technology is designed to facilitate addiction. We’re part of a research team that focuses on digital addiction and here are some of the techniques and mechanisms that digital media use to keep you hooked.

Compulsive checking

Digital technologies, such as social networks, online shopping, and games, use a set of persuasive and motivational techniques to keep users returning. These include “scarcity” (a snap or status is only temporarily available, encouraging you to get online quickly); “social proof” (20,000 users retweeted an article so you should go online and read it); “personalisation” (your news feed is designed to filter and display news based on your interest); and “reciprocity” (invite more friends to get extra points, and once your friends are part of the network it becomes much more difficult for you or them to leave).

Some digital platforms use features normally associated with slot machines. Antoine Taveneaux/Wikimedia, CC BY

Technology is designed to utilise the basic human need to feel a sense of belonging and connection with others. So, a fear of missing out, commonly known as FoMO, is at the heart of many features of social media design.

Groups and forums in social media promote active participation. Notifications and “presence features” keep people informed of each other’s availability and activities in real time, so that some users become compulsive checkers. This includes the “two ticks” on instant messaging tools, such as WhatsApp. Users can see whether their message has been delivered and read. This creates pressure on each person to respond quickly to the other.

The concepts of reward and infotainment, material which is both entertaining and informative, are also crucial for “addictive” designs. In social networks, it is said that “no news is not good news”. So, their design strives always to provide content and prevent disappointment. The seconds of anticipation for the “pull to refresh” mechanism on smartphone apps, such as Twitter, is similar to pulling the lever of a slot machine and waiting for the win.

Most of the features mentioned above have roots in our non-tech world. Social networking sites have not created any new or fundamentally different styles of interaction between humans. Instead, they have vastly amplified the speed, ease and scale with which these interactions can occur.

Addiction and awareness

People using digital media do exhibit symptoms of behavioural addiction. These include salience, conflict, and mood modification when they check their online profiles regularly. Often people feel the need to engage with digital devices even if it is inappropriate or dangerous for them to do so. If disconnected or unable to interact as desired, they become preoccupied with missing opportunities to engage with their online social networks.

According to the UK’s communications regulator Ofcom, 15m UK internet users (around 34% of all internet users) have tried a “digital detox”. After being offline, 33% of participants reported feeling an increase in productivity, 27% felt a sense of liberation, and 25% enjoyed life more. But the report also highlighted that 16% of participants experienced the fear of missing out, 15% felt lost and 14% “cut-off”. These figures suggest that people want to spend less time online, but they may need help to do so.


At the moment, tools that enable people to be in control of their online experience, presence and online interaction remain very primitive. There seem to be unwritten expectations for users to adhere to social norms of cyberspace once they accept participation.

But unlike other mediums of addiction, such as alcohol, technology can play a role in making its own usage more informed and conscious. It is possible to detect whether someone is using a phone or social network in an anxious, uncontrolled manner. As with online gambling, help should be available to those who want it. This could be a self-exclusion and lock-out scheme. Users could also allow software to alert them when their usage pattern indicates risk.
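This kind of detection could start from something as simple as counting check-ins over a rolling time window. Here is a minimal, purely illustrative sketch in Python: the function name, the 60-minute window and the threshold of 12 checks are assumptions chosen for demonstration, not figures from any digital addiction research.

```python
# Illustrative only: a naive heuristic for flagging compulsive checking.
# The window size and threshold are arbitrary assumptions for demonstration.

def flags_risky_usage(check_times, window_minutes=60, max_checks=12):
    """Return True if more than max_checks app checks fall within any
    rolling window of window_minutes.

    check_times: sorted timestamps of app checks, in minutes.
    """
    for i, start in enumerate(check_times):
        # Count the checks falling in [start, start + window_minutes)
        in_window = [t for t in check_times[i:] if t < start + window_minutes]
        if len(in_window) > max_checks:
            return True
    return False


# A phone checked every two minutes for nearly half an hour trips the alert;
# one checked every couple of hours does not.
print(flags_risky_usage(list(range(0, 26, 2))))   # 13 checks in 26 minutes
print(flags_risky_usage([0, 120, 240, 360]))
```

A real tool would draw on far richer signals than raw timestamps, but the self-monitoring principle is the same.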

The borderline between software which is legitimately immersive and software which can be seen as “exploitation-ware” remains an open question. Transparency of digital persuasion design and education about critical digital literacy could be potential solutions.


Raian Ali, Associate Professor in Computing and Informatics, Bournemouth University; Emily Arden-Close, Senior Lecturer in Psychology, Bournemouth University, and John McAlaney, Principal Academic in Psychology, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Autism screening tool may not pick up women with the condition


Nikodash/Shutterstock.com

By Rachel Moseley, Bournemouth University and Julie Kirkby, Bournemouth University

Diagnosing autism is expensive and time consuming, so a screening tool is used to filter out those people who are unlikely to be diagnosed as autistic. This is all well and good, but our latest research suggests that a widely used screening tool may be biased towards diagnosing more men than women.

Earlier studies have cast doubt on the ability of one of the leading screening tools, the Autism-Spectrum Quotient, to accurately identify people with autism. We decided to look at another screening tool that hasn’t yet been investigated: the Ritvo Autism Asperger Diagnostic Scale-Revised (RAADS-R), a widely used questionnaire for assessing autism in adults with average or above average intelligence.

We compiled the RAADS-R scores of over 200 people who had a formal diagnosis of autism. We compared scores between autistic men and autistic women on four different symptom areas: difficulties with social relationships, difficulties with language, unusual sensory experiences or motor problems, and “circumscribed interests” (a tendency to have very strong, fixed interests).

As there are known sex differences in these areas – for example, with women being better at hiding social and communicative difficulties, and men being more likely to show obvious, and hence easier to detect, circumscribed interests – we wanted to know whether RAADS-R was able to pick up these differences.

Our analysis showed that it didn’t: we found no sex differences in RAADS-R scores between autistic men and women in social relatedness, language and circumscribed interests.

A possible explanation for this result is that, since RAADS-R depends on people accurately judging and reporting their own symptoms, sex differences may only emerge when behaviour is diagnosed by an experienced clinician. Previous studies have shown that autistic people often lack insight into their own behaviour and find it difficult to report their own symptoms.

Another likely reason for finding no sex difference in autism traits is that this and most other studies only include autistic people who have received a formal diagnosis through assessment with the very tools and tests we are investigating. As diagnostic and screening tools (including RAADS-R) were developed with male samples, they are most likely to identify autistic women with the most male-like profiles.

This might explain why fewer women tend to be diagnosed. It could be, then, that the screening tests filter out all of the autistic women with more female-like autism traits, and the autistic women with more male-like traits go on to be diagnosed. Or it could be that the underlying sample is biased because the formal diagnostic tools select people with more male-like traits, and the screening tool merely reflects this underlying bias.

Our results may therefore reflect a sample that didn’t represent a diverse range of autistic women – a problem that affects all research on sex differences in autism.

As more males than females have received a diagnosis of autism, many of the theories we have about autism are based on these diagnosed cases, and, as a result, may only apply to males. Likewise, as we base our screening tools and diagnostic tools on males who have been diagnosed, we may only pick up women who show male-like symptoms.

We could be missing the women who have very different, more female presentations of autism, but who still show the core features that are central to the diagnosis. These include problems with social interaction, communication and restricted behaviour and interests.

Because screening and diagnostic tests focus on the most common, male manifestations of these core symptoms, females tend to be overlooked. Circumscribed interests in males, for example, are more likely to be based on unusual topics, whereas girls and women may centre their interests on things like celebrities or fashion; only the intensity of the interest sets them apart from non-autistic females.

One clear difference

There was only one prominent sex difference that emerged in our study: autistic women reported more sensory differences and motor problems than autistic men. Sensory and motor symptoms are common in autism. People may be over- or under-sensitive to sights, sounds, touches, smells and tastes, and are often clumsy and poorly coordinated.

Some autistic people are sensitive to certain fabrics. Purino/Shutterstock.com

This self-reported finding, that women have more sensory and motor symptoms than men, needs to be investigated more thoroughly. However, it appears to be consistent with a few studies that have found that autistic women do have more sensory and motor symptoms than men.

If these types of symptoms are especially problematic for autistic women, they could be important for providing a diagnosis. Although RAADS-R measures sensory and motor symptoms, they play a very minor role in gold-standard diagnostic tests, such as the Autism Diagnostic Observation Schedule.

The importance of a diagnosis

Efforts are now underway to develop screening tools that are better at identifying autism in females.

Diagnosis is important for autistic people for many reasons. For example, it is the only way they can access support services, such as dedicated support workers to help them with activities at home or in daily life. They might also receive financial support if they need it. (Unemployment affects most of the autistic population and may in part be due to high levels of mental illness in this group.)

Other people have spoken about how having a diagnosis has helped them understand the struggles they’ve faced in their lives – that these things weren’t their fault. And it has helped them meet other people who accept them for who they are.


Rachel Moseley, Senior Lecturer in Psychology, Bournemouth University and Julie Kirkby, Senior Lecturer in Psychology, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Small charities face bankruptcy for not complying with GDPR, but put clients at risk if they do


The way charities use and hold data on behalf of their clients and donors creates problems under GDPR. Tashatuvango/Shutterstock

By Dr Shamal Faily, Bournemouth University

You will no doubt have received the emails yourself: don’t forget to opt in, click here to stay in touch, we don’t want to lose you. The General Data Protection Regulation, or GDPR, comes into force on May 25, and organisations and businesses large and small are racing to ensure that the way they collect, store and use the personal data of their customers and clients meets the higher standards of this new European Union privacy law.

Compliance with GDPR can be costly, requiring organisations to analyse the way they work, the data they use, how it is handled and secured. Documenting how personal data is held and processed is tedious and time consuming, as is developing procedures for dealing with individuals’ requests to see the data held on them, security breaches that involve loss of data, or assessing the privacy impact of some new product or service.

To data protection authorities across the European Union, such as the UK Information Commissioner’s Office (ICO), this is just good practice – the cost of doing business in a free and open market. But what if yours is a non-profit organisation? Several UK charities have been fined for breaking existing data protection laws. Many others are acutely aware that a single penalty for non-compliance could put them out of business.

The ICO has produced guidance for charities, and reading it you might think that the challenges charities face are the same as those facing any small business. Both have limited resources, time and money to spend on ensuring compliance. Losing or misusing personal data leads to the erosion of trust, irrespective of whether those affected are paying customers or charity donors. But scratch beneath the surface and you can see how GDPR causes unique problems for small charities, particularly those that work to help society’s most vulnerable.

Duty of care

The new privacy regulations require that personal data is “processed in a manner that ensures appropriate security of the personal data”. Any security expert will tell you that perfect security is impossible, so businesses can meet this requirement by investing in security considered “good enough” to meet the duty of care to their clients and customers.

But for charities, the duty of care they have for both their vulnerable client base and their donors is so strong that a culture of cost-cutting has formed. Because charities lack the expertise to understand the risks they face, they may wrongly believe they are avoiding risks, or accept risks without understanding the implications. Ultimately, this works against charities investing in the security they actually need. A report commissioned by the UK Department for Culture Media and Sport in 2017 found this culture even led to some charities intentionally relying on out-of-date or low technology solutions. In one case, a charity was even prepared to accept the risk of damaging data losses, in the hope that their donors would be sympathetic and appreciate that, to them, cybersecurity is a luxury they cannot afford.

Charities care for others, but are not always able to care for their data. perfectlab/Shutterstock

Ethical tensions

The new privacy regulations are built around fair treatment, but this also fails to appreciate the ethical tensions faced by charities. Under GDPR, organisations can only collect data from individuals when they have a legal basis for doing so, for example that the individual has given their consent (such as signing up for an email newsletter), or that the organisation must do so in order to comply with a legal obligation (such as banking information required to meet money laundering regulations). However, complications arise because while an individual may give consent, they may also withdraw it.

Imagine, for example, that Bob suffers from a drug addiction. In a moment of clarity, he checks into a rehab centre for help, and gives consent for the centre to collect what personal data they require. But Bob later relapses, and – to keep this information from his family – withdraws his consent and exercises his right to be forgotten, demanding that the rehab centre deletes the data on him that it holds.

The GDPR provides some discretion for processing personal data in matters of life and death, but not if Bob is capable of giving consent. And so the rehab centre faces a dilemma: it can assert that Bob isn’t capable, exposing itself to the risk of a fine should he report it to the ICO. Alternatively, it can comply and expose Bob to future risks that may threaten his health or life, deleting information that might one day help save it.

ICO guidance for not-for-profits should answer the sorts of questions regularly raised by charities. But instead it treats small charities like any other small business. The ICO claims this is information that charities want, but it is not the information they need. If guidance fails to acknowledge the risks to small charities, what incentive do charities have to invest time and money following it?

What charities need are fewer platitudes on what they should be doing – they already know this – and more advice on how to do it, given the very particular challenges they face. In a speech given to the charities attending the Funding and Regulatory Compliance conference last year, the information commissioner said that getting privacy right can be done, that it should be done, and she would say how it can be done. Yet as the deadline looms, charities are still waiting to hear about the “how”.


Shamal Faily, Senior Lecturer in Systems Security Engineering, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Tessa Jowell’s farsighted vision for media literacy was ahead of its time


Forward thinker: Tessa Jowell in 2007. More Than Gold UK, CC BY-NC

By Dr Richard Wallis, Bournemouth University

The untimely death from cancer of former UK Labour cabinet minister, Dame Tessa Jowell, has triggered a wave of tributes from across the political spectrum. Her vision for securing the 2012 Olympics for London, her formative role in New Labour’s flagship Sure Start scheme, and most recently, her campaign for cancer research, have all been given many column inches.

By contrast, Jowell’s less certain legacy as principal advocate for media literacy is barely given a mention. It seems to have been quietly forgotten that it was Jowell, as secretary of state for Culture, Media and Sport, who pushed through parliament the Communications Act 2003, which enshrined media literacy in law and gave Ofcom – the (then new) media “super-regulator” – the responsibility to “promote” the idea.

Media literacy existed as a New Labour policy well before Jowell’s turn at the Department for Culture Media and Sport (DCMS). Her predecessor, Chris Smith, believed that the concept was a useful one for “arming the citizen-consumer” of media, to make responsible choices in a period of increasing deregulation.

To the dismay of some of her own policy advisors, Jowell seized the concept, made it her own, and became a fervent advocate at every opportunity. In an address given at BAFTA the year following the Communications Act, she referred to media literacy as “a coming subject” and one that “in five years’ time will be just another given”.

Misplaced optimism

With the benefit of hindsight, Jowell’s optimism seems to have been misplaced. Media literacy, arguably, has never been lower on the political agenda. The plethora of initiatives that sprang up in the wake of the Communications Act have largely withered on the vine – and recent reforms to the popular Media Studies A-level have seen the subject savagely “strangled”.

Yet Jowell’s argument for media education has never been more relevant. “It is important,” she insisted, “that we know when we are watching ‘accurate and impartial’ news coverage and when we are not”. These are prescient comments when you consider that they were made more than a decade before “post-truth” became the Oxford Dictionary’s Word of the Year (in 2016) and when terms such as “fake news” or “Leveson Inquiry” had yet to pass anyone’s lips.

Jowell believed that education in media opened opportunities that could enrich the experience of individuals and society – but she was equally exercised about the role that education had to play in protecting against some of the dangers of modern media. She thought that media were dominated by powerful and potentially harmful commercial and political interests. She believed that children, in particular, should be provided with “critical life skills” to guide their media consumption.

“It is transparently important,” she told a media literacy seminar in 2004, “that they should be helped to get the most from all those screen hours, and be protected from what we know are some of the worst excesses”. She went on to ensure that, from 2006, the BBC Charter also contained requirements to promote media literacy.

Where did it go so wrong?

The key to understanding the marginalisation of media literacy as government policy is the role of the Department for Education – once known as the Department for Education and Skills (DfES). Media education was not seen as a serious curriculum priority at the DfES, and – despite New Labour’s early insistence on “joined-up government” – enthusiasm for media literacy never spread beyond the confines of DCMS.

There was widespread ignorance about media education among civil servants within DfES, many of whom had had highly traditional educational experiences themselves. A preoccupation with “driving up” standards, measurability and international comparison provided little incentive for the promotion of a field of study concerned with recognising and understanding forms of popular (or “low”) culture. This was despite the apparent economic value being attributed to the “creative industries” at the same time.

The byzantine operation of the DfES also made change of any kind difficult – particularly where it touched on what was actually taught in schools. In this case, there was the added disincentive of a policy being driven by a separate – and junior – department. Ultimately, media literacy was never to be widely embraced as an educational project in the way that Jowell had hoped.

Media literacy remains on the statute book and Ofcom continues to have a responsibility to promote it. But the way it is defined – and the level of resources provided to support it – ensure that it has largely been reduced to a form of market research, an undead policy. Jowell once proclaimed:

I believe that in the modern world, media literacy will become as important a skill as maths or science. Decoding our media will become as important to our lives as citizens as understanding literature is to our cultural lives.

It may not be too much to hope that media literacy could yet be reclaimed as one of Tessa Jowell’s essential legacies.


Richard Wallis, Principal Academic in Media Production, Faculty of Media & Communication, Bournemouth University

This article was originally published on The Conversation. Read the original article.

How to hunt a giant sloth – according to ancient human footprints


Alex McClelland, Bournemouth University

By Matthew Robert Bennett, Bournemouth University; Katie Thompson, Bournemouth University, and Sally Christine Reynolds, Bournemouth University.

Rearing on its hind legs, the giant ground sloth would have been a formidable prey for anyone, let alone humans without modern weapons. Tightly muscled, angry and swinging its forelegs tipped with wolverine-like claws, it would have been able to defend itself effectively. Our ancestors used misdirection to gain the upper hand in close-quarter combat with this deadly creature.

What is perhaps even more remarkable is that we can read this story from the 10,000-year-old footprints that these combatants left behind, as revealed by our new research published in Science Advances. Numerous large animals such as the giant ground sloth – so-called megafauna – became extinct at the end of the Ice Age. We don’t know if hunting was the cause but the new footprint evidence tells us how human hunters tackled such fearsome animals and clearly shows that they did.

White Sands National Monument. Matthew Bennett, Bournemouth University, Author provided

These footprints were found at White Sands National Monument in New Mexico, US, on a part of the monument that is used by the military. The White Sands Missile Range, located close to the Trinity nuclear site, is famous as the birthplace of the US space programme, of Ronald Reagan’s Star Wars initiative and of countless missile tests. It is now a place where long-range rather than close-quarter combat is fine-tuned.

Tracking the footprints. Matthew Bennett, Bournemouth University, Author provided

It is a beautiful place, home to a huge salt playa (dry lake) known as Alkali Flat and the world’s largest gypsum dune field, made famous by numerous films including Transformers and The Book of Eli. At the height of the Ice Age it was home to a large lake (palaeo Lake Otero).

As the climate warmed, the lake shrank and its bed was eroded by the wind to create the dunes and leave salt flats that periodically pooled water. The Ice Age megafauna left tracks on these flats, as did the humans that hunted them. The tracks are remarkable in that they are only a few centimetres beneath the surface and yet have been preserved for over 10,000 years.

Footprint comparison. David Bustos, National Park Service

Here there are tracks of extinct giant ground sloth, of mastodon, mammoth, camel and dire wolf. These tracks are colloquially known as “ghost tracks” as they are only visible at the surface during specific weather conditions, when the salt crusts are not too thick and the ground not too wet. Careful excavation is possible in the right conditions and reveals some amazing features.

Perhaps the coolest of these is a series of human tracks that we found within the sloth prints. In our paper, produced with a large number of colleagues, we suggest that the humans stepped into the sloth prints as they stalked them for the kill. We have also identified large “flailing circles” that record the sloth rising up on its hind legs and swinging its forelegs, presumably in a defensive, sweeping motion to keep the hunters at bay. As it overbalanced, it put its knuckles and claws down to steady itself.

Plaster cast footprints. David Bustos, National Park Service

These circles are always accompanied by human tracks. Over a wide area, we see that where there are no human tracks, the sloths walked in straight lines. Where human tracks are present, the sloth trackways show sudden changes in direction, suggesting the sloth was trying to evade its hunters.

Piecing together the puzzle, we can see how the sloths were kept on the flat playa by a horde of people who left tracks along its edge. An animal was then distracted by one stalking hunter, while another crept forward and tried to strike the killing blow. It is a story of life and death, written in mud.

Matthew Bennett, dusting for prints. David Bustos, National Park Service

What would convince our ancestors to engage in such a deadly game? Surely the bigger the prey, the greater the risk? Maybe it was because a big kill could fill many stomachs without waste, or maybe it was pure human bravado.

At this time at the end of the last Ice Age, the Americas were being colonised by humans spreading out over the prairie plains. It was also a time of animal extinctions. Many palaeontologists favour the argument that human over-hunting drove this wave of extinction and for some it has become an emblem of early human impact on the environment. Others argue that climate change was the true cause and our species is innocent.

It is a giant crime scene in which footprints now play a part. Our data confirms that human hunters were attacking megafauna and were practiced at it. Unfortunately, it doesn’t cast light on the impact of that hunting. Whether humans were the ultimate or immediate cause of the extinction is still not clear. There are many variables including rapid environmental change to be considered. But what is clear from tracks at White Sands is that humans were then, as now, “apex predators” at the top of the food chain.


Professor Matthew Robert Bennett, Professor of Environmental and Geographical Sciences, Bournemouth University; Katie Thompson, Research Associate, Bournemouth University, and Dr Sally Christine Reynolds, Senior Lecturer in Hominin Palaeoecology, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Sunken Nazi U-boat discovered: why archaeologists like me should leave it on the seabed


Sea War Museum

By Innes McCartney, Bournemouth University.

The collapsing Nazi government ordered all U-boats in German ports to make their way to their bases in Norway on May 2, 1945. Two days later, the recently commissioned U-3523 joined the mission as one of the most advanced boats in the fleet. But to reach their destination, the submarines had to pass through the bottleneck of the Skagerrak – the strait between Norway and Denmark – and the UK’s Royal Air Force was waiting for them. Several U-boats were sunk and U-3523 was destroyed in an air attack by a Liberator bomber.

U-3523 lay undiscovered on the seabed for over 70 years until it was recently located by surveyors from the Sea War Museum in Denmark. Studying the vessel will be of immense interest to professional and amateur historians alike, not least as a way of finally putting to rest the conspiracy theory that the boat was ferrying prominent Nazis to Argentina. But sadly, recovering U-3523 is not a realistic proposition. The main challenges with such wrecks lie in accurately identifying them, assessing their status as naval graves and protecting them for the future.

U-boat wrecks like these from the end of World War II are the hardest to match to historical records. The otherwise meticulous record keeping of the Kriegsmarine (Nazi navy) became progressively sparser, breaking down completely in the last few weeks of the war. But Allied records have helped determine that this newly discovered wreck is indeed U-3523. The sea where this U-boat was located was heavily targeted by the RAF because it knew newly-built boats would flee to Norway this way.

Identification

The detailed sonar scans of the wreck site show that it is without doubt a Type XXI U-boat, of which U-3523 was the only one lost in the Skagerrak and unaccounted for. These were new types of submarines that contained a number of innovations which had the potential to make them dangerous opponents. This was primarily due to enlarged batteries, coupled to a snorkel, which meant they could stay permanently underwater. Part of the RAF’s mission was to prevent any of these new vessels getting to sea to sink Allied ships, and it successfully prevented any Type XXI U-boats from doing so.

The Type XXI U-3008. Wikipedia

With the U-boat’s identity correctly established, we now know that it is the grave site of its crew of 58 German servicemen. As such, the wreck should either be left in peace or, less plausibly, recovered and the men buried on land. Germany lost over 800 submarines at sea during the two world wars and many have been found in recent years. It is hopelessly impractical to recover them all, so leaving them where they are is the only real option.

Under international law all naval wrecks are termed “sovereign immune”, which means they will always be the property of the German state despite lying in Danish waters. But Denmark has a duty to protect the wreck, especially if Germany asks it to do so.

Protection

Hundreds of wartime wreck sites such as U-3523 are under threat around the world from metal thieves and grave robbers. The British cruiser HMS Exeter, which was sunk in the Java Sea on March 1, 1942, has been entirely removed from the seabed for scrap. And wrecks from the 1916 Battle of Jutland that also lie partly in Danish waters have seen industrial levels of metal theft. These examples serve as a warning that organised criminals will target shipwrecks of any age for the metals they contain.

Detailed sonar scans have been taken. Sea War Museum

Germany and the UK are among a number of countries currently pioneering the use of satellite monitoring to detect suspicious activity on shipwrecks thought to be under threat. This kind of monitoring could be a cost-effective way to save underwater cultural heritage from criminal activity and its use is likely to become widespread in the next few years.

Recovery

The recovery cost is only a small fraction of the funds needed to preserve and display an iron object that has been immersed in the sea for many years. So bringing a wreck back to the surface should not be undertaken lightly. In nearly all cases of salvaged U-boats, the results have been financially ruinous. Lifting barges that can raise shipwrecks using large cranes cost tens of thousands of pounds a day to charter. Once recovered, the costs of conservation and presentation mount astronomically as the boat will rapidly start to rust.

The U-boat U-534 was also sunk by the RAF in 1945, close to where U-3523 now lies. Its crew all evacuated that boat, meaning that she was not a grave when recovered from the sea in 1993 by Danish businessman Karsten Ree, allegedly in the somewhat incredible belief that it carried Nazi treasure. At a reported cost of £3m, the operation is thought to have been unprofitable. The boat contained nothing special, just the usual mundane objects carried on a U-boat at war.

U-534 after being raised. Les Pickstock/Flickr, CC BY

Similar problems were experienced by the Royal Navy Submarine Museum in the UK when it raised the Holland 1 submarine in 1982. In that case, the costs of long-term preservation proved much greater than anticipated after the initial rust-prevention treatment failed to stop the boat corroding. It had to be placed in a sealed tank full of alkali sodium carbonate solution for four years until the corrosive chloride ions had been removed, and was then transferred to a purpose-built exhibition building to protect it further.

The expensive process of raising more sunken submarines will add little to our knowledge of life at sea during World War II. But each time a U-boat is found, it places one more jigsaw piece in its correct place, giving us a clearer picture of the history of the U-boat wars. This is the true purpose of archaeology.


Innes McCartney, Leverhulme Early Career Fellow, Department of Archaeology, Anthropology and Forensic Science, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Fit for nothing: where it all went wrong for Glasgow’s Commonwealth Games legacy


PA, CC BY-SA

By Lynda Challis, Bournemouth University

“Our vision is to host a successful, safe and secure Games that deliver a lasting legacy for the whole of Scotland, and to maximise the opportunities in the run up to, during, and after the Games.”

This was the promise made by the Scottish government to the Commonwealth in 2014. In the 12 days of competition that followed, the city of Glasgow achieved a “hero-like status”, Team Scotland achieved its biggest-ever medal haul of 53 medals, and the games recorded the highest number of tickets sold for a sporting event in Scottish history.

Minister for sport Aileen Campbell hailed the event as a huge success by announcing that Glasgow’s Commonwealth Games was the largest sporting and cultural event ever held in Scotland and had changed the lives of thousands of people.

The message from the host nation was clear: the games were not just about showcasing elite athletes, but about delivering a legacy that would provide a flourishing economy, celebrate cultural diversity, embrace sustainable living, and create a more physically active nation. But four years on, not all those ambitions have been achieved.

Getting a nation off the couch

The games were considered a golden opportunity for Scotland to harness the power of sport to motivate a sedentary nation. A ten-year implementation plan was launched in 2014 to tackle physical inactivity across Scotland as well as myriad other initiatives to support communities in improving the local sporting infrastructure.

Two and a half years after the games, the Scottish parliament’s Health and Sport Committee produced an interim report assessing the progress made in increasing physical activity levels across Scotland.

The report concluded that there was no evidence that an active legacy was being achieved. More alarmingly, evidence of any relationship between hosting a major sporting event and raising the host nation’s physical activity levels was inconclusive.

This raises serious questions as to why such an ambitious legacy aim was included in the first place given the likelihood of failure. It could be that the Scottish government included the aim of increasing participation within its legacy pledge as a desperate attempt to address Scotland’s poor health profile, one of the worst in Europe.

Glasgow’s east end, the main site of the 2014 Commonwealth Games, is considered one of the poorest urban areas in Europe. Chris Perkins/Flickr, CC BY-SA

A final evaluation report on the impact of the Glasgow 2014 Commonwealth Games, published by the Scottish government days before the opening ceremony of the Gold Coast 2018 Commonwealth Games, highlighted the harsh reality that the active legacy programme had not “resulted in a step change in population levels of physical activity in Scotland”.

In fact, the GoWell East study that tracked participation levels in the surrounding area of Glasgow found that overall rates had actually declined, with just over 53% achieving the recommended physical activity levels in 2016, compared to 62% in 2012.

However, the east end community surrounding the main games site is one of the most deprived areas in Scotland, with some of the worst statistics in Europe for child poverty, health, crime, and alcohol and drug abuse. This could account for the declines in physical activity levels in the east end of Glasgow, as the underlying reason behind social inequalities in sports participation is poverty – not having the income to spend on sport.

Policy fail

But Glasgow is not alone. Other nations hosting major sporting events have failed to capitalise on the perception that a sprinkling of magic over a big sports event will motivate a population to become active. Data that tracked participation levels of Australians before, during and after the Sydney 2000 Olympic Games found they had declined, due – ironically – to Australians spending more time watching sport on TV than taking part themselves.

Undoubtedly, many nations believe that elite sporting success and the hosting of major sporting events on home turf can encourage mass involvement, and in turn create an active nation. An example of this is London’s 2012 Olympic Games, which promised to “do something no other Olympic Games host nation had done before”: inspire a new generation of young people to get involved, get active and take part in sport. This bold statement from the UK government has since been questioned, because in fact, no previous games had even attempted to leverage improved physical activity as a legacy outcome.

Despite their glossy success, London’s Olympics also failed to improve rates of participation in sport. PA, CC BY-SA

It became abundantly clear post-London 2012 that the Olympic Legacy promise had failed to come to fruition with figures showing no more young people taking part in sport than before the games. As has been argued elsewhere, there is still a lack of robust evidence to suggest that the presumed trickle-down effect of hosting a major sporting event can trigger an increase in physical activity.

Big spend but no return

The failure of London 2012 and Glasgow 2014 to create and inspire a nation to get active is not really surprising. For more than 40 years, community sports policy in Britain has been plagued by failures to meet physical activity performance indicators set by governments.

This could be down to a variety of factors including: poor policy analysis to inform future policy-making decisions; overambitious or naïve participation targets; inadequate resources to deliver long-term programmes; and changes in direction leading to ambiguity regarding who is responsible for delivery.

Given these issues, it is understandable that grass-roots sport policies and major sporting events have failed to encourage more people to get active. Future government policy on community sport needs cross-party commitment and must be evidence-based to ensure its objectives are realistic. It needs to have a long-term plan and be adequately funded to ensure that there are real and lasting results.

In the end, we have to face a difficult truth: governments continue to invest in costly elite sport and big extravagant sporting events that come at the expense of community sport.


Lynda Challis, Academic in Sports Development, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Hungary elections: it’s the most popular party on Facebook, so why haven’t you heard of the Two-Tailed Dog?


EPA/Tibor Illyes

By Annamaria Neag, Bournemouth University and Richard Berger, Bournemouth University

With more than 278,000 followers on Facebook, Hungary’s Two-Tailed Dog Party was the most popular party on social media to stand in the country’s 2018 election. However, its online popularity did not help it win seats in the vote, which delivered Viktor Orbán a third term as prime minister by a landslide. In an anti-establishment approach, the Dogs’ campaign was carried out entirely by volunteers and official campaign funds were used to support community projects.

Despite only coming away with 1.71% of the votes, however, the party has pushed an important boundary in Hungarian politics.

Puppy training

The Two-Tailed Dog Party was founded in 2006, although formal recognition didn’t come until 2014. It defined itself as a joke party from the start, becoming famous for making fun of other political groups – mainly the mainstream Fidesz, led by Orbán.

Its activities range from street art to graffiti to urban gardening. It even smuggles soap and toilet paper into hospitals in order to highlight the dire state of some healthcare facilities. In 2016, the party crowdfunded €100,000 to cover the country in satirical posters mocking the government’s call to vote against EU refugee quotas in an impending referendum.

Then in 2018, just a couple of weeks before the deadline, the party managed to get enough signatures to be able to participate in the national parliamentary elections. The jokers were getting serious.

A Two-Tailed Dog sticker appears on a Budapest lamp post.

In an election campaign dominated by the supposed “threat” posed by immigration and the perceived influx of migrants to Hungary, the Two-Tailed Dog party used social media to draw attention to a statistic published on the national police website showing that one migrant had been “caught” in the last 30 days. Its satirical response to this shocking figure read: “There is an enormous interest in our country. But we cannot rest assured: The migrant entered our country.”

Domestication

All political parties use emotions to persuade people to vote for them. The Two-Tailed Dog party and its kind are trying to undermine establishment organisations by turning humour into political action.

In a process social scientists call “kynicism”, the Two-Tailed Dog party borrowed and remixed government messages for its own aims. The idea is to mock the government’s rhetoric in order to dispel fear and anxiety.

In Hungary, it’s unclear what the future holds for the Two-Tailed Dog party, or these joke parties more broadly. There is a fundamental mismatch between the way everyday politics works and the vision of the party.

Party leader Gergő Kovács told us:

I can’t really tell how many of our Facebook fans would vote for us … To be honest, for me the parliamentary elections are not important. For me, it’s much more important to see what we can do … I have to confess: my aim is to create something creative and funny, and yet meaningful … I think it is useless to have one more opposition party that has a serious programme. I have no interest to do politics in the traditional way.

If the case of Iceland’s Pirate party shows us anything, it is that parties like the Two-Tailed Dog have a tendency to lose their edge once they gain political influence. In 2016 the pirates topped opinion polls, and seemed to become a real political force by winning ten seats in the parliament. However, in the latest elections, they won only six seats.

Alternative parties like the Two-Tailed Dog exist to mock from outside the mainstream. But what’s the point of a political party if it doesn’t really want to get elected and to introduce its policies?

For now, that’s not a question the Two-Tailed Dogs need to answer, since they failed to make it into parliament.

But the group has nonetheless radically re-energised young people. It has tested the limits of convention in Hungary’s political process. Kovács told us that when it comes to larger campaigns, “two thirds, or three quarters, of our ideas come from the people … For instance, we write an economic programme, post it to Facebook and in a couple of minutes, there are three to four better ideas in the comments, so we take it down and add these ideas. So, in fact it really comes from the people”. The next step is for the group to translate those likes on social media into actual votes.


Annamaria Neag, Marie Curie Research Fellow, Bournemouth University and Richard Berger, Associate Professor, Head of Research and Professional Practice, Department of Media Production, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Can the cricketers banned for ball tampering ever regain their hero status? It’s happened before


Steve Smith has borne the brunt of the public and media vitriol over Australian cricket’s ball-tampering scandal. EPA/Muzi Ntombela

By Keith Parry, Western Sydney University and Emma Kavanagh, Bournemouth University

Overnight, Cricket Australia handed out its promised “significant sanctions” for a ball-tampering incident that has engulfed the sport in scandal. Steve Smith and David Warner, the team’s captain and vice-captain, have been banned for 12 months. Cameron Bancroft, who carried out the failed plot, received a nine-month ban.

It was also revealed it was sandpaper, and not “yellow tape and the granules from the rough patches of the wicket” as originally claimed, that Bancroft tried to use to alter the ball’s condition in the Test match between South Africa and Australia.

While the International Cricket Council (ICC) initially suspended Smith for only one Test, all three are now banned from international and domestic (professional) cricket in Australia. Smith and Warner have also had their lucrative Indian Premier League contracts torn up, and some sponsors have already distanced themselves from the players and the sport. But these measures fall short of the lifetime bans some called for.

As captain, Smith has borne the brunt of the public and media vitriol, particularly as he accepted responsibility for what had happened. He may yet captain Australia again.

But according to Cricket Australia’s investigation, it was Warner who developed the plan and instructed Bancroft – a younger player – to carry it out. Warner also showed a “lack of contrition” and will therefore not be considered for any leadership position in the future.

Does the punishment fit the crime?

Ball tampering is clearly cheating; it breaks the rules and is against the “spirit of cricket”. But while it has been deemed the “moral equivalent of doping”, there is a lack of consistency in how sanctions are dished out to offenders.

Read more: Just not cricket: why ball tampering is cheating

Bans for doping violations are often severe. Players such as Andre Russell have been banned for 12 months for failing to record their whereabouts for drug testing. But, historically, ICC bans for ball tampering have been more lenient: Pakistan’s Shahid Afridi received a two-game ban for biting the ball in an attempt to alter its condition.

Pakistan’s Shahid Afridi’s bite-tampering incident.

However, a harder line has been taken for incidents of match-fixing. Three Pakistan players were banned and jailed for a spot-fixing incident in 2010. South Africa’s Herschelle Gibbs received a six-month ban in 2000 for agreeing to fix a match, even though he did not follow through with it.

Lifetime bans are not uncommon in sport generally. Ryan Tandy was banned for life for attempted spot-fixing in a rugby league game. Lance Armstrong was banned from sanctioned Olympic sports for life and had his results voided for his serial doping in cycling. Even figure skating is not immune: Tonya Harding was similarly banned for hindering the prosecution of those behind a vicious attack on a fellow competitor.

It is difficult to compare sanctions across sports. But, when doing so, the inconsistencies are apparent. Boxer Mike Tyson was handed a 15-month ban for biting off part of Evander Holyfield’s ear; footballer Luis Suarez received an eight-game ban for racially abusing an opponent; fellow footballer Paul Davis only served a nine-match ban for punching and breaking an opponent’s jaw.

In light of these punishments, are nine- and 12-month bans for premeditated cheating and lying reasonable and just?

Cricket Australia has been criticised for the time it took to reach a decision. But it’s essential that due diligence is done and facts are gathered before a sentence is handed down. Without this, decisions are made through the pressure of public shaming, and social media get to cast the final vote on the punishment.

If sporting organisations want players to act morally on field, then they too should be guided by moral behaviour in governing the sport.

Sport | Player | Offence | Sanction
Athletics | Ben Johnson | Doping | Two-year ban and stripped of titles; lifetime ban after second offence
Rugby league | Ryan Tandy | Spot-fixing | Lifetime ban from playing in the NRL
Rugby league | Cronulla Sharks players | Doping | 12-month bans (backdated)
Australian football | 34 Essendon players | Doping | 12-month bans
Baseball | Shoeless Joe Jackson | Alleged match-fixing | Lifetime ban
Figure skating | Tonya Harding | Hindering prosecution over attack on a fellow figure skater | Lifetime ban
Cycling | Lance Armstrong | Doping | Banned from sanctioned Olympic sports for life and results voided
Boxing | Mike Tyson | Biting opponent’s ear off | 15-month ban
Association football | Luis Suarez | Racial abuse | Eight-game ban
Association football | Luis Suarez | Biting opposition player | Four-month ban

Forgive and forget?

Society is often keen to forgive top athletes when they transgress. When athletes admit their mistakes and ask forgiveness it is usually granted.

Over time, sports fans also tend to forget athletes’ errors and focus solely on their on-field ability. In cricket, for instance, Don Bradman’s role in disputes over pay as a cricket administrator is largely glossed over. Shane Warne’s year-long ban for a doping violation is rarely mentioned.

Drugs cheats are accepted (and sometimes welcomed) back into sport – some even after multiple doping offences.

In many sports, athletes’ chequered pasts are ignored in favour of their on-field ability. It is often the actions that come as a result of their behaviour that are judged, and not the infringement itself.

Athletes frequently transgress, but their subsequent redemption is often woven into the narrative around them. Stories around sporting heroes follow several patterns, but the most recognised is the hero’s journey. The “hero” sets out on a quest but is faced by a crisis or descends into a hellish underworld. They “heroically” overcome these challenges and ultimately return to glory.

Read more: Are you monomythic? Joseph Campbell and the hero’s journey

In this instance, Smith, Warner and Bancroft are in a hell of their own making. If they manage to return, and do so triumphantly, then it is likely they will be forgiven – and some may even forget their role in this sorry affair. Only time will tell whether they will again be considered heroic.


Keith Parry, Senior Lecturer in Sport Management, Western Sydney University and Emma Kavanagh, Senior Lecturer in Sports Psychology and Coaching Sciences, Bournemouth University

This article was originally published on The Conversation. Read the original article.

Just not cricket: why ball tampering is cheating


In happier times: Cameron Bancroft and Steve Smith talk to the media during the victorious Ashes series. AAP/Darren England

By Keith Parry, Western Sydney University; Emma Kavanagh, Bournemouth University, and Steven Freeland, Western Sydney University

Australian cricket is engulfed in scandal after TV cameras caught Cameron Bancroft attempting to manipulate the condition of the ball during the team’s third Test match against South Africa. Bancroft and the Australian captain, Steve Smith, subsequently admitted to the offence and the collusion of the player leadership group in the decision to do so.

Altering the condition of the match ball is against the rules of the sport, contrary to “the spirit of cricket”, and deemed to be “unfair”. It is a form of cheating.

What is ball tampering?

Cricket is not only controlled by a set of rules but, according to the sport’s laws, it should also be played “within the spirit of cricket”.

Like most sports, cricket is a self-regulating entity. The national associations and, ultimately, the International Cricket Council (ICC) enforce the laws. That said, cricket remains tied to gentlemanly ideals and the myth of “fair play”.

This “spirit” encourages respect for players and officials while advocating for self-discipline. Significantly, it says the:

… major responsibility for ensuring fair play rests with the captains.

Within these rules, law 41.3 identifies changing the condition of the match ball as an offence and “unfair play”. Specifically, law 41.3.2 states:

It is an offence for any player to take any action which changes the condition of the ball.

But why is the condition of the ball so important?

The ability to “swing” a ball is a prized skill in cricket. Altering the condition of one side of the ball can help it to swing, and may provide an advantage to the bowling team.

Read more: Video explainer: Bowling strategies and decision-making in cricket

Players regularly try to “rough up” one side of the ball by, for instance, deliberately bouncing it on hard ground or applying sweat or saliva to it in ingenious ways. Such practices are not deemed to be contrary to the laws, even if they may not be within the spirit of cricket. Cricketers can bend the rules but not break them.

However, others have been known to use fingernails to scratch the ball, or have rubbed it on the zip of their trousers. Such measures are against the laws and are punishable under the ICC’s Code of Conduct.

In this case, Smith has been banned for one match and fined his match fee. Bancroft, who was caught with a piece of yellow sticky tape that he was attempting to use to tamper with the ball, has also been fined most of his fee and issued three demerit points.

Risk and reward

When games are evenly matched, small gains from cheating can be enough to swing the result one way. This has occurred in other sports.

Sport is now a commercial product with large rewards for winning. In addition, when players are representing their country, there may be considerable pressure to win at all costs, particularly when sport plays a prominent role in the country’s national identity.

According to Smith, the Australians “saw this game as such an important game”. Here, the significance of the game and the team’s desire to win are used to justify cheating. The spirit of cricket and “fair play” were given little thought.

In his work on match-fixing, investigative journalist Declan Hill identifies several questions that may be considered when players are contemplating cheating. The importance of the game is a key factor. Prospective cheats will also evaluate whether they can win without cheating and the sanctions they risk if they are caught.

The Australian cricketers believed the game was slipping away from them. They either did not think they would be caught, or were not deterred by the possible sanctions.

Leading by example

In several cases of cheating, it has been senior players that have induced their younger teammates to cheat.

Two former cricket captains, South Africa’s Hansie Cronje and Pakistan’s Salman Butt, both recruited younger, less experienced players in their attempts to manipulate cricket matches. Similarly, Bancroft is at the start of his Test career and appears to have been influenced by others in the team.

Rather than ensuring fair play, Smith contrived to break both the game’s laws and spirit. Worryingly, it was not just Smith and Bancroft, but a group of senior players who were initially involved.

The players will have evaluated whether it was morally right to cheat and decided that winning was more important. While not a “crime” in the traditional sense of the word, the premeditated nature of these actions increases the level of deception and subsequent outrage surrounding the decision.

The event calls into question not only the behavioural integrity of those involved but also more broadly the moral integrity of the environment in which they function. This is an environment that leaves players viewing ball-tampering on this scale as a viable match-winning strategy.

Smith’s role, as captain, has often been described as the second-most-important job in Australia (after the prime minister). It is for this reason that the Australian Sports Commission has called for him, along with any members of the leadership group or coaching staff “who had prior awareness of, or involvement in, the plan to tamper with the ball”, to stand down or be sacked.

The plot to tamper with the ball was a clear attempt to cheat and has brought the spirit of cricket into question. The implications of being caught, and the significance of the action, were overridden in favour of one outcome: winning the match.

Such actions demonstrate the short-term focus players can have in the moment, ignoring the magnitude of their decisions. In this case, the fallout will be far greater than any punishment the sport will hand out.


Keith Parry, Senior Lecturer in Sport Management, Western Sydney University; Emma Kavanagh, Senior Lecturer in Sports Psychology and Coaching Sciences, Bournemouth University, and Steven Freeland, Dean, School of Law and Professor of International Law, Western Sydney University

This article was originally published on The Conversation. Read the original article.