Dr Samuel Nyman, Department of Medical Science & Public Health, has recently had an article published in The Conversation on the health benefits of Tai Chi. This includes reference to his recently completed NIHR-funded Tai Chi study, The TACIT Trial.
As the presentation of the 2020 Academy Awards approaches, there has been a lot of buzz around the visual effects category. Two films – Sam Mendes’s 1917 and Martin Scorsese’s The Irishman – have, in particular, attracted a lot of attention for the tricks they use to immerse the viewer in the characters and storyline.
The first film to win an award for visual effects, at the first ever Oscars ceremony in 1929, also won best picture. American special effects artist and film director Roy Pomeroy won for Wings, a first world war movie featuring breathtakingly realistic dogfight sequences. His work still looks amazing, given the tools he had to work with. In the 90 years since he won his award, though, visual effects have become ever more sophisticated.
Big bangs theory
If we take a look at the films that are nominated for Best Visual Effects in this year’s Academy Awards, we see five very different types of film.
Star Wars: The Rise of Skywalker is the continuing sci-fi saga of the battle between the Jedi and the Sith. A set of tried-and-tested visual effects techniques was used in the film.
This included the return of a fully digital replacement for Princess Leia using pieces of old footage of the late Carrie Fisher and computer-generated elements to create a complete character that blended seamlessly into the new narrative. Most of the environments were created in the computer and then composited with actors’ performances against a green screen that allows backgrounds to be replaced with digital sets.
Avengers: Endgame is the final episode in a comic book-based saga, bringing its superheroes and their enemies together in one final, epic battle. Green screens played a huge part in this film as well, allowing intricate digital environments to play their part in the storytelling.
As you’d expect there are plenty of pyrotechnics, explosions and battle scenes that were made with animated digital characters.
Rumble in the jungle
The Lion King is a computer-generated remake of the Disney classic, which was originally animated largely by hand in 2D. Many of the techniques used in this movie were originally developed for the 2016 remake of The Jungle Book which, like The Lion King, was reworked as a fully digital film – apart from Mowgli, who was played by a real boy.
In The Lion King, director Jon Favreau developed a technique that he felt would inform the animation of the animals in a far more realistic way than traditional approaches. Rather than simply recording voice actors in a sound booth, he put them in a studio and filmed them acting together, giving animators nuanced reference material to ensure the tiniest of reactions were captured in the creatures’ performances.
Virtual reality also played a big part in the making of the film. Camera operators were able to use digital sets to see the environments and move digital cameras in a realistic way.
The Irishman jumps between present-day action and events as far back as the 1950s, made more complicated by the fact that the characters are played by the same actors throughout. The point of difference is that prosthetics and makeup weren’t used. Instead, stars including Robert De Niro and Al Pacino were “de-aged” using computers: images of the actors from photographs and previous films were used to build “digital masks” that replaced the actors’ real faces.
This meant that De Niro, who plays the lead role and was 74 when filming began, plays a man in his 30s at the start of the film and the same man in his 80s by its end. How successful the effect is has been hotly debated – but nobody can doubt the expertise with which the artists carried out their task.
Spot the joins
The final film nominated is the first world war epic 1917, co-written, produced and directed by Sam Mendes. Loosely based on a story Mendes was told by his grandfather, the film relies on a single-shot depiction of the entire narrative, following the main character on his journey to get a message to the front line. This technique, also used in 2015’s best picture winner, Birdman, required meticulous planning to ensure that the cuts that did occur were invisible to the viewer.
Camera moves were choreographed to allow two scenes that were filmed in the same location at different times to be taken into the computer and “stitched” together as if they were one complete shot. Doing this over and over enabled the illusion of one continuous sequence.
Like many films though, 1917 used a host of other visual effects techniques that were unseen. This is often regarded as the pinnacle of success in visual effects – an effect that can’t be seen versus one that is smacking you in the face with a large, wet fish.
Appliance of science
Some of the nominated movies need visual effects to create worlds and creatures that don’t exist, while some employ tricks to enhance the cinematic experience and the ability of the filmmaker to tell their story. All of them use the technical expertise of visual effects artists to bring the director’s vision to the screen.
And there’s a great deal of scientific knowhow that goes into creating cinematic illusion. The 2014 movie Interstellar, which won the visual effects award, involved recreating the appearance of a black hole. To do this, visual effects artists worked with scientists to accurately model the phenomenon. The results were so advanced that scientists have since cited their importance to ongoing research.
This scientific knowledge underpins flawless visual effects production. Not only does a visual effects artist need to know how their tools work, they need to be able to understand the science that informs the visuals we see on the screen. Human and animal anatomy, lighting, pyrotechnics, fluid simulation, mechanical engineering and robotics are just a few of the scientific disciplines that add strings to a visual effects artist’s bow.
So, when we talk about visual effects and the people who create them, remember the science that supports almost everything they do. Every frame is looked at in minute detail, so much so that the casual viewer might never understand the hours that go into making one of these films look the way they do and allow us to sit back and enjoy the story.
Many of today’s politicians appear to appeal to the basic human need for safety, presenting their versions of strong leadership as the best hope for order and safety in a fearful world of growing instability and risk. Much evidence confirms that this appeal is certainly an important factor in the political landscape.
But alongside this, other psychological dynamics are currently influential in a number of Western democracies – particularly in attracting people to support populist leaders and their agendas.
One of these – which is of particular relevance to the impeachment trial of the US president, Donald Trump – concerns the pleasure and excitement that some citizens appear to find in a leader who breaks rules and ignores taboos. These transgressions can come in various forms, such as controversial statements, unconventional lifestyles or disrespectful approaches to the political process. But they can also extend to improper activities and abuse of power – such as those detailed in the impeachment charges against Trump – or anti-democratic activity and violence.
I suggest that support for this kind of leader can be understood as “identification with the transgressor”. This is an idea modelled on the concept of “identification with the aggressor”, a term coined by the psychoanalyst Anna Freud in 1936. Since then, psychologists have used the concept to understand a range of behaviours, including our tolerance of or collusion with bullies.
Different types of transgressive leader can appeal to transgressive parts of ourselves. Like others before him, the psychoanalyst Sigmund Freud, Anna’s father, observed that some measure of resentment towards authority, and of longing to cast aside the rules, is a universal feature of the human psyche. In its development since Freud, the psychoanalytic tradition has examined how this longing is a legacy of the painful process of emotional development we each undergo very early in life, as we come to accept the limits placed on us as requirements for membership of human society.
Where there are good reasons to think that normal political processes are failing, many people can feel a surge of gratitude towards a leader who breaks with some conventions with the aim of bringing more integrity and legitimacy to political life. Lech Wałęsa in Poland and Vaclav Havel in Czechoslovakia, and others who led the way out of totalitarianism for countries in the Communist bloc, were certainly transgressors within the political worlds they confronted. They could be identified as a force for good in a corrupt or sclerotic system.
But given our built-in ambivalence towards authority and rules, we can also identify with political leaders whose transgressions are driven at least in part by more destructive impulses. While promising their supporters a better world, these leaders use rhetoric that focuses on the urgent need to attack existing authorities and destroy existing arrangements, with little real attention paid to how to replace them.
One example is a coup leader who, once in power, has little plan for bettering their country. At worst is the leader free of most if not all moral constraint, who is contemptuous of international standards of conduct, and unconcerned by the human costs of his or her own conduct.
Impact on voters
Therefore, one psychological question hanging over the US impeachment proceedings is the extent to which Trump’s support base will judge him negatively over the events at the centre of the impeachment trial. When Americans head to the polls in November 2020, how many will be inclined to enjoy Trump’s truculent dismissal of any criticism, and his capacity to brazen it out?
Remember, evidence of Trump’s questionable moral conduct was available to the US electorate in 2016. Following the release before the election of a videotape in which he boasted about groping women without their consent, 91% of those likely to vote for Trump said in a CBS/YouGov poll that the tape didn’t change their view of him. And Trump was elected.
The refusal by many voters to censure Trump for his transgressions has a powerful psychological basis in the wish to break free of authority. This wish can also be indulged without the guilt that would, for most people, usually accompany an assault on widely held values.
That’s because a leader like Trump offers an opportunity to combine transgressive pleasure with the moral high ground. This emotional package is offered to those who identify with Trump’s (somewhat erratic) self-presentation as a fusion of pleasure-seeking rebel and visionary saviour, leading an insurrection against the corrupt authorities – “the swamp”.
The eulogistic book on Trump by conservative commentator Ann Coulter is one of many demonstrations of how much his supporters are energised by the wish to attack the “establishment” for its own alleged transgressions. Of course, not all Trump supporters feel this way, or support him for the same reasons.
This populist attack on the established elite can enable the supporters of the transgressive leader to feel that they are on a moral crusade, as well as being in it for the pleasure kick. This could be a powerful aid to Trump in the coming election. We should expect such a transgressor figure to continue attracting strong identification and support, unless challenged by a leader who can somehow disrupt the transgressor’s psychological relationship with their support base.
In the hard-nosed world of journalism, admitting to suffering from Post-Traumatic Stress Disorder (PTSD) has traditionally been taboo – a sign of weakness never to be admitted to colleagues in the newsroom where the remedy was often a stiff drink or two. Despite repeated efforts over the past decade to draw attention to the dangers of mental illness faced by foreign correspondents, that stigma has not gone away.
It can only be hoped that may change now that one of the BBC’s most high-profile correspondents, Fergal Keane, has shared publicly the PTSD he has been tussling with privately for several years.
The BBC announced that after decades of covering conflict, its veteran war reporter would be changing his role from that of Africa editor to “further assist his recovery”. The corporation’s head of newsgathering, Jonathan Munro, said: “It is both brave and welcome that he is ready to be open about PTSD.”
Keane is not the first correspondent by any means to have shared in public the impact that covering a relentless diet of conflict, crisis and disaster can have on even the most resilient human being. His BBC colleague Jeremy Bowen, Middle East editor, spoke about his own diagnosis of PTSD in 2017, characterised by bouts of depression related to his work.
Renowned foreign correspondents such as Janine di Giovanni have also written movingly about the effects of PTSD. In her 2011 memoir, Ghosts by Daylight, she confessed that crisis had become normality and “this real life, with all its sharp edges was terribly difficult”.
Almost two decades ago, research by South African psychologist Anthony Feinstein underscored the importance of efforts to introduce structured trauma training and counselling into news organisations. His first major study of 140 war journalists published in 2002 found that they had significantly more psychiatric difficulties than journalists who did not report on war.
In particular, the lifetime prevalence of PTSD in journalists who cover war was similar to rates reported for combat veterans, while the rate of major depression exceeded that of the general population. In 2018, Feinstein conducted a retrospective study of PTSD data collected over 18 years from journalists who have covered conflict across four continents.
Between 1999 and 2017, data had been collected from 684 journalists covering stories ranging from the 9/11 attacks and the Arab Spring to drug wars in Mexico and the refugee crisis in Europe. The data showed that the majority of the correspondents did not display prominent symptoms of PTSD at any one moment in time. But over a longer time-frame (many correspondents were spending well over a decade covering conflict) the data confirmed that rates of the full PTSD syndrome can approach those experienced by those engaged in actual combat – and he cautioned that news organisations could not afford to be complacent when it came to their duty of care.
Large news organisations such as the BBC and Reuters have made great strides in recognising the issues associated with PTSD and providing both training and support. This has been reinforced by the work of a US-based global charity, the Dart Centre for Journalism & Trauma which offers a range of best practice guidelines and resources to safeguard the mental well-being of journalists.
This is not just about those on the frontline of foreign reporting. Almost every journalist will end up covering traumatic news events in their career – whether sexual violence, traffic accidents or criminal trials. Most recently, there is growing awareness of the dangers posed to journalists in the newsroom monitoring incoming, raw user-generated content from the sites of conflict, terror and disaster worldwide – what has been dubbed the “digital frontline”.
It is a point that was highlighted in a 2015 survey by Eyewitness Media Hub. This major study on the issue surveyed 122 journalists around the world and concluded that:
Office-bound staff who used to be somewhat shielded from viewing atrocities are now bombarded day in and day out with horrifically graphic material that explodes onto their desktops in volumes, and at a frequency that is very often far in excess of the horrors witnessed by staff who are investigating or reporting from the actual frontline.
Slowly but surely, journalism courses at universities in the UK are waking up to the importance of trauma training before students enter this professional environment. We would like to think that the work we are doing at Bournemouth University through education, research and professional practice – in conjunction with the Dart Centre, the BBC and others – is starting to make a difference.
The aim is to create an awareness of how people caught up in traumatic news might react and how to conduct ethical interviews with victims and survivors of trauma. In addition, we feel it is only responsible to make our journalism students aware of the mental stresses that journalists are exposed to whether on the frontline or in the newsroom.
That doesn’t mean we should assume that every journalist who covers a distressing news story or handles sensitive material will develop PTSD. But it is important to do our best to build resilience and develop coping strategies so that journalists can bounce back stronger from the impact of covering distressing news.
As Keane’s case illustrates, PTSD can often present itself long after events. He has spoken and written about the effects that covering the 1994 Rwanda genocide had on him. It can only be hoped that the courage Keane has displayed in moving himself away from the frontline will send a signal that it is acceptable to recognise mental health issues in journalism.
Far from turning his back on the profession, according to the BBC he intends to “continue to provide original and compelling journalism” and hopes to draw on his experiences to guide and nurture young journalists. This can only be positive for the next generation of journalists.
In the last few years, the use of 3D printing has exploded in medicine. Engineers and medical professionals now routinely 3D print prosthetic hands and surgical tools. But 3D printing has only just begun to transform the field.
Today, a quickly emerging set of technologies known as bioprinting is poised to push the boundaries further. Bioprinting uses 3D printers and techniques to fabricate the three-dimensional structures of biological materials, from cells to biochemicals, through precise layer-by-layer positioning. The ultimate goal is to replicate functioning tissue and material, such as organs, which can then be transplanted into human beings.
We have been mapping the adoption of 3D printing technologies in the field of health care, and particularly bioprinting, in a collaboration between the law schools of Bournemouth University in the United Kingdom and Saint Louis University in the United States. While the future looks promising from a technical and scientific perspective, it’s far from clear how bioprinting and its products will be regulated. Such uncertainty can be problematic for manufacturers and patients alike, and could prevent bioprinting from living up to its promise.
From 3D printing to bioprinting
Bioprinting has its origins in 3D printing. Generally, 3D printing refers to all technologies that use a process of joining materials, usually layer upon layer, to make objects from data described in a digital 3D model. Though the technology initially had limited applications, it is now a widely recognized manufacturing system that is used across a broad range of industrial sectors. Companies are now 3D printing car parts, education tools like frog dissection kits and even 3D-printed houses. Both the United States Air Force and British Airways are developing ways of 3D printing airplane parts.
In medicine, doctors and researchers use 3D printing for several purposes. It can be used to generate accurate replicas of a patient’s body part. In reconstructive and plastic surgeries, implants can be specifically customized for patients using “biomodels” made possible by special software tools. Human heart valves, for instance, are now being 3D printed through several different processes although none have been transplanted into people yet. And there have been significant advances in 3D print methods in areas like dentistry over the past few years.
While bioprinting is not entirely a new field because it is derived from general 3D printing principles, it is a novel concept for legal and regulatory purposes. And that is where the field could get tripped up if regulators cannot decide how to approach it.
State of the art in bioprinting
Scientists are still far from accomplishing 3D-printed organs because it’s incredibly difficult to connect printed structures to the vascular systems that carry life-sustaining blood and lymph throughout our bodies. But they have been successful in printing nonvascularized tissue like certain types of cartilage. They have also been able to produce ceramic and metal scaffolds that support bone tissue by using different types of bioprintable materials, such as gels and certain nanomaterials. A number of promising animal studies, some involving cardiac tissue, blood vessels and skin, suggest that the field is getting closer to its ultimate goal of transplantable organs.
We expect that advancements in bioprinting will increase at a steady pace, even with current technological limitations, potentially improving the lives of many patients. In 2019 alone, several research teams reported a number of breakthroughs. Bioengineers at Rice and Washington Universities, for example, used hydrogels to successfully print the first series of complex vascular networks. Scientists at Tel Aviv University managed to produce the first 3D-printed heart. It included “cells, blood vessels, ventricles and chambers” and used cells and biological materials from a human patient. In the United Kingdom, a team from Swansea University developed a bioprinting process to create an artificial bone matrix, using durable, regenerative biomaterial.
Though the future looks promising from a technical and scientific perspective, current regulations around bioprinting pose some hurdles. From a conceptual point of view, it is hard to determine what bioprinting effectively is.
Consider the case of a 3D-printed heart: Is it best described as an organ or a product? Or should regulators look at it more like a medical device?
Regulators have a number of questions to answer. To begin with, they need to decide whether bioprinting should be regulated under new or existing frameworks, and if the latter, which ones. For instance, should they apply regulations for biologics, a class of complex pharmaceuticals that includes treatments for cancer and rheumatoid arthritis, because biologic materials are involved, as is the case with 3D-printed vaccines? Or should there be a regulatory framework for medical devices better suited to the task of customizing 3D-printed products like splints for newborns suffering from life-threatening medical conditions?
In Europe and the U.S., scholars and commentators have questioned whether bioprinted materials should enjoy patent protection because of the moral issues they raise. An analogy can be drawn from the case of Dolly the sheep, cloned more than 20 years ago. There, the U.S. Court of Appeals for the Federal Circuit held that the cloned sheep could not be patented because she was an identical copy of a naturally occurring sheep. The case highlights the parallels between cloning and bioprinting. Some people speculate that in the future there will be ‘cloneprinting’, which has the potential to revive extinct species or solve the organ transplant shortage.
The Dolly the sheep case illustrates the courts’ reluctance to traverse this path. If, at some point in the future, bioprinters – or indeed cloneprinters – can be used to replicate not simply organs but also human beings using cloning technologies, a patent application of this nature could well fail under current law. A study funded by the European Commission, led by Bournemouth University and due for completion in early 2020, aims to provide legal guidance on the various intellectual property and regulatory questions surrounding such issues, among others.
On the other hand, if European regulators classify the product of bioprinting as a medical device, there will be at least some degree of legal clarity, as a regulatory regime for medical devices has long been in place. In the United States, the FDA has issued guidance on 3D-printed medical devices, but not on the specifics of bioprinting. More important, such guidance is not binding and only represents the thinking of a particular agency at a point in time.
Cloudy regulatory outlook
Those are not the only uncertainties that are racking the field. Consider the recent progress surrounding 3D-printed organs, particularly the example of a 3D-printed heart. If a functioning 3D-printed heart becomes available, which body of law should apply beyond the realm of FDA regulations? In the United States, should the National Organ Transplant Act, which was written with human organs in mind, apply? Or do we need to amend the law, or even create a separate set of rules for 3D-printed organs?
We have no doubt that 3D printing in general, and bioprinting specifically, will advance rapidly in the coming years. Policymakers should be paying closer attention to the field to ensure that its progress does not outstrip their capacity to safely and effectively regulate it. If they succeed, it could usher in a new era in medicine that could improve the lives of countless patients.
Online misinformation works, or so it would seem. One of the more interesting statistics from the 2019 UK general election was that 88% of advertisements posted on social media by the Conservative Party pushed figures that had already been deemed misleading by the UK’s leading fact-checking organisation, Full Fact. And, of course, the Conservatives won the election by a comfortable margin.
Internet firms such as Facebook and Google are taking some steps to limit political misinformation. But with Donald Trump aiming for reelection in 2020, it seems likely we’ll see just as many false or misleading statements online this year as in the past. The internet, and social media in particular, has effectively become a space where anyone can spread any claim they like regardless of its veracity.
Yet to what degree do people actually believe what they read online, and what influence does misinformation really have? Ask people directly and most will tell you they don’t trust the news they see on social media. And a landmark study in 2019 found 43% of social media users admitted to sharing inaccurate content themselves. So people are certainly aware in principle that misinformation is common online.
But ask people where they learned about the “facts” that support their political opinions, and the answer will often be social media. A more complex analysis of the situation suggests that for many people the source of political information is simply less important than how it fits with their existing views.
Research into the UK Brexit referendum and 2017 general election found that voters often reported making their decisions based on highly spurious arguments. For example, one voter argued that Brexit would stop the takeover of the British high street by foreign companies such as Costa Coffee (which was British at the time). Similarly, a Remain voter spoke of mass deportations of any non-UK born resident if the country left the EU, a much more extreme policy than anything actually put forward by politicians during the campaign.
During the 2017 election, various claims were made by survey respondents that unfairly questioned Conservative leader Theresa May’s humanity. For example, some falsely argued she enacted laws that led to flammable cladding being placed on the exterior of Grenfell Tower, the London block of flats that caught fire in June 2017, killing 72 people. Others called her Labour opponent Jeremy Corbyn a terrorist sympathiser, or a victim of a conspiracy to discredit him by the military and industrial elites. The common thread was that these voters gained the information to support their arguments from social media.
How do we explain the apparent paradox of knowing social media is full of misinformation and yet relying on it to form political opinions? We need to look more widely at what has become known as the post-truth environment. This involves a scepticism of all official sources of news, a reliance on existing beliefs and biases formed from deeply held prejudices, and a search for information that confirms bias as opposed to critical thinking.
People judge information on whether they find it believable as opposed to whether it is backed by evidence. Sociologist Liesbet van Zoonen calls this the replacement of epistemology – the science of knowledge – with “i-pistemology” – the practice of making personal judgements.
A lack of trust in elite sources, in particular politicians and journalists, doesn’t fully explain this large-scale rejection of critical thinking. But psychology can provide some potential answers. Daniel Kahneman and Amos Tversky developed a series of experiments exploring the conditions under which humans are most likely to jump to conclusions about a specific topic. They argue that intelligence has little bearing on whether we make ill-informed judgements.
Intelligence tests demonstrate the capacity to perform logical reasoning, but cannot predict that it will be performed at every moment it is needed. As I have argued, we need to understand the context of people’s decisions.
The average undecided voter is bombarded with arguments from political leaders, especially in marginal seats or swing states that can make a difference to the outcome of an election. Every politician offers a redacted account of their or their opponents’ policies. And voters are aware that each of these politicians is trying to persuade them and so they retain a healthy scepticism.
The average voter also has a busy life. They have a job, perhaps a family, bills to pay and hundreds of pressing issues to address in their daily lives. They know the importance of voting and of making the right decision, but struggle to navigate the contested election communication they receive. They want a simple answer to that age-old conundrum: who most – or least – deserves my vote?
So instead of conducting a systematic critical analysis of every piece of evidence they encounter, they look for specific issues that they see as driving a wedge between the competing politicians. This is where fake news and disinformation can be powerful. As much as we like to think we’re good at spotting fake news and being sceptical of what we’re told, we’re ultimately susceptible to whatever information makes it easiest to make a decision that seems right, even if in the long term it may be wrong.
Dr Mili Shrivastava has published an article in The Conversation on Indian women entrepreneurs, based on her research into women entrepreneurs in the UK and India. The article outlined how women entrepreneurs are creating businesses that tackle environmental problems while creating opportunities for different sections of society.
The article has reached readers far and wide across continents and has been widely shared on social media.
BU is celebrating Global Entrepreneurship Week for the first time on 19 November with not ONE but TWO mega events! The events support the student experience, support BU’s commitment to the UN Sustainable Development Goals, and provide a platform to bring together wonderful examples of the power of enterprise in changing society.
Women in Entrepreneurship: an extraordinary panel of women from various sectors, from the UK and beyond. We have a number of famous faces on the panel, as well as women who are quietly making a huge impact on society and the economy, helping to break down gender barriers to entrepreneurial activities. I am immensely proud to introduce the panel and the three wonderful women from Brazil who are also going to join us (see attached pic). The Women in Entrepreneurship panel has been made possible by funding from the Women’s Academic Network (WAN), the ACORN award (public dissemination of research) and the Faculty of Management (Executive Dean Dr Lois Farquharson).
Venue: KG01. Time: 12:45–16:30.
Also on the 19th, we are bringing SOUP to BU. What's that, you ask? BH SOUP (modelled on the Detroit SOUP movement) has been running successfully in the conurbation for the last few years and this year, to celebrate GEW and to harness the energy of the newly launched BU Social Entrepreneurs Forum, BH SOUP is coming to BU with BH SOUP Loves Social Enterprises. This event, too, has been made possible by the Faculty of Management (Dr Lois Farquharson).
Venue: Fusion Building, ground floor space. Time: 18:45–21:00.
Please see the Eventbrite links below to register (for FREE) for the event(s).
As polls closed in Greece on July 7, with pollsters predicting a convincing victory for the centre-right New Democracy and a defeat for the left-wing Syriza government of Alexis Tsipras, an unusual sense of calm prevailed across the country. Rarely has a Greek election night been so quiet.
New Democracy’s incoming prime minister, Kyriakos Mitsotakis, went out of his way to unite and manage expectations. His supporters were just relieved to have ousted Tsipras. Tsipras himself looked relieved, having managed to reverse his party’s losses at the recent European Parliament elections, to win a respectable 31.5% of the vote, which will allow him to remain as a strong second pole in the system. With 39.9% of the vote, New Democracy will have 158 seats in the Greek parliament, an outright majority.
Smaller parties all put on a happy face for their own internal reasons, with the exception of the neo-Nazi Golden Dawn, which failed to pass the 3% threshold to elect MPs. It looked as if Greece had finally attained what it had been desperately seeking for one long decade: a sense of normalcy.
Exactly ten years ago, in the summer of 2009, the first signs that Greece was in economic trouble started to become apparent. As the markets’ confidence in Greek bonds collapsed, the government turned to the European Union and the International Monetary Fund (IMF). Within weeks it had entered a vortex of excruciating negotiations, conditional bailouts and tough austerity measures that went on and on. To an extent these are still going on and, in different forms, are expected to go on for much of the 21st century.
It’s hard to overstate the impact of the crisis and austerity on Greek society. Beyond the obvious effects – unemployment reaching 25%, hundreds of thousands of mostly young and well-educated people leaving the country to seek employment abroad, pensions and public services facing severe cuts – the political system was rattled. One of the two main pillars of the post-1974 system, the centre-left PASOK, collapsed. Far-right parties, such as Golden Dawn and the xenophobic, homophobic Independent Greeks, entered parliament.
The crisis has been the single biggest challenge to Greece’s survival since World War II. Its root causes, the way Greeks were stereotyped in the world’s media, and the way lenders and successive Greek governments designed and implemented austerity measures, all became sources of collective shame and humiliation. This in turn polarised a political culture that has been historically prone to bouts of instability and violence.
Rise of violence
Tsipras weaponised and normalised this populist narrative of victimhood, pitting the “innocent people” against “the corrupt elites”, including Greece’s EU partners and lenders. As I have shown in my research, this narrative was also used by far-left radical groups to justify revenge and aggression.
Political violence tripled between 2008 and 2018. Far-left violence was 3.5 times greater in scale than far-right violence, which itself soared. Low-level incidents are a daily occurrence, with thousands having taken place during the decade of the crisis, especially before Syriza came to power. Radicalisation and extremism have been particularly prominent among young people. While many are politically apathetic, those who do engage tend to do so in radical ways. Golden Dawn drew most of its supporters from the 18-25 age group, while Syriza has consistently topped the polls in that group.
The January and September 2015 victories of Syriza, which governed in alliance with the Independent Greeks, acted as pressure valves that allowed Greek society to vent a lot of its anger and frustration. That radicalism, which was such a prominent element of Greek political culture during the first period of the crisis, gradually ran out of steam.
From January to June 2015, Yanis Varoufakis, the flamboyant poster boy of the “caviar left”, led catastrophic and slightly surreal negotiations with EU and IMF lenders. These ended up costing Greece billions of euros, triggered a bank run and capital controls, caused the country to default on its debts to the IMF and brought it within hours of exiting the eurozone. Eventually, Tsipras did a U-turn and, in late 2015, began implementing all of the lenders’ requests, effectively showing that there really was no alternative to austerity.
Since being elected leader of New Democracy in 2016, Mitsotakis has worked hard to renew his party. In the space of three years, he managed to turn an out-of-touch, old-school conservative party into a modern, liberal, social-media-savvy electoral machine. While banking on his image as a well-educated and professionally successful technocrat who will cut taxes and facilitate foreign direct investment, he also placed strategic emphasis on the youth vote.
He voted in favour of civil partnerships for same-sex couples and spent time meeting with drug addicts in rough parts of Athens. He also carried out a radical renewal of New Democracy’s parliamentary candidates and party workers, promoting many people in their 20s, 30s and 40s. In doing so, he managed to build up support in the crucial 18-24 demographic, reaching 27%-30% in the recent elections, and so ending Syriza’s monopoly on the youth vote.
Whether Greece has really entered a new era of normalcy will become apparent fairly soon. One of the first moves Mitsotakis pledged to take is to scrap the so-called “asylum” law, which forbids police from entering university premises. As a result of the law, urban university campuses have become hotspots of crime, vandalism, drug-dealing and anarchist propaganda, and public opinion has recently shifted in favour of taking action. However, far-left groups still carry street cred in universities and in the urban pocket of Exarchia in downtown Athens, where law and order has completely collapsed.
On election day in Greece, the only incident that broke the peaceful hum of post-election dinner parties took place there: a previously unknown anarchist group stole and burnt a ballot box, threatened electoral clerks and threw tear gas. What happens at Exarchia over the next few months – whether and how the government decides to enforce the law and how young people and wider society react – will be the best indicator of whether Greece has truly turned the page.
British Airways (BA) has received a record fine of £183m after details of around 500,000 of its customers were stolen in a data breach in summer 2018. The fine was possible thanks to new rules introduced last year by the EU’s General Data Protection Regulation (GDPR), which gave the British regulator powers to impose much larger penalties on companies that fail to protect their customers’ data.
But fines like these don’t just act as a business deterrent because of their financial cost. They are a method of public shaming that we can use as a form of social control to force companies to act more ethically. And research on consumer behaviour has demonstrated that social (dis)approval can be a more powerful motivator than financial factors.
The public nature of the fine is embarrassing for BA, as it reminds the public of the data breach and delivers an official verdict that the company was at fault. The huge size of the fine also indicates how serious the breach was. As a result, BA will rightly be worried about what damage the fine might do to its reputation.
Reputation is a valuable commodity for companies, and in some instances can be more important to consumers than the price of products when they are choosing who to buy from. We tend to make simplistic conclusions about the people and groups around us based on their behaviour, a phenomenon known as fundamental attribution error. This suggests a fine could lead consumers to conclude that if a company cannot protect its data – regardless of whether it has any value – then it should not be trusted on other aspects of its operations.
Although GDPR has hugely increased the size of the penalties for breaches, BA isn’t the first organisation to be publicly fined in the UK for breaking data protection rules; others include Facebook, Uber and the Royal Mail. Given the importance of reputation to companies, there’s a chance these organisations would rather have accepted a higher fine in exchange for the amount not being made public.
Establishing social norms
The fine won’t just have an impact on BA either. Online data breaches are relatively new phenomena, but this sort of public shaming is an old method of social control. It sets and reinforces social norms and standards about what all organisations should be expected to be able to achieve, a message that can be intended for both businesses and the public.
My research has shown how social norms have a powerful influence over people’s behaviours and attitudes. We judge ourselves and others in relation to adherence to our collective perceptions of how we, as a society, believe we should be performing.
It’s not easy for a society to reach a consensus on what a social norm should be for a new phenomenon, especially in situations where we are uncertain about our own degree of knowledge and understanding. For most people, hacking and hackers remain a relatively murky threat that is hard to define or quantify, and the dangers of having your data released into the wild aren’t easy to see.
But there is evidence that consumers are becoming more concerned about businesses that do not keep their data secure, particularly after the introduction of GDPR. High-profile businesses receiving major fines could help spur this process further.
But that’s not the end of the story. At the time of the breach, BA described it as a “sophisticated, malicious, criminal attack”. This sort of narrative implies it’s difficult for organisations to protect themselves against highly motivated and technically skilled criminals. Hollywood portrayals of hackers as hoodie-wearing lone geniuses support this idea that it’s impossible for any organisation to fully prevent attacks.
While not exactly putting a positive spin on a company’s involvement in a data breach, this idea does limit the damage done to its reputation. It assumes that organisations are already doing everything they can reasonably do to protect their systems and customers.
Hacker communities take a very different position, arguing that many large organisations fail to take the basic steps that could be expected of them, despite having the resources to do so. If this is the case, we can expect to see more companies hit by penalties that could be even larger (the UK’s rules allow fines of up to 4% of a company’s turnover).
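For a sense of scale, the penalty ceiling can be sketched as a simple calculation. This is a minimal illustration: GDPR's upper tier is the greater of €20m or 4% of worldwide annual turnover, and the turnover figures used below are hypothetical.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper tier of GDPR administrative fines: the greater of
    EUR 20m or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company with a hypothetical turnover of EUR 13bn faces a ceiling
# of around EUR 520m -- far above the EUR 20m floor.
print(max_gdpr_fine(13_000_000_000))
print(max_gdpr_fine(100_000_000))  # a smaller firm: the EUR 20m floor applies
```

The 4% figure is of turnover, not profit, which is why the ceilings dwarf most penalties regulators have historically imposed.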
But social norms are fluid. What can seem shocking or extreme at one moment can quickly become the new normal. Heavy fines always cause financial pain to organisations, but if they become widely used and publicly reported then there’s a risk that they become seen as the cost of doing business, as arguably has happened with fines relating to health and safety. This would make fines less damaging to a company’s reputation and so less useful in forcing firms to do their best to protect customer data.
As such, only a strategic use of fines will help the public see how serious it is when organisations fail to live up to the data standards our new laws have set. If this is achieved then it may help the public understand the seriousness of data security, and in turn take greater responsibility over their own safety online.
All rivers lead to the sea, which is why it is important to consider the health of both our rivers and oceans. To celebrate the United Nations’ World Oceans Day 2019 (https://www.un.org/en/events/oceansday/) Genoveva Esteban and Dan Franklin from BU’s Department of Life and Environmental Sciences, with help from Thomas Hardye Academy’s 6th Form students and science teachers, showed Prince of Wales (Dorchester) Year 1 pupils how to study the invertebrate indicators of clean water, by kick sampling. Not only is it an important sampling method, it’s great fun too! Pupils were thrilled with their findings and recognised the significance of keeping rivers clean. The activity took place at the River Laboratory near Wareham (Dorset). Dr John Davy-Bowker (BU Visiting Fellow and Freshwater Biological Association Fellow) is gratefully acknowledged for his help throughout the day.
How will climate change remake our world in the 21st century? Will we be able to adapt and survive? As with many things, the past is a good guide for the future. Humans have experienced climate changes in the past that have transformed their environment – studying their response could tell us something about our own fate.
Human populations and cultures died out and were replaced throughout Eurasia during the last 500,000 years. How and why one prehistoric population displaced another is unclear, but these ancient people were exposed to climatic shifts that, in turn, changed their natural environment.
We looked at the region around Lyon, France, and imagined how Stone Age hunter-gatherers 30,000-50,000 years ago would have fared as the world around them changed. Here, as elsewhere in Eurasia during colder periods, the environment would have shifted towards tundra-like vegetation – vast, open habitats that may have been best suited for running down prey while hunting. When the climate warmed for a few centuries, trees would have spread – creating dense woods which favour hunting methods involving ambush.
How these changes affected a population’s hunting behaviour could have decided whether they prospered, were forced to migrate, or even died out. The ability of hunter-gatherers to detect prey at different distances and in different environments would have decided who dominated and who was displaced.
Short of building a time machine, finding out how prehistoric people responded to climate change is only possible by recreating their worlds as virtual environments. Here, researchers can control the mix and density of vegetation and enlist modern humans to explore them and see how they fare at finding prey.
Surviving in the virtual Stone Age
We designed a video game environment and asked volunteers to find red deer in it. The world they explored changed to scrub and grassland as the climate cooled, and to thick forest as it warmed.
The participants could spot red deer at a greater distance in grassland than in woodland, when the density of vegetation was the same. As vegetation grew thicker they struggled to detect prey at greater distances in both environments, but more so in woodland. Prehistoric people would have faced similar struggles as the climate warmed, but there’s an interesting pattern that tells us something about human responses to change.
Creeping environmental change didn’t affect deer spotting performance in the experiment until a certain threshold of forest had given way to grassland, or vice-versa. Suddenly, after the landscape was more than 30% forested, participants were significantly less able to spot deer at greater distances. As an open environment became more wooded, this could have been the tipping point at which running down prey became a less viable strategy, and hunters had to switch to ambush.
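The intuition behind such a tipping point can be illustrated with a toy line-of-sight calculation. This is not the study's actual method, and every density figure below is a made-up value for illustration: the chance of an unobstructed view simply falls off exponentially with the amount of vegetation along the sight line.

```python
import math

def detection_probability(distance_m, tree_density, trunk_width_m=0.5):
    """Toy line-of-sight model: the sight line is blocked at a rate
    proportional to tree density (a 'mean free path' argument).
    tree_density is trees per square metre -- illustrative values only."""
    expected_blockers = tree_density * distance_m * trunk_width_m
    return math.exp(-expected_blockers)

# Chance of spotting a deer 100 m away as a hypothetical landscape
# shifts from open grassland towards woodland.
for forest_fraction in (0.0, 0.1, 0.3, 0.5):
    density = 0.02 * forest_fraction  # assumed maximum of 0.02 trees per m^2
    p = detection_probability(100, density)
    print(f"{forest_fraction:.0%} forested: P(detect at 100 m) = {p:.2f}")
```

Even in this crude sketch, detection at long range degrades quickly as tree cover accumulates, which is consistent with long-range hunting strategies becoming unviable past some level of afforestation.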
This is likely the critical moment at which ancient populations were forced to change their hunting habits, relocate to areas more favourable for their existing techniques, or face local extinction. As the modern climate warms and ecosystems change, our own survival could become threatened by these sudden tipping points.
The effects of climate change on human populations may not be intuitive. Our lifestyles may seem to continue working just fine up until a certain point. But that moment of crisis, when it does arrive, will often dictate the outcome – adapt, move or die.
Huawei’s role in building new 5G networks has become one of the most controversial topics in current international relations. The US is exercising direct diplomatic pressure to stop states from using the Chinese telecoms giant. The US government regards Huawei as a clear and present danger to national security and argues that any ally opting for Huawei will compromise vital intelligence sharing among these countries in the future.
When assessing the risks of having Huawei involved in building 5G infrastructure, it’s important to consider not just the security risk from Huawei, but also the wider context of international relations. It’s important to first recognise that China is a major cyber-power.
It is in the context of China’s growing cyber-power that Huawei is seen as a risky business partner when it comes to developing critical infrastructure, such as a new 5G network. Huawei may insist that it is an independent company that does not have ties to the Chinese government, but this is not how it looks to Western powers. According to the CIA, Huawei has received funding from both the Chinese army and Chinese state intelligence. Plus, it does not help that Huawei’s founder, Ren Zhengfei, was once an engineer in the Chinese army, and that the company’s ownership lies with a “trade union committee” that is appointed by the state.
Then there’s China’s National Intelligence Law of 2017, which requires Chinese companies “to provide necessary support, assistance and cooperation” with national intelligence work, if called upon. So Huawei’s assurances that it will not hand over customer data to the government are difficult to trust. All the more so given China’s track record of using private actors for the purposes of spying.
Backdoors and vulnerabilities
If a country’s 5G network is compromised, this could open it up to a number of risks. First, there’s simply access to information that is transmitted across the network. More worryingly, the “internet of things” will be built on 5G. Everyday devices will all be connected – from driverless cars to smart fridges, speakers and traffic signals.
This opens the possibility for a determined actor (whether state or non-state) to control these important processes. A cyber-attack via 5G infrastructure could lead to significant damage to property and even loss of life, and would amount to an armed attack under international law.
The UK’s National Cyber Security Centre (NCSC) has a dedicated Huawei Cyber Security Evaluation Centre. Its 2019 report found no evidence of Chinese state interference or the deliberate introduction of “backdoors” that could be used to siphon off information. But it does criticise Huawei’s technology for being generally vulnerable to attack. The potential risks, however, apply to any equipment vendor that the UK may choose to use instead of Huawei.
In light of the current US government’s tough stance on China, in terms of trade and security, it is fair to ask whether the present US warnings have more to do with denying market access to a strong competitor than with security concerns. If so, the UK may have to decide whether it values its relations with the US or China more. As well as the security risks that Huawei may pose, the UK must consider the importance of maintaining its information sharing arrangement with the US and the other “Five Eyes” countries: Australia, New Zealand and Canada.
The trust issue will always remain with Huawei because of its proximity to the Chinese government. But, after the UK’s top spies said Huawei could be “managed” in terms of potential security risks, the main risk at the moment seems to be diplomatic: repercussions in relations with Washington, including a potential backlash over a post-Brexit trade deal and the suspension of intelligence sharing. With China potentially becoming a global adversary to the West as a whole (not just the US), the UK should bear in mind which side it is choosing when deciding who builds its 5G infrastructure.
Guardian journalist George Monbiot wrote a damning critique of the BBC and Sir David Attenborough’s wildlife documentaries in late 2018, arguing that they do little to illustrate the huge environmental issues faced by the natural world.
Since then, Attenborough has adopted a much stronger position. He spoke at both the UN Climate Summit and the World Economic Forum in Davos, and used his platform to highlight the threats of climate change.
Embarking on a new collaboration, Attenborough and the BBC are set to confirm their position in a one-off documentary entitled Climate Change – The Facts, airing on April 18. The 90-minute film will explain the effects that climate change has already had and the disasters it might cause in future.
Although it’s crucial to raise awareness among the public about the impacts and threats of climate change, it’s equally important to explain how to fight it. That’s something the BBC has been much quieter about.
Solutions to climate change
The recent series Blue Planet Live featured a segment on the Great Barrier Reef which stated that coral bleaching is the result of climate change, placing the BBC in line with the scientific consensus. The same episode later described the “heroic research” effort needed to save the world’s reefs from coral bleaching, and covered the capture and transfer of coral spawn to a new location.
In an era when schoolchildren are striking for climate action and radical proposals for climate action are entering the political mainstream, the BBC’s timidity towards even discussing solutions seems odd.
Covering these arguments is political, but it goes way beyond party politics and certainly wouldn’t breach impartiality guidelines. Audiences might find this less interesting than coral spawning being captured during a lightning storm, as was shown on Blue Planet Live. But if the BBC doesn’t address the solutions to climate change, then how can there be an educated public which understands that saving the planet requires more than individual gestures like carrying a reusable coffee cup?
There’s no doubt that Attenborough’s BBC documentaries have inspired millions of people around the world to take environmental issues seriously. His programmes have encouraged many of our students to undertake degrees in environmental sciences.
Their insights into the natural world can present a sense of environmental optimism that promotes action. But failing to address the political and economic solutions necessary to stop climate change means the BBC could fail to respect its own values in education and citizenship. With their new documentary, Attenborough and the BBC should challenge our current economic system – only then can they fulfil their duty to inform the public with accuracy and impartiality.
Cyber security incidents are gaining an increasingly high profile. In the past, these incidents may have been perceived primarily as a somewhat distant issue for organisations such as banks to deal with. But recent attacks such as the 2017 WannaCry incident, in which a cyber attack disabled the IT systems of many organisations including the NHS, demonstrate the real-life consequences that cyber attacks can have.
These attacks are becoming increasingly sophisticated, using psychological manipulation as well as technology. Examples of this include phishing emails, some of which can be extremely convincing and credible. Such phishing emails have led to cyber security breaches at even the largest of technology companies, including Facebook and Google.
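To illustrate why even convincing phishing emails can sometimes be caught, here is a toy indicator checker. The heuristics, brand names and phrase list are purely hypothetical examples for illustration; real filters rely on sender reputation, machine learning and far more context than this.

```python
import re
from urllib.parse import urlparse

# Hypothetical "pressure" phrases often associated with phishing.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "your account will be suspended",
]

def phishing_indicators(sender: str, body: str, links: list) -> list:
    """Return a list of simple red flags found in an email (toy example)."""
    reasons = []
    # A well-known brand in the body but a mismatched sender domain
    # is a classic warning sign (brand check is illustrative only).
    if "paypal" in body.lower() and not sender.lower().endswith("@paypal.com"):
        reasons.append("brand mentioned but sender domain does not match")
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in body.lower():
            reasons.append(f"pressure phrase: '{phrase}'")
    for link in links:
        host = urlparse(link).hostname or ""
        # Links pointing at raw IP addresses are rarely legitimate.
        if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
            reasons.append(f"link points to a raw IP: {host}")
    return reasons

flags = phishing_indicators(
    sender="security@paypa1-alerts.net",
    body="PayPal: urgent action required, please verify your account.",
    links=["http://192.168.0.1/login"],
)
for flag in flags:
    print(flag)
```

The point of the sketch is not that such rules are sufficient, but that attackers deliberately craft messages to pass exactly these kinds of surface checks, which is why psychological manipulation remains so effective.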
To face these challenges, society needs cyber security professionals who can protect systems and mitigate damage. Yet the demand for qualified cyber security practitioners has quickly outpaced the supply, with three million unfilled cyber security posts worldwide.
So it might come as a surprise that there is already an active population with a strong passion for cyber security – hackers. This is a term with many negative connotations. It evokes the stereotypical image of a teenage boy sat in a dark room, typing furiously as green text flies past on the computer monitor, often with the assumption that some criminal activity is taking place. The idea of including such individuals in helping build and protect cyber systems may seem counterintuitive.
But – as we have highlighted in our recent research – the reality of hacking communities is more complex and nuanced than the stereotypes would suggest. Even the phrase “hacker” is contentious for many individuals who may be labelled hackers. This is because it has lost its original meaning: someone who uses technology to solve a problem in an innovative manner.
There are a growing number of online hacking communities – and regular offline meetings and conventions where hackers meet in person. One of the largest of these events is DEFCON, held every year in Las Vegas and attended by up to 20,000 people. These hacking communities and events are an important source of information for young people who are becoming involved in hacking, and may be the first contact they have with other hackers.
On the surface, the conversations that are held on these forums often relate to sharing information. People seek advice on how to overcome different technical barriers in the hacking process. Assistance is given to those who are having difficulties – provided that they firstly demonstrate a willingness to learn. This reflects one of the characteristics of hacking communities, in that there is a culture of individuals demonstrating passion and the desire to overcome barriers.
But such events are about more than sharing practical skills. As individuals, we are strongly influenced by those around us, often to a greater degree than we are aware of. This is especially the case when we are in a new environment and unsure of the social norms of the group. As such, these online and offline hacking communities also provide an important source of social identity to individuals. They learn what is and what is not acceptable behaviour, including the ethics and legality of hacking.
Myths and opportunities
It is important to stress here that hacking is not an inherently illegal activity. There are many opportunities to engage in ethical hacking, which refers to attempting to hack systems for the purpose of finding and fixing the flaws that malicious hackers may try to exploit for criminal activity.
Our research demonstrates that the majority of people active within hacking communities have no wish to exploit the flaws they find, although they do believe that such flaws should be exposed so that they can be addressed – especially when the organisation concerned holds public data and has sufficient resources that it is reasonable to expect no gaps in its cyber security in the first place. Several large and well-known companies actively engage with this culture by offering hackers “bug bounties” – financial rewards for identifying and reporting previously undiscovered weaknesses in their systems.
Of course criminal hacking does happen – and many of the people we have spoken to acknowledge that they take part in activities that are of questionable legality in order to achieve their goal of finding the flaws in a system. This creates a risk for those people, especially young adults, who are becoming involved in hacking. Through ignorance or through being wilfully misled, they may become involved in activities that result in them gaining a criminal record.
If so, this impacts not only them as an individual but also the cyber security profession. As a result of this culture, many companies are being deprived of individuals who could have helped fill the increasingly urgent gap in cyber security professionals. To address both of these problems, we need to move past unhelpful and negative stereotypes and work with young people and hacking communities to provide an awareness of how their passion and skills can be used to address the cyber security challenges that society faces.
Museums are often perceived as dusty cabinets full of dead and ancient things, especially those institutions you’ve never heard of. You know the ones, the neglected pride of county towns that could play a vital cultural and social role but struggle for funding.
For some, technology is the answer, virtually recreating museums and their contents online, or launching fancy augmented reality smartphone apps that overlay videos of the real world with interactive computer-generated content. We certainly see the potential for such apps to make museums more exciting, especially to young people, and have recently been using them to bring dinosaurs to life.
But sadly our experience suggests visitors just aren’t keen on downloading these apps. So is there another way technology can help revitalise museums and similar attractions?
Using the phone’s camera to scan a code on a notice board or flyer brings up a 2D computer-generated image superimposed on the phone’s live camera feed. Users can see a troop of mammoths walk over the horizon with the real landscape behind, or have their selfies taken with a mammoth. We’ve since created our own free app that recreates augmented reality dinosaurs and other extinct reptiles and mammals in 3D, without the need to scan a code.
We deployed the mammoth and a T. rex at various events in 2017 and 2018, allowing visitors to pose for selfies. The tech was embraced enthusiastically, not just by children but by older generations as well. We found the sense of technological wonder coupled with a chance to strike a silly pose with an extinct animal really appealed to the visitors.
But when we first deployed the app at a museum, in summer 2018 at the Etches Collection on Dorset’s Jurassic Coast, it challenged our thinking. In fact, it stopped us dead. When we had staff on site to show people what was possible with our own tablets and phones, the technology had an impact and people were excited to see it in action (although they did not always download the app). But no one engaged when we relied on posters and banners to encourage visitors to download and use the app.
We failed at the first step, not due to a lack of interest in the technology or in the 3D dinosaurs deployed, but due to the fundamental reluctance of visitors to download museum apps. We have since found this experience to be shared by others, such as the Skybox Museum, which also struggles to get visitors to download the app deployed at its site in Manchester. In fact, the feedback we’ve received so far suggests that simply getting people to download a museum app – rather than any problem with the underlying technology – is the biggest obstacle to its success.
What makes people download apps?
To find out why, we immersed ourselves in a growing body of consumer-based research on smartphone apps. It turns out that the characteristics of an app are less important when it comes to getting people to download it than whether they trust the makers, and that brand loyalty and familiarity help build this trust. We also know that the potential for social interaction and pure enjoyment are more important than the usefulness or educational value of an app. People want to be entertained, engage with others and are wary of potential risks to their phones and personal data.
So when you’re asked to download an app at the doors of a museum, the default position is to decline. It’s a hard sell, especially if you have children in tow. Promoting the app in advance helps but, even if you overcome this reluctance, people still want a guarantee of fun.
What’s the answer? Games are an obvious possibility. Which regular museum visitor hasn’t seen a horde of children with clipboards on some form of quest or hunt? Promising a fun game is perhaps the key to getting children to try the augmented reality we know can change a museum experience.
The alternative is to make such resources available without an app, and we are exploring this. One solution might be to enable visitors to access it through their phone’s internet browser or via a standard QR code. Another idea we are trialling is to preload the technology onto a tablet hired like an audio guide at a museum’s entrance. As the software doesn’t need downloading it can be more complex, for example using locational technology such as GPS that can prompt the user to activate the device at a given spot and offer content tailored to their visit. But this would make social interaction and downloading those fun-filled selfies harder.
We believe that technology has much to offer the museums of the future. In fact, we would argue it’s essential to their survival. In particular, mixed reality, a form of enhanced augmented reality where real people and objects are displayed in virtual worlds, has some exciting potential to create immersive, engaging and educational content. But for once, the smartphone may not hold the key.