
Shaping Media Policy and Regulation

Following a business engagement event on Digital Strategy and Business Transformation and subsequent publications in academic and practitioner journals, Dr Oliver’s work on the strategic digital transformations of the UK’s Creative Industries and media firms has been credited with shaping Ofcom’s media policy and regulation. Ofcom recently commented that his research into how Sky plc had managed the digital transition over the past twenty years provided them with a unique insight into Sky’s strategic approach, enabling them to “think differently about their ‘growth strategy’ and diversification into new markets such as broadband, fixed and mobile telephony”. Ofcom concluded that Sky had in fact contributed to increased levels of competition in those sectors.

Ofcom also confirmed that the research had helped them to consider their remit as a communications regulator and the potential areas where they could use their expertise in the future, most notably in terms of the potential future regulation of the internet.

Dr Oliver’s research, ‘Strategic Transformations in the Media’, can be found on BRIAN.

BRIAN Down – 11th October 2018

IT are undertaking essential maintenance on the BRIAN servers on Thursday 11th October at 8am. This means BRIAN will be unavailable to users for a short period of time.

We will communicate on the blog as soon as BRIAN is up and running again.


Come and be a part of cutting-edge Spine Biomechanics Research!

Research participants needed! 

The Centre for Biomechanics Research (located at the AECC University College, Parkwood Campus) is currently conducting a study investigating low back joint motion patterns in pain-free adults. This study has national research ethics approval and aims to establish normal spine motion, which will support future investigations into low back pain and its possible treatments.

To collect the required data, we need pain-free volunteers aged between 30 and 70 who are willing to have their low backs scanned using a method called ‘Quantitative Fluoroscopy’. This will take place in the AECC University College Chiropractic Clinic and takes no more than an hour.

Taking part in this study means that you are helping to advance science that will benefit many patients in the future. It is also an excellent opportunity for healthcare students and staff to learn more about this emerging technology.

Please contact us at cbrstudies@aecc.ac.uk if you are interested in taking part and we will send you more information about this study. We are looking for approximately 100 more volunteers, so please do share this information with family and friends, who are also welcome to participate.


Kind regards

Alan Breen (Professor of Musculoskeletal Research)
Alex Breen (Post-Doc and Technology Lead)
Emilie Claerbout (Bournemouth University Student)


Open Research Day – Today!!

Today colleagues will be available on both campuses to answer your queries regarding Open Research.

We’ll be in BG11 on Lansdowne between 9am and 12pm, and in FG04 on Talbot between 1pm and 4pm.

Pop on down… there is cake! 🙂


BRIAN Downtime – Monday & Tuesday

BRIAN will be unavailable to users on Monday 30th April and Tuesday 1st May for a scheduled upgrade.

All relevant guidance notes and video guides on the Staff Intranet will be updated in due course. If you need any help using the new system or if you encounter any problems after the upgrade, please do send an email to BRIAN@bournemouth.ac.uk and a member of staff will be able to assist you.

BRIAN training sessions are also available every two months and are bookable through Organisational Development. The next scheduled session is:

  • Wednesday 20th June 2018

In the meantime, if you do have queries relating to the upgrade, please get in touch with BRIAN@bournemouth.ac.uk

BRIAN Upgrade – Next Week!

BRIAN will be upgrading to a new version next week, so will be inaccessible to users on Monday 30th April and Tuesday 1st May.
The main improvement in this upgrade is the introduction of a new Assessment module to enable more efficient REF preparation. We also hope to introduce more user-friendly reporting over the next few months.

All relevant guidance notes and video guides on the Staff Intranet will be updated in due course. If you need any help using the new system or if you encounter any problems after the upgrade, please do send an email to BRIAN@bournemouth.ac.uk and a member of staff will be able to assist you.

BRIAN training sessions are also available every two months and are bookable through Organisational Development. The next scheduled session is:

  • Wednesday 20th June 2018

In the meantime, if you do have queries relating to the upgrade, please get in touch with BRIAN@bournemouth.ac.uk

What is Open Access?

Open access is about making the products of research freely accessible to all. It allows research to be disseminated quickly and widely, enables the research process to operate more efficiently, and increases the use and understanding of research by business, government, charities and the wider public.

There are two complementary mechanisms for achieving open access to research.

The first mechanism is for authors to publish in open-access journals that do not receive income through reader subscriptions.

The second is for authors to deposit their refereed journal article in an open electronic archive.

These two mechanisms are often called the ‘gold’ and ‘green’ routes to open access:

  • Gold – This means publishing in a way that allows immediate access to everyone electronically and free of charge. Publishers can recoup their costs through a number of mechanisms, including through payments from authors called article processing charges (APCs), or through advertising, donations or other subsidies.
  • Green – This means depositing the final peer-reviewed research output in an electronic archive called a repository. Repositories can be run by the researcher’s institution, but shared or subject repositories are also commonly used. Access to the research output can be granted either immediately or after an agreed embargo period.

Article first published – http://www.hefce.ac.uk/rsrch/oa/whatis/

To encourage all academic communities to consider open access publishing, Authors Alliance has produced a comprehensive ‘Understanding Open Access’ guide, which addresses common open-access questions and concerns and provides real-life strategies and tools that authors can use to work with publishers, institutions, and funders to make their works more widely accessible to all.

To access and download the guide, please follow this link – http://authorsalliance.org/wp-content/uploads/Documents/Guides/Authors%20Alliance%20-%20Understanding%20Open%20Access.pdf

For any other open access related queries, please do get in touch with Shelly Anne Stringer in RKEO.

There’s no such thing as a bad metric.

Lizzie Gadd warns against jumping on ‘bad metrics’ bandwagons without really engaging with the more complex responsible metrics agenda beneath.

An undoubted legacy of the Metric Tide report has been an increased focus on the responsible use of metrics, and along with this a notion of ‘bad metrics’. Indeed, the report itself even recommended awarding an annual ‘Bad Metrics Prize’. This has never been awarded as far as I’m aware, but nominations are still open on their web pages. There has been a lot of focus on responsible metrics recently. The Forum for Responsible Metrics has surveyed UK institutions and is reporting the findings on 8 February in London. DORA has upped its game, appointing a champion to promote its work, and seems to be regularly retweeting messages that remind us all of its take on what it means to do metrics responsibly. There are also frequent Twitter conversations about the impact of metrics in the upcoming REF. In all of this I see an increasing amount of ‘bad metrics’ bandwagon-hopping. The anti-Journal Impact Factor (JIF) wagon is now full, and its big sister, the “metrics are ruining science” wagon, is taking on supporters at a heady pace.

It looks to me like we have moved from a state of ignorance about metrics, to a little knowledge.  Which, I hear, is a dangerous thing.

It’s not a bad thing, this increased awareness of responsible metrics; all these conversations. I’m responsible metrics’ biggest supporter, and a regular slide in my slide-deck shouts ‘metrics can kill people!’. So why am I writing a blog post claiming that there is no such thing as a bad metric? Surely these things can kill people? Well, yes, but guns can also kill people; they just can’t do so unless they’re in the hands of a human. Similarly, metrics aren’t bad in and of themselves; it’s what we do with them that can make them dangerous.

In his book “Bibliometrics and Research Evaluation”, Yves Gingras defines the characteristics of a good indicator as follows:

  • Adequacy of the indicator for the object that it measures
  • Sensitivity to the intrinsic inertia of the object being measured
  • Homogeneity of the dimensions of the indicator.

So, you might have an indicator such as ‘shoe size’, where folks with feet of a certain length get assigned a certain shoe size. No problem there: it’s adequate (length of foot consistently maps onto shoe size); it’s sensitive to the thing it measures (foot grows, shoe size increases accordingly); and it’s homogeneous (one characteristic, length, leads to one indicator, shoe size). However, in research evaluation we struggle on all of these counts, because the thing we really want to measure, this elusive, multi-faceted “research quality”, doesn’t have any adequate, sensitive and homogeneous indicators. We need to measure the immeasurable. So we end up making false assumptions about the meanings of our indicators, and then make bad decisions based on those false assumptions. In all of this, it is not the metric that’s at fault; it’s us.

In my view, the JIF is the biggest scapegoat of the Responsible Metrics agenda.  The JIF is just the average number of cites per paper for a journal over two years.  That’s it.  A simple calculation. And as an indicator of the communication effectiveness of a journal for collection development purposes (the reason it was introduced) it served us well.  It’s just been misused as an indicator of the quality of individual academics and individual papers.  It wasn’t designed for that.  This is misuse of a metric, not a bad metric. (Although recent work has suggested that it’s not that bad an indicator for the latter anyway, but that’s not my purpose here).  If the JIF is a bad metric, so is Elsevier’s CiteScore which is based on EXACTLY the same principle but uses a three-year time window not two, a slightly different set of document types and journals, and makes itself freely available.
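To make that arithmetic concrete, here is a minimal sketch of the two-year calculation in Python. All journal figures below are invented for illustration; real JIFs are calculated by the database provider from its own citation data.

```python
# Minimal sketch of the Journal Impact Factor (JIF) arithmetic.
# The JIF for year Y is the number of citations received in year Y
# by items the journal published in years Y-1 and Y-2, divided by
# the number of citable items published in those two years.
# All figures below are invented for illustration.

citations_in_2018_to_2016_17_items = 1200  # hypothetical citation count
citable_items_2016 = 150                   # hypothetical article count
citable_items_2017 = 170                   # hypothetical article count

jif_2018 = citations_in_2018_to_2016_17_items / (
    citable_items_2016 + citable_items_2017
)
print(f"2018 JIF: {jif_2018:.2f}")  # 1200 / 320 = 3.75
```

CiteScore follows exactly the same average-cites-per-paper logic; only the window (three years rather than two) and the set of documents counted differ.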


I understand why DORA trumpets the misuse of JIFs; it is rife, and there are less imperfect tools for the job. But there are also other metrics that DORA doesn’t get in a flap about – like the individual h-index – which are subject to the same amount of misuse but are actually more damaging. The individual h-index disadvantages certain demographics more than others (women, early-career researchers, anyone with non-standard career lengths); at least the JIF mis-serves everyone equally. And whilst we’re at it, peer review can be an equally inadequate research evaluation tool (which, ironically, metrics have proven). So if we’re to be really fair, we should be campaigning for responsible peer review with as much vigour as our calls for responsible metrics.
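For readers unfamiliar with it, the h-index behind that critique is simple to compute: a researcher has index h if h of their papers have at least h citations each. A minimal Python sketch (with invented citation counts) shows why career length caps it:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still meets its 1-based rank
        else:
            break      # counts only fall from here on
    return h

# Invented examples: the index can never exceed the number of papers,
# so a short (e.g. early-career) publication record is capped low no
# matter how well cited each individual paper is.
print(h_index([25, 18, 12, 7, 6, 4, 2]))  # -> 5 (longer career)
print(h_index([90, 40, 30]))              # -> 3 (three highly cited papers)
```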

Image: Bumper stickers by Paul van der Werf (CC-BY)


A little knowledge can lead to a bumper-sticker culture (“I HEART DORA” anyone? “Ban the JIF”?) which could move us away from, rather than towards, the responsible use of metrics. These concepts are easy to grasp hold of, but they mask a far more complex and challenging set of research evaluation problems beneath. The responsible use of metrics is about more than the avoidance of certain indicators, or signing DORA, or even developing your own bespoke Responsible Metrics policy (as I’ve said before, this is certainly easier said than done).

The responsible use of metrics requires responsible scientometricians: people who understand that there is really no such thing as a bad metric, but that it is very possible to misuse one; people with a deeper level of understanding of what we are trying to measure, what the systemic effects of measuring it might be, what indicators are available, what their limitations are, where they are appropriate, and how best to triangulate them with peer review. We have good guidance on this in the form of the Leiden Manifesto, the Metric Tide and DORA. However, these are the starting points of often painful responsible-metrics journeys, not easy-ride bandwagons to be jumped on. If we’re not careful, I fear that in a hugely ironic turn, DORA and the Leiden Manifesto might themselves become bad (misused) metrics: an unreliable indicator of a commitment to the responsible use of metrics that may or may not be there in practice.

Let’s get off the ‘metric-shaming’ bandwagons, deepen our understanding and press on with the hard work of responsible research evaluation.

Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in libraries and scholarly communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.


Original content posted on The Bibliomagician, reposted here with permission. Content is licensed under a Creative Commons Attribution 4.0 International License.