
Open Research Day – Today!

Today colleagues will be available on both campuses to answer all your queries regarding Open Research.

We’ll be in BG11 on Lansdowne between 9am and 12pm, and in FG04 on Talbot between 1pm and 4pm.

Pop on down… there is cake! 🙂


BRIAN Downtime – Monday & Tuesday

BRIAN will be unavailable to users on Monday 30th April and Tuesday 1st May for a scheduled upgrade.

All relevant guidance notes and video guides on the Staff Intranet will be updated in due course. If you need any help using the new system or if you encounter any problems after the upgrade, please do send an email to BRIAN@bournemouth.ac.uk and a member of staff will be able to assist you.

BRIAN training sessions are also available every two months and are bookable through Organisational Development. The next session scheduled is:

  • Wednesday 20th June 2018

In the meantime, if you do have queries relating to the upgrade, please get in touch with BRIAN@bournemouth.ac.uk.

BRIAN Upgrade – Next Week!

BRIAN will be upgrading to a new version next week, and so will be inaccessible to users on Monday 30th April and Tuesday 1st May.
The main improvement in this upgrade is the introduction of a new Assessment module to enable more efficient REF preparation. We also hope to introduce more user-friendly reporting over the next few months.


What is Open Access?

Open access is about making the products of research freely accessible to all. It allows research to be disseminated quickly and widely, enables the research process to operate more efficiently, and increases the use and understanding of research by business, government, charities and the wider public.

There are two complementary mechanisms for achieving open access to research.

The first mechanism is for authors to publish in open-access journals that do not receive income through reader subscriptions.

The second is for authors to deposit their refereed journal article in an open electronic archive.

These two mechanisms are often called the ‘gold’ and ‘green’ routes to open access:

  • Gold – This means publishing in a way that allows immediate access to everyone electronically and free of charge. Publishers can recoup their costs through a number of mechanisms, including through payments from authors called article processing charges (APCs), or through advertising, donations or other subsidies.
  • Green – This means depositing the final peer-reviewed research output in an electronic archive called a repository. Repositories can be run by the researcher’s institution, but shared or subject repositories are also commonly used. Access to the research output can be granted either immediately or after an agreed embargo period.

Article first published – http://www.hefce.ac.uk/rsrch/oa/whatis/

To encourage all academic communities to consider open access publishing, Authors Alliance has produced a comprehensive ‘Understanding Open Access’ guide, which addresses common open access questions and concerns and provides real-life strategies and tools that authors can use to work with publishers, institutions, and funders to make their works more widely accessible.

To access and download the guide, please follow this link – http://authorsalliance.org/wp-content/uploads/Documents/Guides/Authors%20Alliance%20-%20Understanding%20Open%20Access.pdf

For any other open access related queries, please do get in touch with Shelly Anne Stringer in RKEO.

There’s no such thing as a bad metric.

Lizzie Gadd warns against jumping on ‘bad metrics’ bandwagons without really engaging with the more complex responsible metrics agenda beneath.

An undoubted legacy of the Metric Tide report has been an increased focus on the responsible use of metrics, and along with this a notion of ‘bad metrics’. Indeed, the report itself even recommended awarding an annual ‘Bad Metrics Prize’. This has never been awarded as far as I’m aware, but nominations are still open on their web pages. There has been a lot of focus on responsible metrics recently. The Forum for Responsible Metrics has surveyed UK institutions and is reporting the findings on 8 February in London. DORA has upped its game, appointing a champion to promote its work, and seems to be regularly retweeting messages that remind us all of its take on what it means to do metrics responsibly. There are also frequent Twitter conversations about the impact of metrics in the upcoming REF. In all of this I see an increasing amount of ‘bad metrics’ bandwagon-hopping. The anti-Journal Impact Factor (JIF) wagon is now full, and its big sister, the “metrics are ruining science” wagon, is taking on supporters at a heady pace.


It’s not a bad thing, this increased awareness of responsible metrics; all these conversations. I’m responsible metrics’ biggest supporter, and a regular slide in my slide-deck shouts ‘metrics can kill people!’. So why am I writing a blog post that claims there is no such thing as a bad metric? Surely these things can kill people? Well, yes, but guns can also kill people; they just can’t do so unless they’re in the hands of a human. Similarly, metrics aren’t bad in and of themselves; it’s what we do with them that can make them dangerous.

In his book “Bibliometrics and Research Evaluation”, Yves Gingras defines the characteristics of a good indicator as follows:

  • Adequacy of the indicator for the object that it measures
  • Sensitivity to the intrinsic inertia of the object being measured
  • Homogeneity of the dimensions of the indicator.

So, you might have an indicator such as ‘shoe size’, where folks with feet of a certain length get assigned a certain shoe size. No problem there: it’s adequate (length of foot consistently maps on to shoe size); it’s sensitive to the thing it measures (foot grows, shoe size increases accordingly); and it’s homogeneous (one characteristic, length, leads to one indicator, shoe size). However, in research evaluation we struggle on all of these counts, because the thing we really want to measure, this elusive, multi-faceted “research quality”, doesn’t have any adequate, sensitive and homogeneous indicators. We need to measure the immeasurable. So we end up making false assumptions about the meanings of our indicators, and then make bad decisions based on those false assumptions. In all of this, it is not the metric that’s at fault; it’s us.

In my view, the JIF is the biggest scapegoat of the responsible metrics agenda. The JIF is just the average number of cites per paper for a journal over two years. That’s it. A simple calculation. And as an indicator of the communication effectiveness of a journal for collection development purposes (the reason it was introduced), it served us well. It has simply been misused as an indicator of the quality of individual academics and individual papers, something it wasn’t designed for. This is misuse of a metric, not a bad metric. (Recent work has suggested that it’s not that bad an indicator of the latter anyway, but that’s not my purpose here.) If the JIF is a bad metric, so is Elsevier’s CiteScore, which is based on EXACTLY the same principle but uses a three-year time window rather than two, a slightly different set of document types and journals, and makes itself freely available.
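To show just how simple that calculation is, here is a minimal sketch in Python. The function name and all figures are invented for illustration; real JIFs are of course computed from Web of Science citation data, not by hand.

```python
# Illustrative sketch of the two-year Journal Impact Factor calculation.
# All figures are hypothetical, not real journal data.

def journal_impact_factor(citations, citable_items):
    """JIF for year Y: citations received in Y to items the journal
    published in Y-1 and Y-2, divided by the number of citable items
    it published in Y-1 and Y-2."""
    return citations / citable_items

# A journal that published 200 citable items over the previous two
# years, and whose items were cited 500 times this year:
print(journal_impact_factor(500, 200))  # 2.5
```

CiteScore has exactly the same shape; only the window (three years) and the set of counted documents differ.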


I understand why DORA trumpets the misuse of JIFs; it is rife, and there are less imperfect tools for the job. But there are other metrics that DORA doesn’t get in a flap about – like the individual h-index – which are subject to the same amount of misuse but are actually more damaging. The individual h-index disadvantages certain demographics more than others (women, early-career researchers, anyone with non-standard career lengths); at least the JIF mis-serves everyone equally. And whilst we’re at it, peer review can be an equally inadequate research evaluation tool (which, ironically, metrics have shown). So if we’re to be really fair, we should be campaigning for responsible peer review with as much vigour as our calls for responsible metrics.
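To make the career-length point concrete, here is a minimal sketch of the h-index calculation (the citation counts are invented). An author’s h-index is the largest h such that h of their papers each have at least h citations, so it can never exceed the number of papers published – which is why a short or interrupted publication record caps it regardless of the quality of the work:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# An established author with a long tail of cited papers:
print(h_index([45, 30, 12, 9, 7, 7, 4, 2, 1]))  # 6
# An early-career researcher with two highly cited papers:
# h is capped at 2 by record length alone.
print(h_index([120, 80]))  # 2
```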

Bumper stickers by Paul van der Werf (CC-BY)

 

It looks to me like we have moved from a state of ignorance about metrics to a little knowledge. Which, I hear, is a dangerous thing. A little knowledge can lead to a bumper-sticker culture (“I HEART DORA”, anyone? “Ban the JIF”?) which could move us away from, rather than towards, the responsible use of metrics. These concepts are easy to grasp hold of, but they mask a far more complex and challenging set of research evaluation problems that lie beneath. The responsible use of metrics is about more than the avoidance of certain indicators, or signing DORA, or even developing your own bespoke responsible metrics policy (as I’ve said before, this is easier said than done).

The responsible use of metrics requires responsible scientometricians: people who understand that there is really no such thing as a bad metric, but that it is very possible to misuse them; people with a deeper level of understanding of what we are trying to measure, what the systemic effects of measuring it might be, what indicators are available, what their limitations are, where they are appropriate, and how best to triangulate them with peer review. We have good guidance on this in the form of the Leiden Manifesto, the Metric Tide and DORA. However, these are the starting points of often painful responsible-metrics journeys, not easy-ride bandwagons to be jumped on. If we’re not careful, I fear that in a hugely ironic turn, DORA and the Leiden Manifesto might themselves become bad (misused) metrics: an unreliable indicator of a commitment to the responsible use of metrics that may or may not be there in practice.

Let’s get off the ‘metric-shaming’ bandwagons, deepen our understanding and press on with the hard work of responsible research evaluation.

 


Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the co-founder of the Lis-Bibliometrics Forum and the ARMA Metrics Special Interest Group Champion.

 

 

Original content posted on The Bibliomagician, reposted here with permission. Content is licensed under a Creative Commons Attribution 4.0 International License.

Emerald removes embargo period on all journal articles in open access repositories

Emerald has today, 26th September 2017, removed the embargo period on all Green open access. Author accepted manuscripts (AAMs, or postprints) of journal articles held in open access repositories such as BURO will now be available on publication. This applies not only to articles deposited from today, but also to any Emerald publications currently under embargo in repositories.

Emerald Group Publishing

This is a huge advance for open access, as Emerald had previously extended their embargo periods in response to the RCUK/Finch statements on embargo periods and green open access.

Freshwater Taxonomy success!

Last week (11-15 September 2017) saw the successful delivery of the NERC-funded Advanced Training Course Freshwater Taxonomy and Field Identification Skills, awarded to Professor Genoveva F. Esteban (SciTech, Department of Life and Environmental Sciences) in collaboration with the Freshwater Biological Association (https://www.fba.org.uk/fba). The course is free for PhD students and early-career researchers. With a strong emphasis on training excellence and practical hands-on experience, this short course offers expert tuition in freshwater fieldwork, taxonomy, and freshwater science. It provided in-depth training on the well-established use of macro-invertebrates as the core component of freshwater bio-assessment, and also included specific training in field and laboratory methods for investigating and identifying microscopic organisms such as diatoms, meiofauna and protists. The participants’ feedback was outstanding; Davina Hill from the University of Cumbria tweeted: “Thanks for a fascinating and inspiring course in Freshwater Taxonomy. Recommended!”

The course will also be delivered in 2018 (dates to be confirmed). Please contact Genoveva F. Esteban gesteban@bournemouth.ac.uk for further information. Photograph courtesy of Hai Luu.

Tenth anniversary PLOS ONE

On the tenth anniversary of the international Open Access journal PLOS ONE we received an email to inform us that one of our articles was among the top ten per cent of most cited articles in this journal.  The email referred to our paper ‘Factors influencing adherence to antiretroviral treatment in Nepal: A mixed-methods study’ [1].  Not bad considering that PLOS ONE has published over 4,300 articles since its inception.


Prof. Edwin van Teijlingen