Category / Research assessment

There’s no such thing as a bad metric.

Lizzie Gadd warns against jumping on ‘bad metrics’ bandwagons without really engaging with the more complex responsible metrics agenda beneath.

An undoubted legacy of the Metric Tide report has been an increased focus on the responsible use of metrics and, along with this, a notion of ‘bad metrics’.  Indeed, the report itself recommended awarding an annual ‘Bad Metrics Prize’.  This has never been awarded as far as I’m aware, but nominations are still open on their web pages.  There has been a lot of focus on responsible metrics recently.  The Forum for Responsible Metrics has surveyed UK institutions and is reporting the findings on 8 February in London.  DORA has upped its game and appointed a champion to promote its work, and it regularly retweets messages reminding us all of its take on what it means to do metrics responsibly.   There are also frequent Twitter conversations about the impact of metrics in the upcoming REF.  In all of this I see an increasing amount of ‘bad metrics’ bandwagon-hopping.  The anti-Journal Impact Factor (JIF) wagon is now full, and its big sister, the “metrics are ruining science” wagon, is taking on supporters at a heady pace.


It’s not a bad thing, this increased awareness of responsible metrics, all these conversations.  I’m responsible metrics’ biggest supporter, and a regular slide in my slide-deck shouts ‘metrics can kill people!’.  So why am I writing a blog post claiming that there is no such thing as a bad metric?  Surely these things can kill people? Well, yes, but guns can also kill people; they just can’t do so unless they’re in the hands of a human.  Similarly, metrics aren’t bad in and of themselves; it’s what we do with them that can make them dangerous.

In his book Bibliometrics and Research Evaluation, Yves Gingras defines the characteristics of a good indicator as follows:

  • Adequacy of the indicator for the object that it measures
  • Sensitivity to the intrinsic inertia of the object being measured
  • Homogeneity of the dimensions of the indicator.

So, you might have an indicator such as ‘shoe size’, where folks with feet of a certain length get assigned a certain shoe size. No problem there: it’s adequate (length of foot consistently maps on to shoe size); it’s sensitive to the thing it measures (foot grows, shoe size increases accordingly); and it’s homogeneous (one characteristic, length, leads to one indicator, shoe size).  However, in research evaluation we struggle on all of these counts, because the thing we really want to measure, this elusive, multi-faceted “research quality” thing, doesn’t have any adequate, sensitive and homogeneous indicators. We need to measure the immeasurable. So we end up making false assumptions about the meanings of our indicators, and then make bad decisions based on those false assumptions.  In all of this, it is not the metric that’s at fault, it’s us.

In my view, the JIF is the biggest scapegoat of the Responsible Metrics agenda.  The JIF is just the average number of citations per paper for a journal over two years.  That’s it.  A simple calculation. And as an indicator of the communication effectiveness of a journal for collection development purposes (the reason it was introduced) it served us well.  It’s just been misused as an indicator of the quality of individual academics and individual papers.  It wasn’t designed for that.  This is misuse of a metric, not a bad metric. (Although recent work has suggested that it’s not that bad an indicator for the latter anyway, but that’s not my purpose here.)  If the JIF is a bad metric, so is Elsevier’s CiteScore, which is based on EXACTLY the same principle but uses a three-year window rather than two, a slightly different set of document types and journals, and makes itself freely available.
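To underline how simple that calculation is, here is a minimal sketch of it in code (the citation counts are hypothetical, for illustration only); CiteScore follows the same shape with a three-year window:

```python
# Sketch of the two-year Journal Impact Factor (JIF) calculation.
# All numbers below are hypothetical, not real journal data.

def impact_factor(citations, citable_items):
    """Citations received this year to items the journal published in the
    previous two years, divided by the number of citable items in those years."""
    return citations / citable_items

# A journal whose previous two volumes held 240 citable items, drawing
# 600 citations this year, has an impact factor of 2.5:
print(impact_factor(600, 240))  # 2.5
```

Nothing in that arithmetic says anything about any individual paper or author, which is the point: the misuse lies in the application, not the formula.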


I understand why DORA trumpets the misuse of JIFs; it is rife and there are less imperfect tools for the job. But there are also other metrics that DORA doesn’t get in a flap about – like the individual h-index – which are subject to the same amount of misuse, but are actually more damaging.  The individual h-index disadvantages certain demographics more than others (women, early-career researchers, anyone with non-standard career lengths); at least the JIF mis-serves everyone equally.  And whilst we’re at it, peer review can be an equally inadequate research evaluation tool (which, ironically, metrics have proven). So if we’re to be really fair, we should be campaigning for responsible peer review with as much vigour as our calls for responsible metrics.
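That career-length bias is visible in the definition itself. The sketch below (hypothetical citation counts) computes the h-index and shows that it can never exceed the number of papers, so anyone with a shorter or interrupted career is capped regardless of how heavily cited their work is:

```python
def h_index(citation_counts):
    """Largest h such that h of the papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# A longer career with steadily cited papers...
print(h_index([10, 9, 8, 7, 5, 4, 3]))  # 5
# ...outscores a short career of far more heavily cited work:
print(h_index([120, 90, 60]))           # 3
```

The second researcher's three papers dwarf the first's on citations, yet their h-index is lower simply because they have fewer papers.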

Bumper stickers by Paul van der Werf (CC-BY)

 

It looks to me like we have moved from a state of ignorance about metrics to a little knowledge.  Which, I hear, is a dangerous thing.  A little knowledge can lead to a bumper sticker culture (“I HEART DORA” anyone? “Ban the JIF”?) which could move us away from, rather than towards, the responsible use of metrics. These concepts are easy to grasp hold of, but they mask a far more complex and challenging set of research evaluation problems that lie beneath.  The responsible use of metrics is about more than the avoidance of certain indicators, or signing DORA, or even developing your own bespoke Responsible Metrics policy (as I’ve said before, this is certainly easier said than done).

The responsible use of metrics requires responsible scientometricians.  People who understand that there is really no such thing as a bad metric, but it is very possible to misuse them. People with a deeper level of understanding about what we are trying to measure, what the systemic effects of this might be, what indicators are available, what their limitations are, where they are appropriate, how they can best triangulate them with peer review.  We have good guidance on this in the form of the Leiden Manifesto, the Metric Tide and DORA.  However, these are the starting points of often painful responsible metric journeys, not easy-ride bandwagons to be jumped on.  If we’re not careful, I fear that in a hugely ironic turn, DORA and the Leiden Manifesto might themselves become bad (misused) metrics: an unreliable indicator of a commitment to the responsible use of metrics that may or may not be there in practice.

Let’s get off the ‘metric-shaming’ bandwagons, deepen our understanding and press on with the hard work of responsible research evaluation.

 


Elizabeth Gadd

Elizabeth Gadd is the Research Policy Manager (Publications) at Loughborough University. She has a background in Libraries and Scholarly Communication research. She is the co-founder of the Lis-Bibliometrics Forum and is the ARMA Metrics Special Interest Group Champion.

 

 

Original content posted on The Bibliomagician, reposted here with permission. Content is licensed under a Creative Commons Attribution 4.0 International License.

REF 2021 workshops – what makes a 2*, 3* or 4* output?

 

We have a series of externally-facilitated REF outputs workshops scheduled to take place in early 2018 as part of the RKE Development Framework. Each session is led by a REF 2014 sub-panel member who will explain how the panel interpreted and applied the REF 2014 guidance when assessing the quality of outputs. The workshops are open to all academic staff.

The expected learning outcomes from the workshops are for attendees to:

  • Gain insight into how the REF panels applied the REF criteria when considering the significance, rigour and originality of outputs;
  • Understand the differences between outputs scored 4*, 3*, 2*, 1* and Unclassified;
  • Gain insight into what is meant by ‘world leading’ and ‘internationally excellent’;
  • Understand how scores for borderline cases were agreed and what the tipping points were, either to break the ceiling into the higher star level or to hold an output back a star level;
  • Understand how panels used other information such as metrics, markers of journal quality or prior knowledge in output assessment;
  • Gain insight into how future outputs could be strengthened for REF2021.

Workshops scheduled so far are:

  • UOA 2/3 – Prof Dame Jill Macleod Clark – 15 March 2018
  • UOA 11 – Prof Iain Stewart – 29 January 2018
  • UOA 14 – Prof Jon Sadler – 2 February 2018
  • UOA 15 – Prof Graeme Barker – 7 February 2018
  • UOA 17 – Prof Terry Williams – 17 January 2018
  • UOA 18 – TBC
  • UOA 23 – Prof Jane Seale – 26 January 2018
  • UOA 24 (Sport & Exercise Science) – Prof John Horne – date TBC
  • UOA 24 (Leisure and Tourism) – Prof Gareth Shaw – date TBC
  • UOA 11/32 (Computer Animation) – Prof Anthony Steed – 31 January 2018
  • UOA 32/34 (Practice-based) – Prof Stephen Partridge – date TBC
  • UOA 34 – Prof Peter Lunt – date TBC

Please see the full range of REF-related workshops on the staff intranet.

HE policy update for the w/e 8th December 2017

Fees and funding – the latest developments

The fees and funding discussions continue with some interesting developments this week.

Firstly, there was a written response by the Minister, responding to the Resolution of the House on 13th September 2017 on tuition fees (the non-binding one that was essentially passed unanimously because no Conservative MPs attended). The statement included a few important points and some hints:

  • Maximum grants and loans for living and other costs will be increased by forecast inflation (3.2%) in 2018/19.
  • For the first time, students starting part-time degree level courses from 1 August 2018 onwards will qualify for loans for living costs.
  • “I expect to lay regulations implementing changes to student finance for undergraduates and postgraduates for 2018/19 early in 2018. These regulations will be subject to Parliamentary scrutiny.”
  • The Department of Health will be making a separate announcement on changes to student finance for postgraduate healthcare students and dental hygiene and dental therapy students in 2018/19.


Fake conferences are not fake news: beware predatory conferences

Introduction

Academics have been warned for a decade about predatory Open Access publishers (van Teijlingen 2014). These are commercial organisations charging academics a publication fee on submission of their manuscripts with a promise to publish their work quickly online. The problem is twofold: first, these commercial organisations don’t offer proper peer review and editorial quality assurance; and secondly, academics are being tricked into believing the journal is a legitimate scientific publication.  The second author receives on average six to eight invitations a week to publish in this kind of predatory journal – see below for examples. The first author, despite not having worked in an academic institution for over three years, still receives such invitations to publish in ‘Journal X’.

Predatory conferences

A similar phenomenon to predatory journals is the predatory conference (Moital 2014; Nobes 2017; Grove 2017). These are pretend academic conferences of questionable value, established first and foremost to make money, not for the greater good of the academic discipline.

Both authors have received bogus as well as legitimate invitations to attend conferences. A predicament with such an invitation, which 99% of the time arrives by email, is that it is not easy to distinguish between fake and real offers. For example, the first author recently received an offer (at short notice) to attend a conference in Miami in November 2017 (see below). This was on the back of an editorial he had published a couple of months earlier. For a career researcher going from contract to contract, the appeal of being invited to present a keynote at a conference can be flattering: an honour and a boost for one’s career. Therefore, while ‘if it seems too good to be true, it probably is’ is a prudent maxim to hold, there is also a temptation to follow through.

The author replied to the request, querying the reason for the out-of-the-blue invite. The answer was less than convincing, and a swift email from the author saying “Don’t tell me… You are offering me a keynote with travel and accommodation… Lol!!” called their bluff and ended the correspondence.

But digging a little deeper he found there was a webpage dedicated to taking payments to attend the conference. In the digital world, a fool can be easily and quickly separated from his or her money.

Of course, it may have been a real conference at a real venue, and they really wanted him to speak. But discerning this is not easy at first…

Some of the warning signs/What to look out for

  • The conference email invitation looks very convincing (if not, don’t even read it!).
  • The venue is in a good tourist location; as Nobes (2017) highlighted, “the organizers are more interested in marketing the tourist destination rather than the academic value of the conference”.
  • The conference covers too many different aspects or topics, as if the advert is designed to catch the eye of as many people as possible who are vaguely connected to the discipline.
  • Mentions of associated predatory journals and ‘important’ organisations in the discipline.
  • Email addresses and bank accounts that don’t look professional or official.
  • Little mention of attendance fees, but after acceptance, emails demanding a high conference fee and other charges.
  • Conference organisers who are not academics, or are unknown names.
  • The conference does not peer-review submissions or provide proper editorial control over presentations.
  • Signs of copying of the names of existing academic conferences or scientific organisations, and even copying of their webpages.
  • Even more advertising than normal at a scientific conference.

Furthermore, Andy Nobes (2017) has offered some helpful advice on judging the quality of conference websites. Andy is based at AuthorAID, a global network providing support, mentoring, resources and training for researchers in developing countries.

Who is at risk of falling for predatory conferences?

Academics need to be aware of money-making conferences and meetings without a true commitment to science, but some academics might be more at risk than others. Young researchers, PhD students and fledgling academics living from contract to contract may feel any conference attendance is a potential career boost. Thus, such an invitation might seem flattering: an opportunity too good to miss, and a way to show that he or she is a capable and independent academic.

Final thoughts

Most academics go to conferences for a combination of presenting their work to get critical feedback, making new contacts, sharing ideas and being inspired. With such a broad combination of motivating factors, the exact purpose of conferences is difficult to ascertain because there is no a priori agreed role and value of conferences (Nicolson 2017a). However, there is evidence that academic conferences function to facilitate commodity transactions, be that knowledge, tools, skills, reputations, or connections, which reflects the neoliberal ethos in the modern academy (Nicolson 2017b). The predatory conference can be viewed in this light, where academia is more and more focused on generating revenue. It is at best scurrilous, and at worst criminal, for organisations to make money using such a confidence trick.  Always check which conferences are organised and advertised by recognised scholarly organisations in your own discipline. If uncertain, ask a more experienced academic, a senior colleague or a mentor.

 

 

Donald J. Nicolson

(Health Services Researcher, NHS Fife, and Independent Scholar; twitter @_mopster )

Edwin R. van Teijlingen

(Centre for Midwifery, Maternal & Perinatal Health)

 

References:

Moital, M. (2014) Ten Signs of a Bogus/Fake Conference.

Grove, J. (2017) Predatory conferences ‘now outnumber official scholarly events’  (26th Oct.)

Nicolson, D.J. (2017a) Do conference presentations impact beyond the conference venue? Journal of Research in Nursing. 22(5), pp.422-425.

Nicolson, D.J. (2017b) Academic Conferences as Neoliberal Commodities, Palgrave Macmillan

Nobes, A. (2017) What are ‘predatory’ conferences and how can I avoid them?

van Teijlingen, E. (2014) Beware of rogue journals.

 

REF 2021 workshops – what makes a 2*, 3* or 4* output?

We have a series of externally-facilitated REF outputs workshops scheduled to take place in early 2018 as part of the RKE Development Framework. Each session is led by a REF 2014 sub-panel member who will explain how the panel interpreted and applied the REF 2014 guidance when assessing the quality of outputs. The workshops are open to all academic staff.

The expected learning outcomes from the workshops are for attendees to:

  • Gain insight into how the REF panels applied the REF criteria when considering the significance, rigour and originality of outputs;
  • Understand the differences between outputs scored 4*, 3*, 2*, 1* and Unclassified;
  • Gain insight into what is meant by ‘world leading’ and ‘internationally excellent’;
  • Understand how scores for borderline cases were agreed and what the tipping points were, either to break the ceiling into the higher star level or to hold an output back a star level;
  • Understand how panels used other information such as metrics, markers of journal quality or prior knowledge in output assessment;
  • Gain insight into how future outputs could be strengthened for REF2021.

 

Workshops scheduled so far are:

  • UOA 2/3 – Prof Dame Jill Macleod Clark – 15 March 2018
  • UOA 4 – Prof Marion Hetherington – 10 January 2018
  • UOA 11 – Prof Iain Stewart – 29 January 2018
  • UOA 12 – Prof Chris Chatwin – 8 January 2018
  • UOA 14 – Prof Jon Sadler – 11 January 2018
  • UOA 15 – Prof Graeme Barker – 7 February 2018
  • UOA 17 – Prof Terry Williams – 17 January 2018
  • UOA 18 – tbc
  • UOA 20/21 – Prof Imogen Taylor – 15 January 2018
  • UOA 23 – Prof Jane Seale – 26 January 2018
  • UOA 24 – tbc
  • UOA 27 – Prof Pat Waugh – 16 January 2018
  • UOA 11/32 (computer animation) – Prof Anthony Steed – 31 January 2018
  • UOA 32/34 (practice-based) – Prof Stephen Partridge – date tbc
  • UOA 36 – Prof Peter Lunt – date tbc

Bookings for these can be made via the Staff Intranet: https://staffintranet.bournemouth.ac.uk/workingatbu/staffdevelopmentandengagement/fusiondevelopment/fusionprogrammesandevents/rkedevelopmentframework/researchexcellenceframework/

REF 2021 – final decisions published

HEFCE kept their word and published the final decisions on REF 2021 in the autumn. Having issued the initial decisions on the Research Excellence Framework 2021 in September, HEFCE released its further decisions on staff and outputs on 21 November 2017.  These decisions have been informed by responses to key questions relating to staff and outputs, and by a survey of staff in scope for submission.  This blog post provides a summary of the key decisions.

 

Submitting staff:

Previous REF/RAE exercises asked institutions to select staff for submission. The Stern Review in 2016 recognised how divisive this practice was and instead recommended that all research-active staff be returned to the next REF. HEFCE are implementing this recommendation by expecting all staff with a ‘significant responsibility for research’ to be submitted, provided they are ‘independent researchers’. What do these terms mean in practice? The HEFCE definition is:

“those for whom explicit time and resources are made available to engage actively in independent research, and that is an expectation of their job role. Research is defined as a process of investigation leading to new insights, effectively shared. Staff engaged exclusively in scholarship would not be considered to have a significant responsibility for research.”

Working with the REF 2021 main panels, HEFCE will provide further guidance on identifying staff with significant responsibility. This will be published in the guidance on submissions and panel criteria. This guidance will not prescribe a fixed set of criteria that all staff would be required to meet, but will set out a ‘menu’ of what HEFCE would consider may be appropriate indicators of significant responsibility.

 

Recognising that there are staff who have more significant responsibility for other activities, HEFCE will implement an approach whereby institutions, working with their staff and with guidelines, identify who is in scope for submission among staff meeting core eligibility criteria. HEFCE has defined the core eligibility criteria as:

‘Category A eligible’

  • academic staff with a contract of employment of ≥0.2 FTE
  • on the payroll of the submitting institution on the census date (31 July 2020)
  • whose primary employment function is to undertake either ‘research only’ (independent researchers only) or ‘teaching and research’
  • have a substantive connection with the submitting institution (i.e. BU)
  • for staff on ‘research only’ contracts, the eligible pool should only include those who are independent researchers, not research assistants

‘Category A submitted’ describes the staff from the ‘Category A eligible’ pool who have been identified as having significant responsibility for research on the census date.

Where the ‘Category A eligible’ staff definition accurately identifies all staff in the submitting unit with significant responsibility for research, the unit should submit 100% of staff. Where it does not accurately identify all staff in the submitting unit who have significant responsibility for research, institutions will need to implement processes to determine this and document this in a code of practice, approved by the relevant funding body with advice from the Equality and Diversity Advisory Panel (EDAP).

 

Submitting outputs:

  • The average number of outputs required per submitted FTE will be 2.5 (up from 2 outputs as previously suggested by HEFCE).
  • A minimum of one output will be required for each staff member employed on the census date (as expected).
  • A maximum of five outputs may be attributed to individual staff members (including those who have left) (down from 6 outputs as previously suggested by HEFCE).
  • Data on the distribution of outputs across staff in the unit, including staff who have left, will be provided to the sub-panels for consideration in relation to the assessment of the environment.
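As a rough sketch of how the first three rules interact (treating everyone as employed on the census date; rounding the total to the nearest whole output is my assumption, as the official guidance defines the exact rule), a unit could sanity-check a draft allocation like this:

```python
def allocation_ok(outputs_per_person, unit_fte):
    """Check a draft output allocation against the rules above:
    a total of 2.5 outputs per submitted FTE, with at least one and
    at most five outputs attributed to each staff member.
    Rounding the required total is an assumption, not the official rule."""
    required = round(2.5 * unit_fte)
    return (sum(outputs_per_person) == required
            and all(1 <= n <= 5 for n in outputs_per_person))

# Ten staff at 1.0 FTE need 25 outputs in total:
print(allocation_ok([3, 3, 3, 3, 3, 2, 2, 2, 2, 2], 10.0))  # True
print(allocation_ok([5, 5, 5, 5, 5, 0, 0, 0, 0, 0], 10.0))  # False (minimum of one each)
```

The second allocation fails even though the total is right, because five staff would be submitted with no outputs at all.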

Output portability: A transitional approach is being adopted whereby outputs may be submitted by both the institution employing the staff member on the census date and the originating institution where the staff member was previously employed when the output was demonstrably generated. ‘Demonstrably generated’ will be determined by the date when the output was first made publicly available. This applies to the whole REF 2021 period.

Open access: The REF Open Access policy will be implemented as previously set out. This requires outputs within the scope of the policy (journal articles and some conference proceedings) to be deposited as soon after the point of acceptance as possible, and no later than three months after this date, from 1 April 2018. Due to concerns around deposit on acceptance, a deposit exemption will be introduced from 1 April 2018 and remain in place for the rest of the REF 2021 publication period. This will allow outputs unable to meet this deposit timescale to remain compliant if they are deposited up to three months after the date of publication.

 

Number of impact case studies required

Submissions will include a total of one case study, plus one further case study per up to 15 FTE staff submitted, for the first 105 FTE staff returned (with a reduced requirement above this of one additional case study per 50 FTE staff). Submissions will therefore include a minimum of two case studies.
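On one reading of that formula (the banding interpretation below is mine; the official guidance is authoritative), the required number of case studies for a given FTE count works out as:

```python
import math

def case_studies_required(fte):
    """One case study, plus one per (up to) 15 FTE for the first 105 FTE,
    plus one per (up to) 50 FTE above that. Banding interpretation is an
    assumption based on the summary above, not the official guidance text."""
    first_band = min(fte, 105)
    above = max(fte - 105, 0)
    return 1 + math.ceil(first_band / 15) + math.ceil(above / 50)

print(case_studies_required(10))   # 2  (the stated minimum of two)
print(case_studies_required(105))  # 8
print(case_studies_required(160))  # 10
```

Note how the minimum of two case studies falls out naturally: any unit with at least one submitted FTE gets the base case study plus at least one from the first band.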

 

For the latest information, see our REF 2021 webpage.

Have you been involved with an event designed for the external community?

Then we want to hear from you!

The University is currently compiling the data for the annual Higher Education – Business & Community Interaction survey (HE-BCI) due to be submitted to HESA shortly. The data returned is used to calculate our HEIF grant.

We are asked to submit details of social, cultural and community events designed for the external community (including both free and chargeable events) which took place between 1 August 2016 and 31 July 2017.


Event types that should be returned include, but are not limited to:

  • public lectures
  • performance arts (dance, drama, music, etc)
  • exhibitions
  • museum education
  • events for schools and community groups
  • business breakfasts

We cannot return events such as open days, Student Union activity, commercial conferences, etc.

All events that we ran as part of the Festival of Learning, ESRC Festival of Social Science and Cafe Scientifique series are likely to be eligible for inclusion and we will collate this information on your behalf centrally.

If you have been involved with any other event which could be returned, please could you let your contact (see below) know the event name and date, whether it was free or chargeable, the estimated number of attendees, and an estimate of how much academic time was spent preparing for (but not delivering) the event:

  • SciTech – Norman Stock
  • FoM – Rob Hydon
  • HSS – Tanya Richardson
  • FMC – Mark Brocklehurst
  • Professional Service – Rebecca Edwards (RKEO)

The data returned is used by HEFCE to allocate the HEIF funding so it is important that we return as accurate a picture as possible.

REF 2021 workshops – what makes a 2*, 3* or 4* output?

We have a series of externally-facilitated REF outputs workshops scheduled to take place in early 2018 as part of the RKE Development Framework. Each session is led by a REF 2014 sub-panel member who will explain how the panel interpreted and applied the REF 2014 guidance when assessing the quality of outputs. The workshops are open to all academic staff.

The expected learning outcomes from the workshops are for attendees to:

  • Gain insight into how the REF panels applied the REF criteria when considering the significance, rigour and originality of outputs;
  • Understand the differences between outputs scored 4*, 3*, 2*, 1* and Unclassified;
  • Gain insight into what is meant by ‘world leading’ and ‘internationally excellent’;
  • Understand how scores for borderline cases were agreed and what the tipping points were, either to break the ceiling into the higher star level or to hold an output back a star level;
  • Understand how panels used other information such as metrics, markers of journal quality or prior knowledge in output assessment;
  • Gain insight into how future outputs could be strengthened for REF2021.

 

We’ve got dates for half of the UOAs so far:

  • UOA 2/3 – Prof Dame Jill Macleod Clark – date tbc (likely to be mid to late February 2018)
  • UOA 4 – Prof Marion Hetherington – 10 January 2018
  • UOA 11 – Prof Iain Stewart – 29 January 2018
  • UOA 12 – Prof Chris Chatwin – 8 January 2018
  • UOA 14 – Prof Jon Sadler – date tbc
  • UOA 15 – Prof Graeme Barker – date tbc
  • UOA 17 – Prof Terry Williams – 17 January 2018
  • UOA 18 – tbc
  • UOA 20/21 – Prof Imogen Taylor – 15 January 2018
  • UOA 23 – Prof Jane Seale – 26 January 2018
  • UOA 24 – tbc
  • UOA 27 – Prof Pat Waugh – 16 January 2018
  • UOA 32 – Prof Stephen Partridge – date tbc
  • UOA 36 – Prof Peter Lunt – date tbc

Bookings for these can be made via the Staff Intranet: https://staffintranet.bournemouth.ac.uk/workingatbu/staffdevelopmentandengagement/fusiondevelopment/fusionprogrammesandevents/rkedevelopmentframework/researchexcellenceframework/

REF2021 – initial decisions finally published

On Friday there was an exciting update from the REF Team based at HEFCE – they published the initial decisions on REF 2021. Whilst this does not include decisions regarding submitting staff, output portability or the eligibility of institutions to participate in the REF, it does include key decisions regarding the UOA structure, institution-level assessment, and the assessment weightings.

The decisions published on Friday are summarised below:

 

OVERALL:

Assessment weightings:

  • Outputs 60% (down from 65%)
  • Impact 25% (up from 20%)
  • Environment 15% (same but now includes impact strategy)

The move of the impact template from the impact assessment to the environment assessment means impact will actually contribute to more than 25% of the weighting (see impact section).

Assessment will continue to use the five-point REF 2014 scale (1*-4* and Unclassified).
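For a unit whose three sub-profiles are summarised as grade point averages on that scale, the weightings combine as a simple weighted sum (a sketch only; the actual exercise combines full quality profiles, not single averages, and the GPA figures here are hypothetical):

```python
def overall_gpa(outputs, impact, environment):
    """Weighted combination of sub-profile GPAs using the REF 2021
    weightings: outputs 60%, impact 25%, environment 15%."""
    return 0.60 * outputs + 0.25 * impact + 0.15 * environment

# e.g. hypothetical sub-profile GPAs of 3.0 (outputs), 3.5 (impact),
# 3.2 (environment):
print(round(overall_gpa(3.0, 3.5, 3.2), 3))  # 3.155
```

The sketch also makes the post's later point concrete: because the environment template now includes impact strategy, improving impact moves both the 25% and part of the 15% term.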

UOA structure:

  • Total UOAs reduced from 36 to 34
  • Engineering will be a single UOA – UOA 12
  • REF 2014 UOA 17 will be restructured to form UOA 14: Geography and Environmental Studies and UOA 15: Archaeology
  • ‘Film and Screen Studies’ will be located in, and included in the name of, UOA 33: Music, Drama, Dance, Performing Arts, Film and Screen Studies
  • HEFCE will continue consulting with the subject communities for forensic science and criminology to consider concerns raised about visibility. A decision is expected this autumn.

HESA cost centres will not be used to allocate staff to UOAs. Responsibility for mapping staff into UOAs will therefore remain with institutions.

 

TIMETABLE:

Impact:

  • Underpinning research must have been produced between 1 Jan 2000 – 31 Dec 2020.
  • Impacts must have occurred between 1 Aug 2013 – 31 Jul 2020.

Environment:

  • Environment data (such as income and doctoral completions) will be considered for the period 1 Aug 2013 – 31 Jul 2020.

Outputs:

  • The assessment period for the publication of outputs will be 1 Jan 2014 – 31 Dec 2020.

The draft REF 2021 guidance will be published in summer/autumn 2018 and the final guidance will be published in winter 2018-19. The submission will be in autumn 2020.

 

OUTPUTS:

Interdisciplinary research:

  • Each sub-panel will have at least one appointed member to oversee and participate in the assessment of interdisciplinary research submitted in that UOA.
  • There will be an interdisciplinary research identifier for outputs in the REF submission system (not mandatory).
  • There will be a discrete section in the environment template for the unit’s structures in support of interdisciplinary research.

Outputs due for publication after the submission date:

A reserve output may be submitted in these cases.

Assessment metrics:

Quantitative metrics may be used to inform output assessment. This will be determined by the sub-panels. Data will be provided by HEFCE.

 

IMPACT:

  • Impact will have a greater weighting in REF 2021 (25% overall plus impact included in the environment template and therefore weighting).
  • Harmonised definitions of academic and wider impact will be developed between HEFCE and the UK Research Councils.
  • Academic impacts will be assessed as part of the ‘significance’ assessment of the outputs and therefore not in the impact assessment.
  • Further guidance will be provided on the criteria for reach and significance and impacts arising from public engagement.
  • The guidance on submitting impacts on teaching will be widened to include impacts within, and beyond, the submitting institution.
  • Impacts remain eligible for submission by the institution in which the associated research was conducted. They must be underpinned by excellent research (at least REF 2*).
  • Impact case study template will have mandatory fields for recording standardised information, such as research funder, etc.
  • The number of case studies required is still to be confirmed – HEFCE is exploring this in relation to the rules on staff submission and the number of outputs.
  • Case studies submitted to REF 2014 can be resubmitted to REF 2021, providing they meet the REF 2021 eligibility requirements.
  • The relationship between the underpinning research and impact will be broadened from individual outputs to include a wider body of work or research activity.

 Institutional-level assessment (impact case studies):

  • HEFCE will pilot this assessment in 2018 but it will not be included in REF 2021.

 

ENVIRONMENT:

The UOA-level environment template will be more structured, including the use of more quantitative data to evidence narrative content:

  • It will include explicit sections on the unit’s approach to:
    • supporting collaboration with organisations beyond HE
    • enabling impact – akin to the impact template in REF 2014
    • supporting equality and diversity
    • structures to support interdisciplinary research
    • open research, including the unit’s open access strategy and where this goes beyond the REF open access policy requirements

Institutional-level assessment (environment):

  • Institution-level information will be included in the UOA-level environment template, assessed by the relevant sub-panel.
  • HEFCE will pilot the standalone assessment of institution-level environment information as part of REF 2021, but this will not form part of the REF 2021 assessment. The outcomes will inform post-REF 2021 assessment exercises.

 

PANEL RECRUITMENT:

  • The sub-panel chair application process is now open (details available via the link).
  • The document sets out the plan for the recruitment of panel members (a multi-stage approach).

 

OUTSTANDING DECISIONS:

The announcement does not include decisions regarding submitting staff, output portability or the eligibility of institutions to participate in the REF. There is ongoing dialogue between HEFCE (on behalf of the funding councils) and the sector on these points. The letter (accessed via the link above) sets out HEFCE’s current thinking and invites the sector to provide feedback by 29 September 2017. BU will be providing feedback, so if you have a view on this then please email me (jnortham@bournemouth.ac.uk).

 

SUMMARIES AVAILABLE:

REF & TEF: the connections – 11th October 2017

The outcomes of this year’s Teaching Excellence Framework (TEF) and the direction for the Research Excellence Framework (REF) as set out in the 2017 consultation response are likely to have significant implications for the higher education sector.  The links between research and teaching are likely to become ever more important, but set against the context of increasing emphasis on student experience, how should the sector respond and where should it focus?

REF & TEF: the connections will be hosted at Bournemouth University and will bring together some of the leading experts in higher education in both research and teaching policy.  During the morning, attendees will have the opportunity to hear from experts from across the higher education sector, as they share their insights into the importance of the links between teaching and research.  The afternoon will feature a number of case studies with speakers from universities with a particularly good record of linking research and teaching.

Speakers confirmed to date include Kim Hackett, REF Manager and Head of Research Assessment, HEFCE; John Vinney, Bournemouth University; William Locke, University College London; and Professor Sally Brown, Higher Education Academy.

For more information or to book on visit: https://reftef.eventbrite.co.uk

I’m an academic at BU. Will I be submitted to REF 2021?

Good question and, although no firm decisions have yet been announced by HEFCE, it is looking increasingly likely that all academic staff at BU will be included in the REF 2021 submission, each with at least one output published between 2014 and 2020.

With the sector waiting with bated breath for the initial decisions from the UK funding bodies on this and other REF questions, HEFCE held a webinar in July. During this webinar they shared some possible decisions with the sector (the webinar and the slides are available here on the HEFCE website). The key suggestions were:

  • 100% of academics with a “significant responsibility” to undertake research are likely to be included. It is unclear at this stage what “significant responsibility” means in practice, although it is anticipated this will be based on there being an expectation for an academic member of staff to undertake research.
  • Staff without a significant responsibility for research may be exempt from inclusion but auditable documentation would be required. This would need to explicitly evidence there is not an expectation of the individual to undertake research (examples given were workload models or career frameworks linked to the individual).
  • Everyone submitted is likely to need a minimum of one output. The average and maximum outputs per FTE are still to be determined – in the consultation it was proposed that these would be an average of two outputs per submitted FTE and a maximum of six outputs per person.
  • There is likely to be a hybrid model for output portability (i.e. which HEI can submit the outputs authored by a member of academic staff who moves from one institution to another during the REF period) – HEFCE proposed two options:
    • Simple model whereby both the old and new institutions can submit the outputs produced by the academic member of staff while he/she was employed at the old institution (this would, some might say unfortunately, result in double counting of outputs, but this can probably be tolerated as it already happens in some cases, for example where co-authors at different HEIs submit the same output).
    • Complex model whereby a census date and employment range date are used to determine which outputs can be submitted by which institution.

Whilst these are not yet firm decisions (they are expected in two communications – one on staff and outputs in the autumn, and one on everything else later this month), they are the clearest indications yet that all academic staff at BU will be included in REF 2021, each with at least one output.

For further information on REF 2021, see BU’s REF 2021 webpage. If you have any queries, please contact Julie Northam or Shelly Anne Stringer.

REF Main Panel Chairs announced

The main panels will provide leadership and guidance to the sub-panels that undertake the REF assessment. As chairs designate, the appointees will at first advise the funding bodies on the initial decisions and on the further development of the framework. They will take up their roles as chairs later in the year*, once the outcomes of the ‘consultation on the second REF’ are announced and further appointments to the REF panels have been made.

The Main Panel Chairs (designate) for each of the four main panel areas are:

Biographies for the Main Panel Chairs are available here: Biographies

*Interesting to note that HEFCE have reaffirmed their previous commitment to announce the outcomes of the consultation later this year, despite rumours this would either be delayed or result in a second technical consultation.