
REF update: HEFCE’s REFlections event, 25 March 2015

I went to HEFCE’s (rather cleverly named) REFlections event on Wednesday to hear about the review of REF 2014 and plans for the future of research assessment.

The key points were:

  • Collaboration and multi-/interdisciplinary research are likely to be important for the next REF
  • HEFCE have commissioned Elsevier to undertake a project on measuring multidisciplinary research to inform the next REF
  • The REF impact case studies database went live yesterday and is an excellent resource
  • Dual support system is likely to stay
  • Impact case studies are likely to stay, though the impact template may change or become obsolete
  • Peer review will stay, informed by metrics in some disciplines (akin to REF 2014)
  • Metrics are not yet robust enough to support a metrics-driven REF; in particular, this is not yet possible for the assessment of outputs or impact. It is possible, however, to rely more heavily on metrics for the environment assessment, and this part could change for the next REF.

 

HEFCE plan to consult with the sector on future plans for the REF this coming autumn.

 

Further information:

Should metrics be used more widely in the next REF?

Back in 2008, once the dust had settled from the RAE 2008 submission, HEFCE initiated a series of exercises to investigate whether bibliometric indicators of research quality (such as citation counts) could be used as part of the assessment for REF 2014. BU was one of 22 institutions that took part in the bibliometrics pilot. HEFCE concluded from the pilot that citation information was not sufficiently robust to be used formulaically or as a primary indicator of quality, but that there might be scope for it to inform and enhance processes of expert review in some disciplines. The REF 2014 guidelines stated that citation data would be provided for outputs submitted to all sub-panels in Main Panel A and some sub-panels in Main Panel B.

In April 2014, the Minister for Universities and Science asked HEFCE to undertake a fresh review of the role of metrics in determining quality, impact and other key characteristics of research undertaken in the HE sector. The review is being chaired by Professor James Wilsdon, Professor of Science and Democracy at the Science Policy Research Unit (SPRU), University of Sussex.

HEFCE have launched a sector-wide call for evidence about research metrics and BU will be making an institutional response. BU colleagues are therefore invited to send feedback to me so that it can be considered as part of BU’s response. Colleagues are also invited to send individual responses to HEFCE.

Thinking back to 2008-09, I remember research metrics being an emotive subject: many researchers, both at BU and across the sector, were extremely sceptical of their use in research assessment. Although bibliometrics have moved on a long way since then, I think there will still be concern as to whether metrics are robust enough to be used formulaically, particularly in the arts, humanities and social sciences.

HEFCE have asked that responses focus on the following issues:

1. Identifying useful metrics for research assessment.

2. How metrics should be used in research assessment.

3. ‘Gaming’ and the strategic use of metrics.

4. The international perspective.

Further information about the call for evidence is available here: http://www.hefce.ac.uk/whatwedo/rsrch/howfundr/metrics/

It is anticipated that the outcome of the review will inform the framework for the next REF assessment so it is vitally important that HEFCE receive a high quality and quantity of feedback from all disciplines.

If you would like to contribute to the BU institutional response, please add your comments to this response form and email it to me (jnortham@bournemouth.ac.uk) by Friday 30th May.

ResearcherID – an online tool for building a publications profile

What is ResearcherID?

ResearcherID is a facility provided via the Thomson Reuters ISI Web of Knowledge database. You can register for a unique researcher identification number which enables you to link directly to your published outputs and avoid the common problem of author misidentification. Users can update their profile information, build their publication list by using the Thomson Reuters Web of Science search facilities or by uploading files, and opt to make their profile public or private. Registered as well as non-registered users can search the ResearcherID registry to view profiles and find potential collaborators. You can also now search ResearcherID from within the Web of Science.

A ResearcherID number is a unique identifier that consists of alphanumeric characters. Each number contains the year you registered.

The benefits of registering for ResearcherID

The main benefit is the ability to build your own custom profile and correctly identify you as the author of your publications. Once registered, you will be provided with the tools to build your publication list by searching ISI Web of Knowledge and Web of Science, or by uploading your own list. You can choose whether or not to make your profile public.

ResearcherID is also integrated with EndNote Web, so you can build your online publication list either by searching Web of Science or uploading RIS files, or by using the EndNote Web online search. You can also manage your publication list via EndNote Web. For a tutorial on using EndNote Web, click here.

Once you’ve put together your publications list, you can then generate the following citation metrics for items available in the Web of Science:

  • H-index
  • Citation distribution per year
  • Total Times Cited count
  • Average Times Cited count

These metrics are updated automatically in ResearcherID as new data is added to the Web of Science.

ResearcherID can also help you find potential research collaborators by enabling you to search the ResearcherID registry. You can also explore an interactive map that can help you locate researchers by country and topic, or you can use the new country ‘tag cloud’.

Registering for ResearcherID

Go to the ResearcherID homepage, click on ‘Register’ and complete the short online form. You will then be sent an email containing a link back to ResearcherID, where a more detailed form appears for you to complete. Once you have completed this and accepted the terms and conditions, you will receive your unique ResearcherID; these details will also be sent to you by email.

When you log in using your ResearcherID and password, you will be taken to your own publications web page with a unique URL. You can include this link on your email signatures so that others can easily access your publications.

Managing your publications

Under ‘MyPublications’ you will see an option to ‘Add’. Alternatively, there is an ‘Add Publications’ button on the right-hand side of the screen that takes you to the same place. You will then be given three options for searching for publications or adding a file:

  • ISI Web of Science
  • EndNote Web
  • RIS text file

Clicking on a publications database will bring up a search screen. Ensure that your surname and initials are entered correctly and click search. Any publications already included on your ResearcherID web page will appear in the list and will already be ticked. To add others, tick the relevant box and click ‘Add selections to MyPublications’. The process works the same way for all three options, although note that you need an EndNote Web account to use the EndNote Web option.

To delete publications, simply click on the ‘Manage’ button under MyPublications.

For a ResearcherID factsheet, click here.

Bibliometrics need not be baffling!

What are bibliometrics?

Bibliometrics are a set of methods used to study or measure text and information. Citation analysis and content analysis are the most commonly used bibliometric methods. Bibliometric methods can help you explore the academic impact of a field, a set of researchers or a particular journal paper.

What is citation analysis?

Citation analysis looks at where a document has been referenced by others since it was originally published – this information can be used when searching for materials and in analysing their merit. Undertaking citation analysis on yourself is useful for assessing your own research performance. Specialist databases such as Web of Science and Scopus provide various tools for doing this analysis.

Searching for citation information on the Web of Science℠

Web of Science℠ is hosted by Thomson Reuters and consists of various databases containing information gathered from thousands of scholarly journals, books, book series, reports, conference proceedings, and more:

  • Science Citation Index Expanded (SCI-Expanded)
  • Social Sciences Citation Index (SSCI)
  • Arts & Humanities Citation Index (A&HCI)
  • Index Chemicus (IC)
  • Current Chemical Reactions (CCR-Expanded)
  • Book Citations Index – coming soon!

These databases enable you to perform a variety of tasks, such as searching the published literature, undertaking citation analysis, tracking new research within a particular field, and identifying chemical compounds and reactions. Data is available from around 1990, and even earlier in some cases.

By producing a Web of Science℠ Citation Report for yourself (or for others), you can find out who is citing your work and how it is being used in other people’s publications so that you can get a feel for the overall citation activity around your outputs. Search for an author’s name and then click on ‘Create Citation Report’ from the results page.

Producing this report will give you information such as the number of items published in each year, the number of citations to those items for each year, the average number of citations per item, and your h-index based on this information. Click here for web tutorials on how to use the Web of Science℠.

Searching for citation information on Scopus

Scopus, part of Elsevier’s SciVerse facility, was launched in November 2004 and is an abstract and citation database containing around 19,500 titles from more than 5,000 publishers. Scopus enables researchers to track, analyse and visualise research, and has broad coverage of the scientific, technical, medical and social sciences fields and, more recently, the arts and humanities. Coverage is largely from 1996 onwards, although some records go back further. For more information about Scopus, click here.

By searching for yourself (or others) on the Scopus database using the author search facility, you can use the ‘View Citation Overview’ function to get a feel for the citations activity around your outputs. The information is presented and can be analysed in a number of ways, including pie charts, graphs and tables, and shows the breakdown of citation activity over a number of years and your h-index based on this data. Various tutorials on using Scopus can be accessed here.

Scopus and the Research Excellence Framework (REF): HEFCE has announced that Elsevier have been chosen as the provider of citation data services to the REF sub-panels that have chosen to make use of citation information as part of the assessment process. Using the Scopus database, HEFCE will provide the relevant sub-panels with raw citation data (i.e. not normalised) accompanied by contextual information, which will assist those panel members in making decisions about the outputs part of the REF submissions.

What is the h-index?

The h-index was conceived by Professor Jorge Hirsch in 2005 within the field of physics and is fast becoming one of the most widely used metrics for research evaluation, increasingly serving as a measure of research activity and academic prominence across subject areas.

The benefit of the h-index over other citation measures is that it is not skewed by a few highly cited papers and it ignores papers that remain uncited. It is calculated as the largest number h such that an author has h papers each cited at least h times. An h-index of 15 therefore means that a person has at least 15 papers that have each been cited 15 times or more. Fortunately, the Web of Science and Scopus both calculate the h-index automatically as part of their citation analysis functions, so there is no need to work it out manually.
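For anyone curious about the mechanics, the calculation behind what the databases do automatically can be sketched in a few lines of Python (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # While the rank-th most-cited paper still has >= rank citations,
        # the h-index can be at least that rank.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for six papers:
papers = [25, 18, 15, 15, 3, 0]
print(h_index(papers))  # 4: four papers have at least 4 citations each
```

Note how the two papers with 25 and 18 citations count no more towards the result than the two with 15, which is exactly why the h-index is not inflated by a handful of highly cited outputs.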

If you’d like to know more about the h-index, the original research document can be accessed from the Cornell University Library webpage.

What are journal impact factors?

Journal Impact Factors are published annually on the Web of Knowledge and provide a way of ranking journals based on the citation performance of articles published by those journals from the previous two years. For more information about how impact factors are calculated and how they can be used, see my previous blog post.

Other methods of ranking journals also exist, such as the ABS Academic Journal Quality Guide and the ERA journal ranking list. Journal rankings can be useful when choosing which journal to publish in, for example.

Latest journal impact factors

Following the release of the latest Journal Citation Reports® on the Thomson Reuters ISI Web of Science database, we have compiled a list of the top ranking journals in various fields related to BU research. BU staff can access these lists by going to the designated folder on the collaborative I-drive: I:\R&KEO\Public\RDU\Journal Impact Factors 2011. If there are any additional subject areas that you would like to see included, do send me an email.

Related blog posts that may be of interest:
Journal impact factors explained

Google Scholar’s new citations tracking tool

As the demand for information about the performance of research outputs increases, Google Scholar’s latest online citations tracking tool could prove to be a useful resource. Google Scholar Citations is currently being launched with a small number of users before widespread dissemination, but the early signs are that it could provide a simple way for academics to calculate key metrics and monitor them over time.

According to Google Scholar, the tool will ‘use a statistical model based on author names, bibliographic data, and article content to group articles likely written by the same author’. You can then identify your articles using these groups and Google Scholar will then automatically produce the citation metrics. Three metrics will be available: the h-index, the i-10 index (which is the number of articles with at least ten citations), and the total number of citations to your articles. Each of these metrics will be available for all outputs as well as for articles published in the last five years, and will be automatically updated as new citations are available on the web.
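Of the three metrics, the i10-index and the total citation count are simple enough to sketch directly; as a rough illustration (again with invented citation counts):

```python
def i10_index(citations):
    """Google Scholar's i10-index: the number of articles with at least ten citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts for six articles:
citations = [42, 12, 10, 9, 3, 0]
print(i10_index(citations))  # 3 articles have ten or more citations
print(sum(citations))        # 76 citations in total
```

Google Scholar computes these figures for you from its grouped article data; the sketch above simply shows what the numbers mean.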

You will be able to update your ‘profile’ of outputs by adding missing items, merging duplicates or correcting bibliographic errors. You’ll also have the choice to make your profile public, the idea being that you’ll make it easier for colleagues globally to follow your research because your outputs will appear in Google Scholar search results.

To get a glimpse of Google Scholar Citations in action, there are some sample profiles available via the Google Scholar blog. You can also register your email address so that you will be notified when the tool is finally available to everyone.

Journal Impact Factors Explained

There is often some confusion around Journal Impact Factors in terms of where they come from, how they’re calculated and what they mean. Hopefully the following will provide a brief explanation.


What are Journal Impact Factors?
Journal Impact Factors are just one of a number of journal analytical measures that form part of an online resource provided by Thomson Reuters on their Web of Knowledge called Journal Citation Reports® (JCR), which covers journals in the sciences, technology and social sciences. JCR provides a facility for the evaluation and comparison of journals across fields within the subject areas covered.

Other publications databases may provide their own tools for bibliometric or citation analysis (such as Elsevier’s Scopus) but Journal Impact Factors are only found on the Web of Knowledge.

A Journal Impact Factor is the average number of times that articles from a particular journal published in the past two years have been cited in the JCR year.

How are Journal Impact Factors calculated?
Journal Impact Factors are calculated by dividing the number of citations received in the JCR year by articles that a journal published in the two previous years by the total number of citable articles the journal published in those two years. For example, an Impact Factor of 2.5 means that, on average, articles published in that journal in the previous two years were each cited two and a half times in the JCR year. Citing articles may be from the same journal, although most are from different journals.
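As a worked example of the arithmetic, using invented figures for a hypothetical journal:

```python
# Hypothetical journal, Impact Factor for JCR year 2011:
citations_in_2011 = 450      # citations received in 2011 by articles published in 2009-10
citable_items_2009_10 = 180  # research and review articles published in 2009-10

impact_factor = citations_in_2011 / citable_items_2009_10
print(round(impact_factor, 2))  # 2.5
```

The same two numbers are what JCR reports alongside each journal's Impact Factor, so the published figure can always be checked by hand.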

The article counts given for journals listed in JCR primarily include original research and review articles; editorials, letters, news items and meeting abstracts are usually excluded because they are not generally cited. Journals published in non-English languages or using non-Roman alphabets may be less accessible to researchers worldwide, which can influence their citation patterns.

How are Journal Impact Factors used?
Journal Impact Factors can help in understanding how many citations journals have received over a particular period – it is possible to see trends over time and across subject areas, and they may help when you’re deciding where to publish an academic paper. However, as with all statistics, Journal Impact Factors should be used with caution and should ideally be combined with other metrics depending on how they’re being applied.

Equally, a journal’s Impact Factor is not necessarily a direct indicator of the quality of an individual paper published in that journal. Some published articles never receive any citations, for various reasons, even if they appear in a journal with a high Impact Factor.

Journal Impact Factors and the REF
Some of the assessment panels will be provided with citation metrics as part of HEFCE’s Research Excellence Framework (REF) in some subject areas, which will help inform the panel members’ judgements. However, journal impact factors or equivalent journal ranking systems (e.g. the ABS list) will NOT be used at all within the assessment process.