
Thing 6 - Bibliometrics and Impact

Updated: May 4, 2023

Dr Alex Pavey, Doctoral Student Development Manager, Centre for Doctoral Studies, King's College London & Shiobhan Smith, Associate University Librarian Customer Experience, University of Otago


Part 1: Introduction and Context

Bibliometric methods are widely used to assess research through quantitative measures. Bibliometrics is one way to demonstrate the influence and impact of research using measures such as citation counts and the h-index.


Author metrics

Author-level metrics attempt to quantify the productivity and impact of individual researchers by aggregating data related to their publications. They can be useful for giving you an indication of the overall academic impact you have made with your publications as your career develops. The most straightforward author-level metric is the citation count – the total number of citations received by all publications attributed to you in a given database.


The h-index is probably the best-known author metric within academia. It provides an indication of an author’s impact based on the citation rates of their publications: an author has an h-index of h if h of their publications have each received at least h citations. The higher the h-index, the greater the academic impact achieved. The most important thing to remember about the h-index is that it is not normalised to account for differences in publication practices between disciplines. A reasonably successful, but not exceptional, early-career researcher working in a field where high volumes of journal publications are the norm would likely have a significantly higher h-index than a research ‘superstar’ in parts of the arts or humanities. As a result, the h-index – and any other non-normalised metric – should never be used to make comparisons between researchers or outputs in different disciplines.
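To make that definition concrete, here is a minimal sketch in Python of how an h-index is computed from a list of per-publication citation counts (the citation numbers below are made up for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications with h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A researcher in a low-volume field: one well-cited paper, few others.
print(h_index([45, 12, 8, 3]))                          # h-index of 3
# A researcher in a high-volume field: many moderately cited papers.
print(h_index([30, 25, 22, 18, 15, 11, 9, 7, 6, 4]))    # h-index of 7
```

Notice that the first author’s 45-citation paper does not raise their h-index – a reminder that the metric rewards sustained volume, not a single influential output.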


An image of a water drop hitting a pool of water
We all hope to make a big splash with our research (Photo by Robert Anderson on Unsplash)

Journal metrics

Journal-level metrics are best used to assess the impact of a specific journal, or to facilitate comparison between journals. The most well-known of these are the Journal Impact Factor (JIF), Scimago Journal Rank (SJR) and Citescore. These types of metrics can be useful for informing your choice about where to submit a paper – but journal metrics shouldn’t be used as a proxy for judging the quality of research in individual articles.


Article metrics

Article metrics can help you better understand the reach and impact of your research: from the number of downloads your paper received in the first month it was available online and the types of discussions it prompted on Twitter, to the citations it accrued in the following two years and where your international academic audience is located.


Altmetrics

Traditional measures of research outputs focus on citations and don’t provide the full picture of how research is distributed and discussed. New ways of measuring attention and engagement with research have been developed. These include social media discussion, blog posts and comments, online news media coverage, and Wikipedia articles, as well as online page views of articles and downloads of full-text PDFs. Several providers, including ImpactStory, Plum Analytics and altmetric.com, aggregate altmetrics from these diverse sources to provide a richer view of research impact.


As a data source, altmetrics can be used to assess author and journal impact, but they are most closely associated with article-level metrics, since it is at the article level that altmetrics and traditional measures of citation impact can be combined most powerfully.


Ethical Metrics

The importance of using metrics responsibly is now widely recognised. Responsible metrics should be based on a number of principles that recognise the diverse qualities and impacts of research. 


The two most well-known attempts to formalise best practice in the use of research metrics are the San Francisco Declaration on Research Assessment, or DORA, and the Leiden Manifesto for Research Metrics (Nature, 23 April 2015).


Task

Watch the following three-minute video by the Office of Scholarly Communication, Cambridge. It summarises the main principles of responsible metrics in relation to measuring research impact.


Part 2: Key platforms/tools


Key Platforms and Software

Bibliometrics relies upon indexes of citations, drawn from academic journals and other publications, and collected in citation databases. Two major databases index academic abstracts and citations: Web of Science and Scopus (both require institutional logins). There are also newer platforms that can be used as data sources: Google Scholar and Dimensions. Each of these data worlds is made up of a different set of publications, with different standards of indexing and categorisation. This means that the bibliometric results derived from one world will not be the same as results derived from the others – something that is always worth bearing in mind. If you find yourself making comparisons, make sure you’re comparing like with like!


Each platform has different strengths and weaknesses in terms of their data, interfaces and capabilities. If you are interested in exploring bibliometrics further, these differences are worth considering. We have summarised some key differences in the table below. You can download the table as a PDF file.

The first part of the table comparing metrics platforms and software. The PDF version of the table can be downloaded from the blog post.

The second part of the table comparing metrics platforms and software. The PDF version of the table can be downloaded from the blog post.

Part 3: What can you do?

In the past, bibliometric methods were commonly used to assist libraries with collection management. Today, they are predominantly used in quantitative research assessment exercises, in which the impact of a research field, a researcher or group of researchers, a journal article, or a journal is explored and sometimes compared.


Here are some ways you can immediately incorporate bibliometric methods and data to help your research.


1. Discover new publications for your literature or systematic review

Bibliometric data can help you discover new literature by connecting a publication to all the publications that later cite it. Bibliometric tools often also analyse reference lists to determine how closely related papers are. Many researchers follow these lists to discover new literature.


Task

Return to Google Scholar and search again for one of the publications from the previous task or a paper you already have used in your literature/systematic review.

  • Click on the “Cited by…” link for the publication and explore the list of publications that have cited the publication you searched.

  • Is there a way to complete further searching within just the citing articles?


2. Analyse the publications in your literature or systematic review

Bibliometric data can be used to identify seminal texts, prominent authors, important keywords, and key journals. A simple example is noticing an older publication with an interesting idea and very few citations. Could that indicate a possible research path yet to be explored? There are several tools designed to map, analyse, and visualise networks and clusters, based on bibliometric data. Some prominent examples include VosViewer, CitNetExplorer, CiteSpace, Inciteful, and Connected Papers. A visual overview of an academic field helps you locate important papers and quickly see connections between authors, journals, institutions, and ideas.
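As a rough illustration of the kind of relatedness measure these mapping tools build on, here is a minimal sketch in Python of bibliographic coupling – counting the cited references two papers share (the paper identifiers and reference lists below are hypothetical):

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling: the number of cited references two
    papers have in common. Papers that cite much of the same
    literature are likely to address related topics."""
    return len(set(refs_a) & set(refs_b))

# Hypothetical reference lists for three papers.
paper_a = ["smith2010", "jones2015", "lee2018", "chen2020"]
paper_b = ["jones2015", "lee2018", "patel2019"]
paper_c = ["garcia2012"]

print(coupling_strength(paper_a, paper_b))  # 2 shared references
print(coupling_strength(paper_a, paper_c))  # 0 shared references
```

Tools like VosViewer compute measures of this kind across thousands of papers at once and cluster the results into the visual maps described above; this sketch only shows the basic pairwise idea.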


Task

Explore Connected Papers:

  • Search for a paper you already use or select one of its example graphs.

  • Identify Prior works and Derivative works.

  • Can you quickly build new graphs from any of the Prior or Derivative works?

A man with sunglasses and a striped jacket, holding a magnifying glass
You might need to scrutinise some sources more than others (Photo by Marten Newhall on Unsplash)

3. Evaluation of publications

As well as helping researchers identify new literature, bibliometric data can also support the evaluation of literature. For example, researchers can consider the number of citations a publication has received, the venue in which it was published, and its connections with other important papers to determine whether the publication has been impactful, at least in a scholarly sense. Tools such as Semantic Scholar and Scite.ai use machine learning to contextualise citation data and support this kind of evaluation.


Task

Explore Scite.ai:

  • Search for a paper you already use or select the example report.

  • Does the paper have any supporting citations?

  • Does the paper have any disputing citations?

  • What happens if you select “Visualise Report”?


4. Evaluation of journals

Researchers can maximise the impact of their research by taking a strategic approach to where and how they publish their findings. Traditionally, academic journals were only available through subscriptions paid by an institution’s library. In recent years, open access and hybrid models, which usually require Article Processing Charges (APCs), have become available. Open access publishing makes research outputs freely available online to read, download, and use, without the licensing restrictions usually placed on published works. There are bibliographic tools to help you identify and evaluate potential publication venues, whether they follow a subscription, open access, or hybrid publishing model.


Task

Watch “Publishing and journal rankings” from the University of Technology Sydney Library that covers the basics of journal publishing metrics including Journal Citation Reports, SciMago, and Google Scholar. 


Comparisons and Limitations

Citation practices vary from field to field, older papers have had more time to attract citations than newer papers, and some document types, for example, reviews, are cited a lot more than other types of academic work. For comparison purposes, even if you have 50 citations and a colleague has only 30, you don't necessarily have the higher citation impact. The only way to compare fairly is to take into account differences in subject area, publication year, and document type.
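One common way to take those differences into account is to divide a paper’s citation count by the average for comparable papers – same field, same publication year, same document type. Here is a minimal sketch in Python of that idea, using illustrative, made-up field averages; a score of 1.0 means a paper is cited exactly as often as the average comparable paper:

```python
def normalised_impact(citations, reference_average):
    """Citations relative to the average for papers of the same field,
    publication year, and document type. A score above 1.0 means
    'cited more often than the average comparable paper'."""
    return citations / reference_average

# 50 citations in a field averaging 60 citations per comparable paper...
print(round(normalised_impact(50, 60), 2))  # 0.83: below average
# ...versus 30 citations in a field averaging 12.
print(normalised_impact(30, 12))            # 2.5: well above average
```

On this normalised view, the colleague with 30 citations has the higher citation impact, despite the smaller raw count.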


Even some of the more rigorous, normalised author-level metrics can fail to do justice to the potential impact of postgraduate and early-career researchers. This is because most of these metrics are based upon volume – the number of publications that an individual has authored, and the number of citations they have received. They are fundamentally most reliable and informative when these numbers are higher. Established researchers may have several productive decades behind them, and dozens or hundreds of publications that have accrued citations throughout that time. Early-career researchers, by definition, will not.


Given the above factors, it is very important for early-career researchers to be aware that there are limits to how much they should be concerned about their individual citation counts and metrics. Recognise the role that metrics play within institutions, but do not use them as an excuse to be self-critical or to contrast your achievements negatively with your peers. Treat them as a tool and think about how you can make them work for you. But first and foremost, as always, undertake innovative research that you are passionate about.


Task

Leaving the best for last, explore the Metrics Toolkit. This resource is designed to provide: “evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion dossiers”. Bookmark this resource so you can find it later.


Discussion for your pod

This week you have had a lot to look at in the world of publishing and impact. What did you think? Are you published? Are those publications accruing impact? How do you feel about the new altmetrics - are they a fad or here to stay? How do you feel about impact in research as a general concept?


Further Reading

Pavey, A., ‘How can you demonstrate the impact of your publications?’ in Duke, D., Denicolo, P. and Henslee, E. (eds), Publishing for Impact, London: SAGE Publications (2020).


Acknowledgement

Some of the wording and tasks in this post have been taken from the IATUL Research Impact Things, a self-paced training resource for librarians (CC-BY-SA).



Two happy-looking dogs. Photo from Unsplash
The H in H-index stands for Hound, right?



