
Tracking Research Impact

Responsible Metrics

"Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research." - responsiblemetrics.org/about/

Metrics can be a useful tool to help track the attention received by research outputs. Citations and online attention are relatively easy to record and measure, and provide a reasonably quick and simple way to compare research.

However, metrics on their own are not sufficient to assess research fairly. Research can have impact in a number of ways, many of which are difficult to measure or quantify.

A controversial or fraudulent paper might attract a large number of citations, many of them critical. Albert Einstein's h-index is much lower than that of many contemporary researchers. Metrics can also reflect bias within the scholarly community; for example, female researchers receive fewer citations on average than male researchers.
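
As a rough illustration of the h-index mentioned above: a researcher has an h-index of h if h of their publications have each received at least h citations. The short Python sketch below uses made-up citation counts (not any particular database) to show how the value is derived.

    # Minimal sketch: compute an h-index from per-paper citation counts.
    # A researcher has h-index h if h of their papers have at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example (hypothetical counts): three papers have at least 3 citations each,
    # so the h-index is 3.
    print(h_index([10, 5, 3, 2, 1]))  # -> 3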

Quantitative measures have a place and a value in research assessment, but it is important to use multiple measures to build a more rounded picture of research impact. Traditional (citation) metrics and alternative metrics each tell only part of the story.

How to choose metrics responsibly

Metrics can seem simple, but it's easy to take them out of context, or use them incorrectly. It's important to know what questions you are asking, so you can select the best metrics (and complementary sources) to help you answer those questions fairly. 

Aim for three types of input: peer review, expert opinion, and information from a quantitative evidence base.

When these complementary approaches ‘triangulate’ to give similar messages, you can have confidence that your decision is robust. Conflicting messages are a useful alert that further investigation is probably required.

In the same way that more than one peer review is usually requested, using multiple metrics helps make any findings as reliable as possible.

Golden rules

  • What question are you trying to answer? Is the metric you are using appropriate? What aspect of research performance do you want to explore? Why? Can this be measured, and if so how? Find out what each metric can tell you, and what it can't. If you're using a metric as a proxy for something that is not directly measurable, as a minimum you should be explicit about this in your analyses, and you should also consider not using it at all.
  • Always use quantitative metric-based input alongside qualitative opinion-based input. Like all statistics, metrics can be misleading without context. Metrics can be a useful tool, but they are no replacement for expert opinion.
  • Get the big picture. Each metrics tool draws its data from different sources and calculates its metrics in different ways. Ensure that the quantitative, metrics-based part of your assessment always relies on at least two metrics to reduce bias. Using only a single measure may also encourage people to change their behavior to game that particular measure.

Be aware of factors that affect metric value

There are six factors, besides performance, that may affect the value of a metric:

  • Size (e.g. size of institution or department)
  • Discipline (e.g. the Sciences have higher citation rates than the Humanities)
  • Publication type (e.g. journal articles generally receive more trackable citations than books)
  • Database coverage (e.g. some databases cover more sources than others, which affects the citations you can track)
  • Manipulation (e.g. some metrics include self-citations; see the sketch after this list)
  • Time (e.g. the stage of a researcher's career; citations take time to accrue)
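
To illustrate the manipulation point above, the Python sketch below uses invented data (not any real database's records) to compare a raw citation count against one that excludes self-citations, i.e. citing papers that share an author with the cited paper.

    # Minimal sketch with hypothetical data: raw citation count vs. a count
    # that excludes self-citations (citing papers sharing an author with the
    # cited paper).
    def citation_counts(cited_authors, citing_author_lists):
        total = len(citing_author_lists)
        non_self = sum(1 for authors in citing_author_lists
                       if not (set(authors) & set(cited_authors)))
        return total, non_self

    # Example: 4 citing papers, 2 of which share an author with the cited paper.
    cited = ["A. Author", "B. Author"]
    citing = [["A. Author"], ["C. Other"], ["B. Author", "D. Other"], ["E. Other"]]
    print(citation_counts(cited, citing))  # -> (4, 2)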

Good Practices

What is DORA?

DORA, the San Francisco Declaration on Research Assessment, sets out recommendations for improving how the outputs of scholarly research are evaluated. The full agreement can be found here.

Two key principles of DORA are:

  • being explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  • for the purposes of research assessment, considering the value and impact of all research outputs (including datasets and software) in addition to research publications, and considering a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

What is The Leiden Manifesto?

Read The Leiden Manifesto here. 

The Leiden Manifesto is a set of practical, action-oriented recommendations for those engaged in the evaluation of research, whether they act as evaluators, are themselves being evaluated, or are responsible for designing and delivering research metrics and indicators.

"The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation." 

- Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. doi:10.1038/520429a