"Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research." - responsiblemetrics.org/about/
Metrics can be a useful tool to help track the attention received by research outputs. Citations and online attention are relatively easy to record and measure, and provide a reasonably quick and simple way to compare research.
However, metrics on their own are not sufficient to assess research fairly. Research can have impact in a number of ways, many of which are difficult to measure or quantify.
A controversial or fraudulent paper might attract a large number of citations, many of them negative. Albert Einstein's h-index is much lower than that of many contemporary researchers. Metrics can also reflect bias within the scholarly community: for example, female researchers receive fewer citations on average than male researchers.
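To see why a single figure like the h-index can understate a career such as Einstein's, it helps to look at how it is computed: an author has index h if h of their papers have at least h citations each. A minimal sketch (the citation counts in the examples are illustrative, not real data):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    # Sort citation counts from highest to lowest, then find the last
    # position where the count still meets or exceeds its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A modest but even record yields h = 4:
print(h_index([10, 8, 5, 4, 3]))       # → 4

# Three enormously cited papers still cap the h-index at 3,
# which is how a transformative author can score "low":
print(h_index([10000, 9000, 8000]))    # → 3
```

The second example illustrates the point made above: because the h-index rewards breadth of moderately cited output rather than depth of impact, a small number of field-defining papers produces a deceptively low score.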
Quantitative measures have a place and a value in research assessment, but it is important to use a range of measures to build a more rounded picture of research impact. Traditional (citation) metrics and alternative metrics each tell only part of the story.
Metrics can seem simple, but they are easy to take out of context or to use incorrectly. It is important to know what questions you are asking, so that you can select the best metrics (and complementary sources) to answer those questions fairly.
Use three types of input: peer review, expert opinion, and information from a quantitative evidence base.
When these complementary approaches ‘triangulate’ to give similar messages, you can have confidence that your decision is robust. Conflicting messages are a useful alert that further investigation is probably required.
In the same way that more than one peer review is usually requested, using multiple metrics helps ensure that any findings are as reliable as possible.
Golden rules
Be aware of factors that affect metric value
There are six factors, besides performance, that may affect the value of a metric:
What is DORA?
The full agreement can be found here.
Two key principles of DORA include:
What is The Leiden Manifesto?
Read The Leiden Manifesto here.
The Leiden Manifesto is a set of practical, action-oriented recommendations for those engaged in the evaluation of research, whether as evaluators, as those being evaluated, or as those responsible for designing and delivering research metrics and indicators.
"The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation."
- Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. doi:10.1038/520429a