The h-index (and What It Can't Tell You)
Published on May 06, 2026
Written by Joanne Paterson and Kristin Hoffmann

What researchers should know before putting stock in this metric
If you've applied for a faculty position, a grant, or a research award lately, you've probably encountered the h-index. It's become one of the most commonly used ways to summarize a researcher's output: a single number that reflects both how much you've published and how often that work has been cited.
The concept is straightforward. An h-index of 6 means you have six papers that have each been cited at least six times, and six is the largest number for which that's true. It's easy to calculate and readily available through Google Scholar, Scopus, and Web of Science, which helps explain its appeal. Rather than hinging on a single high-performing paper, it gives a general picture of consistent output over time.
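To make the definition concrete, here's a minimal sketch of the calculation in Python. The function name and the sample citation counts are our own illustration, not drawn from any database:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break  # counts only decrease from here, so h can't grow further
    return h

# Example (invented counts): six of these eight papers have at least
# six citations, but no seven papers have seven, so the h-index is 6.
print(h_index([10, 8, 7, 6, 6, 6, 3, 1]))  # -> 6
```

In practice, Google Scholar, Scopus, and Web of Science each feed different citation counts into this same calculation, which is part of why the resulting numbers can differ from one platform to another.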
But that simplicity can also be misleading.
The origin of the h-index
The h-index was proposed in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a way to assess theoretical physicists' relative standing. He didn't expect it to travel far. Today it's one of the most widely used metrics in academia, applied across disciplines, career stages, and evaluation contexts its creator never anticipated.
Career stage matters—and the h-index ignores it
Because the h-index accumulates over time, senior researchers have a structural advantage. More years of publishing means more publications and more opportunities for citations to build up. For early-career researchers, a lower h-index is often simply a reflection of time in the field, not the quality or influence of their work. Using the h-index to compare researchers at different career stages isn't just imprecise. It's misleading.
Discipline shapes the number
Citation and publication norms vary enormously across fields. In some disciplines, papers routinely have dozens of co-authors, and reference lists run long; in others, single-authored monographs are the norm, and citation counts are naturally lower.
Consider engineering and the humanities. In engineering, findings move quickly through large collaborative networks, accumulating citations at a pace that reflects the field's structure as much as any individual paper's merit. A humanities scholar, by contrast, may spend a decade producing a single monograph read closely by a small but expert community. A literary scholar with sixty publications and an h-index of 5, and an engineering professor with an h-index of 45, may both be exceptional researchers in their fields. Comparing them using the same number risks drawing conclusions that the data simply can't support.
Citations aren't the same as quality
This is worth saying plainly: a high citation count doesn't mean a paper is good, and a low one doesn't mean it isn't.
Citations are shaped by more than quality alone: the size of a research community, how fashionable a topic is, or whether an author is embedded in large collaborative networks. A paper that challenges consensus in a small field may take years to gain traction. A paper on a hot topic with dozens of co-authors may accumulate citations quickly for reasons that have little to do with its contribution to the field.
And citation counts are blind to entire categories of scholarly value. They don't capture whether a researcher's work is original or rigorous. They say nothing about mentorship — the graduate students trained, the careers shaped, or the intellectual culture built over decades. They miss interdisciplinary contributions that don't fit neatly into any one field's citation ecosystem. And they have no way of registering societal or cultural impact: the report that informed public policy, the archive that preserved a community's history, the research that changed how a practitioner does their job.
These aren't peripheral contributions. For many researchers, they're central to their work. The h-index simply cannot see them.
Overreliance on metrics can change research behaviour
There's also a behavioural dimension worth flagging. When metrics carry significant weight in hiring and evaluation, they can quietly shape what researchers choose to work on. Chasing citable topics or prioritizing volume over depth is a rational response to an irrational system, but it's not good for scholarship.
What responsible use looks like
The h-index isn't useless, but it's a starting point, not a conclusion. It's also worth knowing that the same researcher's h-index can vary depending on which database you use, since each platform indexes different sources. If you're citing an h-index, note when and where it came from.
In Canada, major federal granting agencies don't formally require the h-index in grant applications, and CIHR has explicitly stated that assessments for funding, hiring, tenure, and promotion should be based on scientific content rather than publication metrics. But policy and practice aren't the same thing. A 2023 study on CIHR peer review found that gender bias in application ratings emerged specifically when an applicant's h-index was at the lower end of the distribution — suggesting the metric influences reviewer judgment even when it isn't a formal criterion. The gap between what institutions say about metrics and how they function in evaluation rooms is worth keeping in mind.
The fuller picture of a researcher's contributions requires context, expert judgment, and qualitative evidence. Metrics can be one input into that picture, but they shouldn't be the whole picture.
The video below, produced by ScholCommLab, walks through these issues using concrete examples. It's a useful companion to this piece.
