Citation impact

[14] Of the more than 58 million items in Thomson Reuters' Web of Science database, only 14,499 papers (~0.026%) had more than 1,000 citations as of 2014.

[16] Each measure has advantages and disadvantages,[17] ranging from bias and discipline dependence to limitations of the citation data source.

In a study based on the Web of Science database across 118 scientific disciplines, the top 1% most-cited authors accounted for 21% of all citations.
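This degree of concentration is easy to illustrate. The sketch below uses synthetic author citation counts (not the study's data) drawn from a heavy-tailed lognormal, a common assumption for citation data, and computes the share of all citations held by the top 1% most-cited authors:

```python
import numpy as np

# Illustrative sketch with synthetic data, not the study's dataset:
# what share of all citations the top 1% most-cited authors capture.
rng = np.random.default_rng(0)

# Assumption: author citation counts are heavy-tailed (lognormal).
citations = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)

sorted_counts = np.sort(citations)[::-1]            # most-cited first
top_1_percent = sorted_counts[: len(sorted_counts) // 100]
share = top_1_percent.sum() / sorted_counts.sum()

print(f"Top 1% of authors hold {share:.0%} of all citations")
```

With these (assumed) parameters the top 1% share comes out near one fifth, in the same range as the 21% reported for Web of Science.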

For instance, most papers in Nature (impact factor 38.1, 2016) were cited only 10 to 20 times during the reference year (see figure).

[24] Citation counts mostly follow a lognormal distribution, except for the long tail, which is better fit by a power law.
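The mixture described above can be sketched with synthetic data: a lognormal fitted to the bulk of the counts badly underestimates how often very highly cited papers occur when the extreme tail is actually power-law (Pareto). All parameters here are illustrative assumptions, not fitted values from the literature.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
# Bulk of citation counts: lognormal (assumed parameters).
body = rng.lognormal(mean=2.0, sigma=1.0, size=int(n * 0.99))
# Extreme tail: Pareto, heavier than any lognormal far out.
tail = (rng.pareto(a=1.5, size=n - len(body)) + 1) * 200
counts = np.concatenate([body, tail])

# Fit a lognormal to the log-counts (method of moments).
log_c = np.log(counts)
mu, sigma = log_c.mean(), log_c.std()

# Compare tail frequencies beyond ~4 sigma in log space.
threshold = math.exp(mu + 4 * sigma)
empirical = (counts > threshold).mean()
lognormal_pred = 0.5 * math.erfc(4 / math.sqrt(2))   # P(Z > 4)

print(f"empirical tail frequency: {empirical:.2e}")
print(f"lognormal prediction:     {lognormal_pred:.2e}")
```

The empirical tail frequency exceeds the fitted lognormal's prediction by orders of magnitude, which is exactly the mismatch that motivates describing the tail with a power law instead.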

The advent of citation impact metrics around the 1960s created incentives, and institutional pressure, for scientists to publish in journals known for highly cited papers. This in turn increased subscription demand and prices for those journals.

[26] Technology historian Edward Tenner points out that a paper which makes an incorrect claim concerning a fundamental topic can attract a large number of citations for the purpose of debunking it; citation impact is thus not a good measure of quality or accuracy.

[26] An alternative approach to measuring a scholar's impact relies on usage data, such as the number of downloads from publishers, and on analyzing citation performance, often at the article level.

[27][28][29][30] As early as 2004, the BMJ published the number of views for its articles, which was found to correlate somewhat with citations.

These "tweetations" proved to be a good indicator of highly cited articles, leading the author to propose a "Twimpact factor", which is the number of Tweets it receives in the first seven days of publication, as well as a Twindex, which is the rank percentile of an article's Twimpact factor.

[32] In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science proposed citation distribution metrics as an alternative to impact factors.
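The motivation for reporting a distribution rather than a single average can be shown with a toy example (the counts below are invented): a handful of blockbuster papers can pull an impact-factor-style mean far above what a typical paper in the journal receives.

```python
import statistics

# Invented citation counts for one journal's papers in a reference year.
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 120, 800]

mean = statistics.fmean(citations)      # impact-factor-style average
median = statistics.median(citations)   # what the typical paper receives

print(f"mean   = {mean:.1f}")   # dominated by the two highly cited papers
print(f"median = {median}")
```

This mirrors the Nature example above: an impact factor of 38.1 alongside typical papers cited only 10 to 20 times. A full citation distribution exposes that gap; the mean alone hides it.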

[42] Scholars have also been found to engage in ethically questionable behavior in order to inflate the number of citations their articles receive.

[45][46] The latter model is even used as a predictive tool for estimating the citations that a corpus of publications might obtain at any point in its lifetime.