The Need for Alternatives to Impact Factors
Impact factors are prevalent because they are widely understood, have a long track record, and rely on a simple formula. However, they have notable limitations: impact factors ignore variables such as the field of research, its citation patterns, and the size of a journal. As a result, experts argue that these numbers should not be used as the basis for funding decisions or for judging individual success. Consequently, several alternatives have been developed.
Researchers and committees often use citation indicators to measure scientific success. Rankings help experts analyze how often a journal or an article is likely to be used and predict its influence. These indicators fall into three groups: ratio-based, portfolio-based, and network-based (“Citation Performance Indicators – A Very Short Introduction,” 2017). The classification reflects the algorithms the indexes follow. Ratio-based indicators, for instance, employ a simple formula: the number of citations divided by the number of documents in a journal. The most popular indicator, the impact factor, belongs to this group. Portfolio-based indicators, on the other hand, assess a ranked set of research documents and calculate a score from those publications. Network-based indicators, the third type, aim to analyze scientific success within a broader citation network.
Note that regardless of type, any good indicator should be reliable, consistent, and transparent. Above all, metrics should not be the sole basis for scientific assessment, funding decisions, or job applications.
Impact factors (IF) are the most popular ratio-based indicators, and the most popular citation indexes in general. They were designed as bibliometric tools to help librarians track the literature and select new journals (“Impact factors: arbiter of excellence?” 2003). An impact factor shows how frequently an average article in a given journal has been cited in recent years. Like other ratio-based indicators, it is calculated with a simple formula: the number of citations received in a year to items published in the previous two years, divided by the number of articles published in those two years. Indexes can be found online; in addition, Clarivate, previously known as Thomson Reuters, publishes a report every year (“Citation Performance Indicators – A Very Short Introduction,” 2017).
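The two-year calculation can be sketched in a few lines; the journal figures below are invented purely for illustration.

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year impact factor: citations received this year to items
    from the previous two years, divided by the number of citable
    articles published in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 150 citations in 2018 to articles from
# 2016-2017, which together contained 100 citable articles.
print(impact_factor(150, 100))  # 1.5
```

The same division underlies CiteScore and IPP; only the window and the set of counted documents change.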
Note that another variation covers a five-year window. Five-year impact factors suit fields where the citation life cycle is long. The social sciences, for instance, benefit from the longer window; a clear example is psychology, where the publication process is usually slower than in other areas of research (“What is considered a good impact factor?” 2017).
CiteScore is another ratio-based indicator, similar to the impact factor: it is calculated by dividing the number of citations by the number of publications in a given journal. The main difference is that it covers pieces published in the past three years. Apart from the wider window, another advantage is that it draws on the broader Scopus database, which contains thousands of articles, including international publications and online sources, and is freely accessible. The score is produced by Elsevier, and experts update the indexes every month (“Citation Performance Indicators – A Very Short Introduction,” 2017). These indicators are not perfect either: for instance, CiteScore can be biased by content such as front matter and news items.
Impact per publication (IPP) is another ratio-based indicator. It is also assessed over a three-year window, and it also relies on the Scopus database. Note that the index was previously known as raw impact per publication (“Indicators,” 2017). One advantage of this metric is its clarity: the impact per publication counts only papers classified as articles, reviews, or conference publications. However, defining the type of publication can be tricky and can lead to misleading data.
Source-normalized impact per paper (SNIP) is also worth noting. It is similar to impact per publication; however, it accounts for differences between fields of work (“SNIP (Source Normalized Impact per Paper),” 2018), which gives experts the chance to compare journals across scientific fields. The assumption is that a source in a field with high citation potential is more likely to have a high impact per paper. That potential varies between fields, and it is not surprising that the life sciences and emerging topics have a higher potential.
One of the most popular scores is the h-index, a fundamental portfolio-based indicator first defined by the physicist Jorge Hirsch. The h-index can be used to measure the performance of an individual’s work: an author with an index of h has published h papers that have each been cited at least h times (“Google Scholar Metrics”). The main advantage is that the h-index focuses on individual work and career. However, like any other indicator, it is prone to bias. For example, scores tend to increase with age and productivity, and the index can be inflated by self-citation, which is a controversial practice.
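The definition above translates directly into a short routine: sort an author’s citation counts in descending order and find the largest rank h at which the h-th paper still has at least h citations. The citation counts here are made up for illustration.

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```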
Another variation is the h5-index, which covers articles published in the previous five complete years. The h5-index is powered by Google Scholar and can help experts organize titles chronologically.
Eigenfactors are the most popular network-based indexes (“Eigenfactor Score”). They measure the influence of a journal within a whole citation network, over a five-year window. A disadvantage, however, is that the indicator is complicated and not easily replicated. The eigenfactor reflects the total number of citations a journal receives, so journals with more articles and a wider variety of topics get higher scores. In contrast, impact factors measure how many citations an average article in a given journal has received (“Eigenfactor vs. Impact Factor: How are They Different?” 2018).
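The idea behind network-based scores can be illustrated with a simplified sketch: influence is repeatedly redistributed over a journal citation matrix until it stabilizes, so a journal cited by influential journals becomes influential itself. This is only a toy version in the spirit of the Eigenfactor approach; the official algorithm adds further normalization and a teleportation term, and the matrix below is invented.

```python
def network_scores(cite_fractions, iterations=100):
    """Iteratively redistribute influence over a citation network.
    cite_fractions[i][j] is the fraction of journal j's outgoing
    citations that point to journal i (each column sums to 1)."""
    n = len(cite_fractions)
    scores = [1.0 / n] * n  # start with equal influence
    for _ in range(iterations):
        scores = [
            sum(cite_fractions[i][j] * scores[j] for j in range(n))
            for i in range(n)
        ]
    return scores

# Toy network of three journals: journal 2 receives large shares of
# the citations from journals 0 and 1, so it ends up ranked highest.
M = [
    [0.0, 0.2, 0.5],
    [0.7, 0.0, 0.5],
    [0.3, 0.8, 0.0],
]
print(network_scores(M))
```

Because each column sums to 1, the total influence is conserved; the resulting vector is the stationary distribution of the citation matrix, which is why a single highly cited article matters less here than a journal’s position in the whole network.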
SCImago journal rank (SJR) is another major network-based indicator (“Citation Performance Indicators – A Very Short Introduction,” 2017). Like the eigenfactor, it assesses a whole citation network; in contrast to the eigenfactor, though, SJR covers the three years prior to publication and includes self-citations (“Network-based Citation Metrics: Eigenfactor vs. SJR,” 2015). SJR is a powerful alternative to impact factors, and one of its main advantages is that scores are based on the broad Scopus dataset.
The relative citation ratio (RCR), on the other hand, is a field-normalized indicator for articles, based on the NIH’s PubMed database (“Citation Performance Indicators – A Very Short Introduction,” 2017). Factors such as the range of years and the type of articles can be selected, a feature that lets experts define articles by their own citation network (“iCite”). Note that the field of work is defined by the references in the articles cited alongside the publication of interest. At the same time, this feature makes RCR indexes sensitive to external and interdisciplinary citations.
In conclusion, there are various metrics and several alternatives to impact factors, each with its benefits and disadvantages. Different databases and search options are available to researchers, publishers, librarians, and enthusiasts. Metrics give experts a clear way to assess the value of a journal or an individual article. However, although numbers matter, indicators capture only a fraction of scientific success.
Citation Performance Indicators – A Very Short Introduction (2017, May 15). Retrieved from https://scholarlykitchen.sspnet.org/2017/05/15/citation-performance-indicators-short-introduction/
Eigenfactor vs. Impact Factor: How are They Different? (2018, May 16). Retrieved from https://www.enago.com/academy/eigenfactor-vs-impact-factor/
Eigenfactor Score. Retrieved from http://ipscience-help.thomsonreuters.com/incitesLiveJCR/glossaryAZgroup/g6/7791-TRS.html
Frank, M. (2003). Impact factors: arbiter of excellence? Journal of the Medical Library Association, 91 (1).
Google Scholar Metrics. Retrieved from https://scholar.google.com/intl/en/scholar/metrics.html#metrics
Indicators (2017). Retrieved from http://www.journalindicators.com/methodology
Network-based Citation Metrics: Eigenfactor vs. SJR (2015, July 28). Retrieved from https://scholarlykitchen.sspnet.org/2015/07/28/network-based-citation-metrics-eigenfactor-vs-sjr/
SNIP (Source Normalized Impact per Paper) (2018). Retrieved from http://help.elsevier.com/app/answers/detail/a_id/2900/p/8800/related/1
What is considered a good impact factor? (2017, August 08). Retrieved from http://mdanderson.libanswers.com/faq/26159
iCite. Retrieved from https://icite.od.nih.gov/