The Need for Alternatives to Impact Factors
Impact factors are prevalent because they are well established, widely applicable, and based on a clear formula. However, they have notable limitations: impact factors often do not account for the field of study, field-specific citation patterns, or the length of journals. As a result, experts argue that these numbers should not be used as indicators for funding decisions or individual success. Consequently, several alternatives have been developed.
Researchers and committees often use citation indicators to measure scientific success. Rankings help experts analyze how often a journal or an article is likely to be used and predict its influence. Citation indicators are divided into three groups: ratio-based, portfolio-based, and network-based indicators (“Citation Performance Indicators – A Very Short Introduction,” 2017). This classification reflects the algorithms the indexes follow. Ratio-based indicators, for instance, employ the following formula: the number of citations divided by the number of documents in a journal. The most popular indicator, the impact factor, is an example of a ratio-based indicator. Portfolio-based indicators, on the other hand, assess a ranked set of research documents and calculate a score based on those publications. Network-based indicators, the third type, aim to analyze scientific success within a broader citation network.
Note that regardless of the type, any good indicator should be reliable, consistent, and transparent. Above all, metrics should not be the sole basis for scientific assessment, funding decisions, or job applications.
Impact factors (IF) are the most popular ratio-based indicators, and the most popular citation indexes in general. They were designed as bibliometric indicators to help librarians track literature and decide which journals to purchase (“Impact factors: arbiter of excellence?” 2003). An impact factor shows how frequently an average article in a given journal has been cited within the past few years. Like other ratio-based indicators, impact factors are calculated with a simple ratio: the number of citations in a given year to items published in the previous two years, divided by the number of articles published in those two years. Indexes can be found online; in addition, Clarivate, previously known as Thomson Reuters, publishes a report every year (“Citation Performance Indicators – A Very Short Introduction,” 2017).
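To make the ratio concrete, the sketch below computes a two-year impact factor from citation and article counts. All figures are invented for illustration; real calculations rely on curated databases such as Web of Science.

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Two-year impact factor: citations received in the current year
    by items published in the previous two years, divided by the number
    of articles published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 150 citations in the current year to articles
# from the previous two years, during which 60 articles were published.
print(round(impact_factor(150, 60), 2))  # 2.5
```

The same ratio, widened to a three-year window, underlies CiteScore and impact per publication, which are discussed below.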
Note that another variation is the impact factor calculated over a five-year window. It is more suitable in fields where the citation life cycle is long. The social sciences, for instance, can benefit from this longer window. A clear example is psychology, where the publication process is usually slower than in other areas of research (“What is considered a good impact factor?” 2017).
CiteScore is another ratio-based indicator and is similar to the impact factor. It is calculated by dividing the number of citations by the number of publications in a given journal; the main difference is that it covers pieces published in the past three years. Apart from the wider window, another advantage is that it draws on the broader Scopus database. Scopus contains thousands of articles, including international publications and online sources, and it allows free access. The score is produced by Elsevier, and experts update the indexes every month (“Citation Performance Indicators – A Very Short Introduction,” 2017). However, these indicators are not perfect either: CiteScore, for instance, can be biased by items such as front matter and news pieces.
Impact per publication (IPP) is another ratio-based indicator. It is also calculated over a three-year window and relies on the Scopus database. Note that the index was previously known as raw impact per publication (“Indicators,” 2017). One advantage of this metric is its clarity: impact per publication counts only papers classified as articles, reviews, or conference publications. However, defining the type of publication can be tricky and can lead to misleading data.
Source-normalized impact per paper (SNIP) is also worth noting. It is similar to impact per publication; however, it accounts for differences between fields of work (“SNIP (Source Normalized Impact per Paper),” 2018). This normalization gives experts the chance to compare journals across scientific fields. The assumption is that a highly cited source is more likely to have a high impact per paper; citation potential, however, varies between fields, and it is not surprising that the life sciences and emerging topics have a higher potential.
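The official SNIP methodology is considerably more involved, but the core normalization idea can be sketched as raw impact per paper divided by the field's citation potential relative to the database average. The snippet below is only an illustration of that idea; all numbers and field labels are invented.

```python
def snip_sketch(raw_impact_per_paper, relative_citation_potential):
    """Simplified SNIP-style normalization: divide the raw impact per
    paper by the field's citation potential relative to the database
    average (1.0 = average field)."""
    return raw_impact_per_paper / relative_citation_potential

# Two hypothetical journals with the same raw impact per paper:
# a life-sciences journal in a high-potential field, and a mathematics
# journal in a low-potential field.
print(round(snip_sketch(4.0, 2.0), 2))  # life sciences: 2.0
print(round(snip_sketch(4.0, 0.8), 2))  # mathematics: 5.0
```

After normalization, the mathematics journal scores higher despite identical raw impact, which is exactly the cross-field comparison SNIP is designed to enable.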
One of the most popular scores is the h-index, a fundamental portfolio-based indicator first defined by the physicist Jorge Hirsch. The h-index measures the performance of an individual researcher: an author with an h-index of h has published h papers that have each been cited at least h times (“Google Scholar Metrics”). Its main advantage is that it focuses on individual work and career. Like any other indicator, however, the h-index is prone to bias. For example, scores tend to increase with age and productivity, and the index can be inflated by self-citation, which is a controversial practice.
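The definition above translates directly into a short computation: sort an author's citation counts in descending order and find the largest h for which the h-th paper still has at least h citations. The citation counts below are invented for illustration.

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers,
    each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # this paper still meets the threshold
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

Here the fourth-ranked paper has 4 citations (at least 4), while the fifth has only 3, so the index is 4.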
Another variation is the h5-index, which includes only articles published in the previous five years. The h5-index is powered by Google Scholar and can help experts organize titles chronologically.
Eigenfactors are the most popular network-based indexes (“Eigenfactor Score”). They are complex indicators that are extremely useful for measuring a journal's influence within the whole citation network, using a five-year window. A disadvantage, however, is that