Citation Metrics (Measurement)

Journal Metrics: SNIP & SJR

SNIP and SJR are provided by Scopus. The definitions are as follows:

  • SNIP: Source-Normalized Impact per Paper
  • SJR: SCImago Journal Rank

SCImago:

  • SCImago Journal Rank (SJR indicator) is a measure of scientific influence of scholarly journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals where such citations come from.

QS World University Rankings: based on Scopus & Web of Science

From Wikipedia, the free encyclopedia

The QS World University Rankings is a ranking of the world’s top 500 universities by Quacquarelli Symonds, published annually since 2004.

The QS rankings were originally published in collaboration with Times Higher Education from 2004 to 2009 as the Times Higher Education-QS World University Rankings. In 2010, Times Higher Education and QS ended their collaboration. QS assumed sole publication of the existing methodology, while Times Higher Education created a new ranking methodology, published as the Times Higher Education World University Rankings.


§ 1 History

§ 2 Method

§ 2.1 Academic peer review (40%)

§ 2.2 Recruiter review (10%)

§ 2.3 Faculty student ratio (20%)

§ 2.4 Citations per faculty (20%)

§ 2.5 International orientation (10%)

§ 2.6 Data sources

§ 2.7 Aggregation

§ 2.8 Classifications

§ 2.9 Results

§ 2.10 Faculty-level analysis

§ 3 Effects

§ 4 2010 rankings

§ 5 Top 20 in the QS World University Rankings

§ 6 Commentary

§ 6.1 Criticism

§ 7 QS Asian University Rankings

§ 8 QS World University Rankings by Subject

§ 9 References

§ 10 External links



The need for an international ranking of universities was highlighted in December 2003 in Richard Lambert’s review of university-industry collaboration in Britain for HM Treasury, the finance ministry of the United Kingdom. Amongst its recommendations were world university rankings, which Lambert said would help the UK to gauge the global standing of its universities.

The idea for the rankings was credited in Ben Wildavsky’s book, The Great Brain Race: How Global Universities are Reshaping the World, to then-editor of Times Higher Education, John O’Leary. Times Higher Education chose to partner with educational and careers advice company Quacquarelli Symonds (QS) to supply the data, appointing Martin Ince, formerly deputy editor and later a contractor to THE, to manage the project.

Between 2004 and 2009, Quacquarelli Symonds (QS) produced the rankings in partnership with Times Higher Education (THE). In 2009, THE announced they would produce their own rankings, the Times Higher Education World University Rankings, in partnership with Thomson Reuters. After criticism from universities, THE cited a weakness in the methodology of the original rankings, as well as a perceived favoritism in the existing methodology for science over the humanities, as key reasons for the decision to split with QS.

QS retained the intellectual property in the Rankings and the methodology used to compile them and continues to produce the rankings, now called the QS World University Rankings. THE created a new methodology, first published independently as the Times Higher Education World University Rankings in September 2010.

QS publishes the results of the original methodology in key media around the world, including US News & World Report in the USA, Chosun Ilbo in Korea, Nouvel Observateur in France, and The Sunday Times in the UK. The first edition of the rankings produced by QS alone, using the original methodology, was released on September 8, 2010.


QS designed its rankings to reflect a broad range of university activity. Six indicators are used.

Academic peer review (40%)

The most controversial part of the QS World University Rankings is their use of an opinion survey referred to as the Academic Peer Review. Using a combination of purchased mailing lists and applications and suggestions, this survey asks active academics across the world about the top universities in fields they know about. QS has published the job titles and geographical distribution of the participants.

The 2010 rankings included results from 15,050 people in its Academic Peer Review, including votes from the previous two years rolled forward provided there was no more recent information available from the same individual. Participants can nominate up to 30 universities but are not able to vote for their own. They tend to nominate a median of about 20, which means that over 170,000 data points make up this survey.

In 2004 when the rankings first appeared, academic peer review accounted for half of a university’s possible score. In 2005, its share was cut to 40 per cent because of the introduction of the Recruiter Review.

Recruiter review (10%)

This part of the ranking is obtained by a similar method to the Academic Peer Review, except that it samples recruiters who hire graduates on a global scale. The numbers are smaller – 5007 responses in 2010 – and are used to produce 10 per cent of any university’s possible score. This survey was introduced on the assumption that employers can accurately track graduate quality.

Faculty student ratio (20%)

The two survey indicators above account for 50 per cent of a university’s possible score in the rankings. A further 20 per cent comes from a university’s ratio of faculty to students. This indicator attempts to measure teaching commitment, but QS has admitted it is less than satisfactory.

Citations per faculty (20%)

Citations of published research are among the most widely used inputs to national and global university rankings. The QS World University Rankings used citations data from Thomson (now Thomson Reuters) from 2004 to 2007; since then it has used data from Scopus, part of Elsevier. The total number of citations for a five-year period is divided by the number of academic staff in a university to yield the score for this measure, which accounts for 20 per cent of a university’s possible score in the Rankings.
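As a sketch, the calculation described above is a single division; the figures below are hypothetical and not drawn from any actual QS data:

```python
def citations_per_faculty(total_citations_5yr, faculty_count):
    """Five-year citation total divided by academic staff headcount."""
    return total_citations_5yr / faculty_count

# Hypothetical institution: 120,000 citations over five years, 3,000 staff.
print(citations_per_faculty(120_000, 3_000))  # → 40.0
```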

QS has explained that it uses this approach, rather than the citations per paper preferred for other systems, because it reduces the effect of biomedical science on the overall picture – bio-medicine has a ferocious “publish or perish” culture. Instead it attempts to measure the density of research-active staff at each institution. But issues still remain about the use of citations in ranking systems, especially the fact that the arts and humanities generate comparatively few citations.

QS has made some data collection errors regarding citations per faculty.

One interesting issue is the difference between the Scopus and Thomson Reuters databases. For major world universities, the two systems capture more or less the same publications and citations. For less mainstream institutions, Scopus has more non-English language and smaller-circulation journals in its database, but as the papers there are less heavily cited, this can also mean fewer citations per paper on average.

International orientation (10%)

The final ten per cent of a university’s possible score is derived from measures intended to capture its internationalism: 5 per cent from its percentage of international students, and another 5 per cent from its percentage of international staff.

Data sources

The information used to compile the World University Ranking comes partly from the online surveys carried out by QS, partly from Scopus, and partly from an annual information-gathering exercise carried out by QS itself. QS collects data from universities directly and from their web sites and publications, and from national bodies such as education ministries and the National Center for Education Statistics in the US and the Higher Education Statistics Agency in the UK.


The data for each indicator is converted into a Z-score, an indicator of how far removed any institution is from the average. Between 2004 and 2007 a different system was used, whereby the top university on any measure was scaled to 100 and the others received a score reflecting their comparative performance. According to QS, this method was dropped because it gave too much weight to exceptional outliers, such as the very high faculty/student ratio of the California Institute of Technology.
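A minimal sketch of the Z-score calculation, using made-up indicator values; QS’s actual normalization details are not published here:

```python
from statistics import mean, stdev

def z_scores(values):
    """Z-score: how many standard deviations each value lies from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical faculty/student indicator values for five institutions;
# the last is an extreme outlier of the Caltech kind.
print([round(z, 2) for z in z_scores([7.5, 8.1, 8.4, 9.0, 18.0])])
```

Under the pre-2007 method the outlier would be scaled to 100 and compress every other score toward the bottom of the range; with Z-scores the rest of the field stays differentiated.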


In 2009, a column of classifications was introduced to provide additional context to the rankings tables. Universities are classified by size, defined by the size of the student body; comprehensive or specialist status, defined by the range of faculty areas in which programs are offered; and research activity, defined by the number of papers published in a five-year period.


QS makes the point that its rankings are intended to assess large, general research institutions, not specialist ones. To be included in the QS World University Rankings, institutions must teach in at least two of the five main areas of academic life (the social sciences, the arts and humanities, biomedicine, engineering and the physical sciences), and must teach undergraduates.

Faculty-level analysis

QS also publishes a simple analysis of the top 100 institutions in each of the five faculty-level areas mentioned above: natural sciences, technology, biology and medicine, social sciences and the arts and humanities. These five tables list universities in order of their Academic Peer Review score. They also give the citations per paper for each institution.

QS does not aggregate these scores and has said that doing so would not produce a meaningful result. It uses citations per paper rather than per person partly because it does not hold details of the academic staff in each subject area, and partly because the number of citations per paper should be a consistent indicator of impact within a specific field.


Rankings are widely read by students and academics. Some universities have a target to be well-placed in the rankings. In July 2010, Queen’s University Belfast (UK) was advertising with the slogan “Destination Global Top 100.” QS continues to produce the rankings while Times Higher Education publishes a new ranking and methodology, the Times Higher Education World University Rankings, in collaboration with Thomson Reuters. QS has formed an international advisory board for the Rankings, convened by Martin Ince, and with members in Europe, Asia, Africa and North and South America.

2010 rankings

The 2010 QS World University Rankings showed the University of Cambridge ranked first in the world. Harvard University fell to second place, ahead of Yale University in third. University College London remained in fourth place, ahead of Oxford and Imperial College; Massachusetts Institute of Technology ranked fifth. Overall, there was a strong performance by technology universities in the 2010 rankings.

The 2010 QS World University Rankings also showed that the US has by far the most universities in the top 100, although its total of 53 is down from the previous year. UC Berkeley experienced the biggest rise in the top 30, jumping 11 places to 28th, just behind the University of Bristol. Overall there were 13 US universities in the top 20. 33 countries (including the Hong Kong special administrative region of China) had at least one university in the top 200. The UK had 30, and ETH Zurich at 18th was the top institution not working mainly in English.


Several universities in the UK and the Asia-Pacific region have commented positively on the rankings. Vice-Chancellor of New Zealand’s Massey University, Professor Judith Kinnear, says that the Times Higher Education-QS ranking is a “wonderful external acknowledgement of several University attributes, including the quality of its research, research training, teaching and employability.” She says the rankings are a true measure of a university’s ability to fly high internationally: “The Times Higher Education ranking provides a rather more sophisticated, robust and well rounded measure of international and national ranking than either New Zealand’s Performance Based Research Fund (PBRF) measure or the Shanghai rankings.”

Vice-Chancellor of the University of Wollongong in Australia, Professor Gerard Sutton, said the ranking was a testament to a university’s standing in the international community, identifying… “an elite group of world-class universities.”

Martin Ince, chair of the Advisory Board for the Rankings, points out that their volatility has been reduced since 2007 by the introduction of the Z-score calculation method and that over time, the quality of QS’s data gathering has improved to reduce anomalies. In addition, the academic review is now so big that even modestly ranked universities receive a statistically valid number of votes.


The THE-QS World University Rankings have been criticised by many for placing too much emphasis on peer review, which receives 40 per cent of the overall score. Some people have expressed concern about the manner in which the peer review has been carried out. In a report, Peter Wills from the University of Auckland, New Zealand wrote of the Times Higher Education-QS World University Rankings:

But we note also that this survey establishes its rankings by appealing to university staff, even offering financial enticements to participate (see Appendix II). Staff are likely to feel it is in their greatest interest to rank their own institution more highly than others. This means the results of the survey and any apparent change in ranking are highly questionable, and that a high ranking has no real intrinsic value in any case. We are vehemently opposed to the evaluation of the University according to the outcome of such PR competitions.

Quacquarelli Symonds has been faulted for some data collection errors. Between 2006 and 2007 Washington University in St. Louis fell from 48th to 161st because QS confused it with the University of Washington in Seattle. QS committed a similar error when collecting data for Fortune Magazine, confusing the University of North Carolina’s Kenan-Flagler business school with one from North Carolina Central University.

Some errors have also been reported in the faculty-student ratio used in the ranking. At the 16th Annual New Zealand International Education Conference held at Christchurch, New Zealand in August 2007, Simon Marginson presented a paper that outlines the fundamental flaws underlying the Times Higher Education-QS World University Rankings. A similar article by the same author appeared in The Australian newspaper in December 2006. Some of the points mentioned include:

Half of the THES index comprises existing reputation: 40 per cent from a reputational survey of academics (‘peer review’), and another 10 per cent determined by a survey of ‘global employers’. The THES index is too easily open to manipulation, as it is not specified who is surveyed or what questions are asked. By changing the recipients of the surveys, or the way the survey results are factored in, the results can be shifted markedly.

  1. The pool of responses is heavily weighted in favour of academic ‘peers’ from nations where The Times is well-known, such as the UK, Australia, New Zealand, Malaysia and so on.
  2. It’s good when people say nice things about you, but it is better when those things are true. It is hard to resist the temptation to use the THES rankings in institutional marketing, but it would be a serious strategic error to assume that they are soundly based.
  3. Results have been highly volatile. There have been many sharp rises and falls, especially in the second half of the THES top 200 where small differences in metrics can generate large rankings effects. Fudan in China has oscillated between 72 and 195, RMIT in Australia between 55 and 146. In the US, Emory has risen from 173 to 56 and Purdue fell from 59 to 127.

Although THES-QS introduced several changes in methodology in 2007 aimed at addressing some of these criticisms, the ranking has continued to attract criticism. In an article in the peer-reviewed BMC Medicine authored by several scientists from the US and Greece, it was pointed out:

If properly performed, most scientists would consider peer review to have very good construct validity; many may even consider it the gold standard for appraising excellence. However, even peers need some standardized input data to peer review. The Times simply asks each expert to list the 30 universities they regard as top institutions of their area without offering input data on any performance indicators. Research products may occasionally be more visible to outsiders, but it is unlikely that any expert possesses a global view of the inner workings of teaching at institutions worldwide. Moreover, the expert selection process of The Times is entirely unclear. The survey response rate among the selected experts was only <1% in 2006 (1,600 of 190,000 contacted). In the absence of any guarantee for protection from selection biases, measurement validity can be very problematic.

Alex Usher, vice president of Higher Education Strategy Associates in Canada, commented:

Most people in the rankings business think that the main problem with The Times is the opaque way it constructs its sample for its reputational rankings – a not-unimportant question given that reputation makes up 50% of the sample. Moreover, this year’s switch from using raw reputation scores to using normalized Z-scores has really shaken things up at the top-end of the rankings by reducing the advantage held by really top universities - University of British Columbia (UBC) for instance, is now functionally equivalent to Harvard in the Peer Review score, which, no disrespect to UBC, is ludicrous. I’ll be honest and say that at the moment the THES Rankings are an inferior product to the Shanghai Jiao Tong’s Academic Ranking of World Universities.

Academics have also been critical of the use of the citation database, arguing that it undervalues institutions who excel in the social sciences. Ian Diamond, former chief executive of the Economic and Social Research Council and now vice-chancellor of the University of Aberdeen wrote to Times Higher Education in 2007, saying:

The use of a citation database must have an impact because such databases do not have as wide a cover of the social sciences (or arts and humanities) as the natural sciences. Hence the low position of the London School of Economics, caused primarily by its citations score, is a result not of the output of an outstanding institution but the database and the fact that the LSE does not have the counterweight of a large natural science base.

Criticism of the Times Higher Education-QS league tables also came from Andrew Oswald, professor of economics at University of Warwick:

This put Oxford and Cambridge at equal second in the world. Lower down, at around the bottom of the world top-10, came University College London, above MIT. A university with the name of Stanford appeared at number 19 in the world. The University of California at Berkeley was equal to Edinburgh at 22 in the world. Such claims do us a disservice. The organisations who promote such ideas should be unhappy themselves, and so should any supine UK universities who endorse results they view as untruthful. Using these league table results on your websites, universities, if in private you deride the quality of the findings, is unprincipled and will ultimately be destructive of yourselves, because if you are not in the truth business what business are you in, exactly? Worse, this kind of material incorrectly reassures the UK government that our universities are international powerhouses. Let us instead, a bit more coolly, do what people in universities are paid to do. Let us use reliable data to try to discern the truth. In the last 20 years, Oxford has won no Nobel Prizes. (Nor has Warwick.) Cambridge has done only slightly better. Stanford University in the United States, purportedly number 19 in the world, garnered three times as many Nobel Prizes over the past two decades as the universities of Oxford and Cambridge did combined.

The most recent criticism of the old system came from Fred L. Bookstein, Horst Seidler, Martin Fieder and Georg Winckler in the journal Scientometrics, for the unreliability of QS’s methods:

Several individual indicators from the Times Higher Education Survey (THES) data base (the overall score, the reported staff-to-student ratio, and the peer ratings) demonstrate unacceptably high fluctuation from year to year. The inappropriateness of the summary tabulations for assessing the majority of the “top 200” universities would be apparent purely for reason of this obvious statistical instability, regardless of other grounds of criticism. There are far too many anomalies in the change scores of the various indices for them to be of use in the course of university management.

QS Asian University Rankings

In 2009, Quacquarelli Symonds (QS) launched the QS Asian University Rankings in partnership with The Chosun Ilbo newspaper in Korea. It ranks the top 200 Asian universities and has now appeared twice. The University of Hong Kong was top in 2010, and the Hong Kong University of Science & Technology topped the 2011 edition.

QS World University Rankings by Subject

In 2011, QS began ranking universities around the world by subject. The rankings are based on citations, academic peer review, and recruiter review, with the weightings for each dependent upon the culture and practice of the subject concerned. They are published in five “clusters”: engineering; biomedicine; the natural sciences; the social sciences; and the arts and humanities.




Citation Metrics: by Publish or Perish

Citation metrics

Publish or Perish calculates the following citation metrics:

  • Total number of papers
  • Total number of citations
  • Average number of citations per paper
  • Average number of citations per author
  • Average number of papers per author


  • Hirsch’s h-index and related parameters, shown as h-index and Hirsch a=y.yy, m=z.zz in the output; also Zhang’s e-index
  • Egghe’s g-index, shown as g-index in the output
  • The contemporary h-index, shown as hc-index and ac=y.yy in the output
  • Three variations of the individual h-index, shown as hI-index, hI,norm, and hm-index in the output
  • The age-weighted citation rate
  • An analysis of the number of authors per paper

Please note that these metrics are only as good as their input. We recommend that you consult the following topics for information about the limitations of the citation metrics and the underlying sources that Publish or Perish uses:



The h-index was proposed by J.E. Hirsch in his paper An index to quantify an individual’s scientific research output, arXiv:physics/0508025 v5, 29 Sep 2005. It is defined as follows:

A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each.

It aims to measure the cumulative impact of a researcher’s output by looking at the number of citations his/her work has received. Publish or Perish calculates and displays the h-index proper, its associated proportionality constant a (from Nc,tot = ah²), and the rate parameter m (from h ~ mn, where n is the number of years since the first publication).
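A minimal Python sketch of Hirsch’s definition, using an invented citation list; this is illustrative only, not Publish or Perish’s own implementation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

# Hypothetical citation counts for one researcher's papers.
papers = [25, 18, 12, 7, 6, 3, 1]
h = h_index(papers)         # 5 papers have at least 5 citations each
a = sum(papers) / h ** 2    # Hirsch's a, from Nc,tot = a * h^2
print(h, a)  # → 5 2.88
```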

The properties of the h-index have been analyzed in various papers; see for example Leo Egghe and Ronald Rousseau: An informetric model for the Hirsch-index, Scientometrics, Vol. 69, No. 1 (2006), pp. 121-129.

Publish or Perish also calculates the e-index as proposed by Chun-Ting Zhang in his paper The e-index, complementing the h-index for excess citations, PLoS ONE, Vol 4, Issue 5 (May 2009), e5429. The e-index is the square root of the surplus of citations in the h-set beyond h², i.e., beyond the theoretical minimum required to obtain an h-index of ‘h’. The aim of the e-index is to differentiate between scientists with similar h-indices but different citation patterns.
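A sketch of Zhang’s definition with a small hypothetical citation list (the h-index of this set is 5); again illustrative, not the actual Publish or Perish code:

```python
import math

def e_index(citations):
    """Square root of the h-core's citation surplus beyond h^2."""
    ranked = sorted(citations, reverse=True)
    # For a descending list, c >= rank holds exactly for the first h ranks.
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    surplus = sum(ranked[:h]) - h ** 2
    return math.sqrt(surplus)

papers = [25, 18, 12, 7, 6, 3, 1]   # h-index of this set is 5
print(round(e_index(papers), 2))    # → 6.56
```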

These metrics are shown as h-index, Hirsch a=y.yy, m=z.zz, and e-index in the output.


The g-index was proposed by Leo Egghe in his paper Theory and practice of the g-index, Scientometrics, Vol. 69, No 1 (2006), pp. 131-152. It is defined as follows:

[Given a set of articles] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations.

It aims to improve on the h-index by giving more weight to highly-cited articles.

This metric is shown as g-index in the output.

Contemporary h-index

The Contemporary h-index was proposed by Antonis Sidiropoulos, Dimitrios Katsaros, and Yannis Manolopoulos in their paper Generalized h-index for disclosing latent facts in citation networks, arXiv:cs.DL/0607066 v1, 13 Jul 2006.

It adds an age-related weighting to each cited article, giving (by default; this depends on the parametrization) less weight to older articles. The weighting is parametrized; the Publish or Perish implementation uses gamma=4 and delta=1, as the authors did for their experiments. This means that the citations of an article published during the current year count four times; those of an article published 4 years ago count once; those of an article published 6 years ago count 4/6 times, and so on.
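A sketch matching the arithmetic described above (current-year citations count four times, four-year-old ones once, six-year-old ones 4/6). The exact age convention for current-year papers is an assumption here, not taken from the Publish or Perish source:

```python
def hc_index(citations, pub_years, current_year, gamma=4, delta=1):
    """Contemporary h-index: weight each paper's citations by
    gamma / age**delta (current-year papers are given age 1, so their
    citations count gamma times), then apply the usual h-procedure."""
    weights = sorted(
        (gamma * c / max(current_year - y, 1) ** delta
         for c, y in zip(citations, pub_years)),
        reverse=True,
    )
    hc = 0
    for rank, w in enumerate(weights, start=1):
        if w >= rank:
            hc = rank
    return hc

# Three hypothetical papers, 10 citations each, of increasing age:
# weighted counts are 40, 20, and 6.67, so hc = 3.
print(hc_index([10, 10, 10], [2010, 2008, 2004], current_year=2010))  # → 3
```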

This metric is shown as hc-index and ac=y.yy in the output.

Individual h-index (3 variations)

The Individual h-index was proposed by Pablo D. Batista, Monica G. Campiteli, Osame Kinouchi, and Alexandre S. Martinez in their paper Is it possible to compare researchers with different scientific interests?, Scientometrics, Vol 68, No. 1 (2006), pp. 179-189.

It divides the standard h-index by the average number of authors in the articles that contribute to the h-index, in order to reduce the effects of co-authorship; the resulting index is called hI.

Publish or Perish also implements an alternative individual h-index, hI,norm, that takes a different approach: instead of dividing the total h-index, it first normalizes the number of citations for each paper by dividing the number of citations by the number of authors for that paper, then calculates hI,norm as the h-index of the normalized citation counts. This approach is much more fine-grained than Batista et al.’s; we believe that it more accurately accounts for any co-authorship effects that might be present and that it is a better approximation of the per-author impact, which is what the original h-index set out to provide.
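A sketch of the hI,norm approach on hypothetical papers; the data and helper names are invented for illustration:

```python
def h_core(values):
    """Standard h-procedure: largest h such that h values are >= h."""
    h = 0
    for rank, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= rank:
            h = rank
    return h

def hi_norm(citations, authors):
    """hI,norm: divide each paper's citations by its author count,
    then take the h-index of the normalized counts."""
    return h_core([c / a for c, a in zip(citations, authors)])

# Hypothetical papers: the plain h-index is 4, but normalizing for
# co-authorship pulls the individual index down to 2.
print(h_core([20, 12, 8, 4]), hi_norm([20, 12, 8, 4], [2, 1, 4, 4]))  # → 4 2
```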

The third variation is due to Michael Schreiber and first described in his paper To share the fame in a fair way, hm modifies h for multi-authored manuscripts, New Journal of Physics, Vol 10 (2008), 040201-1-8. Schreiber’s method uses fractional paper counts instead of reduced citation counts to account for shared authorship of papers, and then determines the multi-authored hm index based on the resulting effective rank of the papers using undiluted citation counts.
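A sketch of Schreiber’s fractional-rank idea, again on invented data; tie-breaking between equally-cited papers is an implementation choice here:

```python
def hm_index(citations, authors):
    """Schreiber's hm: rank papers by their undiluted citation counts,
    but let each occupy a fractional rank of 1/(author count); hm is the
    largest effective rank r whose paper still has >= r citations."""
    r_eff, hm = 0.0, 0.0
    for c, a in sorted(zip(citations, authors), reverse=True):
        r_eff += 1.0 / a
        if c >= r_eff:
            hm = r_eff
    return hm

# Hypothetical papers with varying author counts.
print(hm_index([20, 12, 8, 4], [2, 1, 4, 4]))  # → 2.0
```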

These metrics are shown as hI-index (Batista et al.’s), hI,norm (PoP’s), and hm-index (Schreiber’s) in the output.

Age-weighted citation rate (AWCR, AWCRpA) and AW-index

The age-weighted citation rate was inspired by Bihui Jin’s note The AR-index: complementing the h-index, ISSI Newsletter, 2007, 3(1), p. 6.

The AWCR measures the number of citations to an entire body of work, adjusted for the age of each individual paper. It is an age-weighted citation rate, where the number of citations to a given paper is divided by the age of that paper. Jin defines the AR-index as the square root of the sum of all age-weighted citation counts over all papers that contribute to the h-index.

However, in the Publish or Perish implementation we sum over all papers instead, because we feel that this represents the impact of the total body of work more accurately. (In particular, it allows younger and as yet less cited papers to contribute to the AWCR, even though they may not yet contribute to the h-index.)

The AW-index is defined as the square root of the AWCR to allow comparison with the h-index; it approximates the h-index if the (average) citation rate remains more or less constant over the years.

The per-author age-weighted citation rate is similar to the plain AWCR, but is normalized to the number of authors for each paper.
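A sketch of the three age-weighted measures described above, on hypothetical papers; treating papers published in the current year as age 1 is an assumption, not taken from the Publish or Perish source:

```python
import math

def awcr(citations, pub_years, current_year):
    """Age-weighted citation rate: each paper's citations divided by its
    age, summed over ALL papers (the Publish or Perish variant)."""
    return sum(c / max(current_year - y, 1)
               for c, y in zip(citations, pub_years))

def aw_index(citations, pub_years, current_year):
    """Square root of the AWCR, for comparison with the h-index."""
    return math.sqrt(awcr(citations, pub_years, current_year))

def awcr_pa(citations, pub_years, authors, current_year):
    """Per-author AWCR: each term further divided by the author count."""
    return sum(c / (max(current_year - y, 1) * a)
               for c, y, a in zip(citations, pub_years, authors))

# Two hypothetical papers: 10 citations at age 1, 6 citations at age 4.
print(awcr([10, 6], [2012, 2009], 2013))  # → 11.5
```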

These metrics are shown as AWCR, AWCRpA, and AW-index in the output.

Last Updated on Thursday, 12 September 2013 07:33