Welcome to Red Jasper's Center for Journal Ranking (CJR). The center provides a brand-new paradigm for ranking more than 7,000 journals across all disciplines.

Introduction: The Importance of Journal Ranking

Academic journals contribute immensely to the validation and dissemination of breakthrough knowledge. For researchers, the recognition of a quality journal stems from the ranking indexes currently available in their disciplines. These rankings guide researchers in submitting their work to journals they feel are commensurate with their standards.

Further, the wide circulation of journal rankings plays a major role in tenure and promotion evaluations in academia. It is difficult for evaluators to objectively gauge the quality of a candidate's work, since most, if not all, committee members are likely to hail from different disciplines; consequently, evaluators turn to the ranking of journals in a candidate's discipline as a surrogate measure.

Oddly, given the significance of accurate and unbiased journal rankings, surprisingly little scientific rigor has gone into producing them. Thus far, attempts to appraise the quality of journals have yielded numerous efforts to measure their influence. However, the available approaches rely either on the Science Citation Index (SCI), where raw citation counts serve as a broad proxy for influence, or on subjective means such as surveys.


Present Ranking Endeavors

  1. Using the SCI impact factor to evaluate the quality of journals has not been well received by academics, as it suffers from several inherent problems. The crux of the issue is that the impact factor assigns equal weight to every citation. In reality, not all citations are the same:
    • A citation from a paper in a top journal may far outweigh many citations from papers published in unremarkable journals, yet the SCI impact factor ignores this distinction (a numeric sketch follows this list).

    • The SCI impact factor also fails to apportion enough credit to indirect citations. For instance, a notable theorem may first appear in a top theoretical mathematics journal. That result is then likely to be extended into a number of results in applied journals, and further applied in domains such as chemical engineering, electrical engineering, and computer science. Consequently, many more citations gravitate towards the applied journals rather than the original paper. To truly evaluate impact, these indirect contributions must be taken into account; the SCI impact factor dilutes the credit due to such original contributions (the sketch in the Ranking Paradigm section below shows how indirect influence can be propagated).

  2. Wary of relying on the SCI impact factor, academics have continued to use surveys to rank journals. While most experts can, at the drop of a hat, name the top few journals in their respective fields, they find it extremely hard to evaluate journals accurately further down the list, partly for lack of accurate information on the citation relationships among journals. Moreover, to assess every journal on a list accurately and without bias, an expert would effectively need firsthand experience with each one, such as having submitted a paper to it.

    As with any subjective effort, surveys become grossly inaccurate when domain experts are asked to rank 20 or 30 journals in an area, and a typical evaluation list can contain as many as 100 journals, making surveying a futile exercise. Furthermore, area experts may carry preconceived biases shaped by their own experiences with, and publications in, particular journals.
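
To make the weighting problem concrete, the following minimal Python sketch contrasts the raw counting the impact factor effectively performs with a count that weights each citation by the influence of the citing journal. The journal labels and influence scores here are hypothetical, chosen purely for illustration:

    # Hypothetical: each citation is tagged with the influence score of
    # the journal it comes from (all scores here are made up).
    citations_to_A = [0.9]                 # one citation, from a top journal
    citations_to_B = [0.1, 0.1, 0.1, 0.1]  # four citations, from weak journals

    # Impact-factor-style counting: every citation weighs 1.
    raw_A, raw_B = len(citations_to_A), len(citations_to_B)

    # Weighted counting: a citation carries its source's influence.
    weighted_A, weighted_B = sum(citations_to_A), sum(citations_to_B)

    print(raw_A, raw_B)            # 1 4          -> B looks stronger
    print(weighted_A, weighted_B)  # ~0.9 vs ~0.4 -> A is stronger

Of course, the citing journals' influence scores are themselves unknown at the outset; resolving that circularity is precisely what the PageRank-style methods sketched in the next section do.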


Ranking Paradigm

A number of models have been derived to rank papers or, more notably, websites; for example, Google's PageRank method uses an extended version of Pinski and Narin's invariant method.
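
To illustrate how such a model propagates influence through both direct and indirect citations, here is a minimal power-iteration sketch of a PageRank-style computation on a toy journal citation graph. The graph, the journal labels, and the damping factor are hypothetical, not CJR's actual model or parameters:

    # Toy citation graph: M[i][j] is the fraction of journal j's outgoing
    # citations that point to journal i (each column sums to 1).
    # Journals (hypothetical): 0 = Theory, 1 = Applied, 2 = Engineering.
    M = [
        [0.0, 0.5, 0.3],  # Theory is cited by Applied and Engineering
        [0.5, 0.0, 0.7],  # Applied is cited by Theory and Engineering
        [0.5, 0.5, 0.0],  # Engineering is cited by Theory and Applied
    ]
    n = len(M)
    d = 0.85                 # damping factor, a common default
    rank = [1.0 / n] * n     # start from a uniform distribution

    for _ in range(100):     # power iteration; converges for this graph
        rank = [(1 - d) / n + d * sum(M[i][j] * rank[j] for j in range(n))
                for i in range(n)]

    print(rank)  # influence scores; citations from influential journals count more

Because each journal's score depends on the scores of the journals citing it, credit flows back along citation chains: the theory journal receives influence from the applied journals that built on its results, which flat citation counting would miss.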

A body of research literature has since been developed to justify the validity of these methods by proposing axioms and proving that the methods satisfy them.

Our work extends existing work in several important dimensions. First, we strengthen the graph-theoretic properties and axiomatic analysis of the PageRank model for computing journal influence. We also extend the numeric axiom analysis of Palacios-Huerta and Volij and show that every journal influence ranking system that weakly satisfies these numeric axioms produces the same ranking order as the PageRank algorithm.

We further propose two new indicators of journal quality: the journal influence index and the paper influence index.
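
The definitions of these indicators are not spelled out here, so the following sketch is only one plausible reading, offered as an assumption for illustration: take the journal influence index to be the PageRank-style score from the sketch above, and the paper influence index to be that score normalized by the journal's article count, so that large journals do not dominate merely by publication volume:

    # Hypothetical formulations; the actual CJR definitions may differ.
    def journal_influence(rank_score):
        # Assumed: the journal's PageRank-style influence score as-is.
        return rank_score

    def paper_influence(rank_score, num_articles):
        # Assumed: influence per published article, so a small journal
        # whose papers are individually influential is not penalized.
        return rank_score / num_articles

    print(paper_influence(0.40, 200))  # 0.002 -> large, diffuse journal
    print(paper_influence(0.10, 20))   # 0.005 -> small but dense journal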

Most importantly, we believe that no single model will fit all needs. We understand the importance of universal access and broad-based consultation with all academics. As such, the Center for Journal Ranking has been established to allow all users to explore various scenarios and parameters for ranking journals. We will use the CJR as a platform for broad-based consultation to continually refine the journal ranking model.

Journal ranking is critically important to research and to academia, so doing it right is worthwhile. The establishment of the CJR is our commitment to this endeavor.