Analysis: Correlating the PLoS article level metrics

A few months ago, the Public Library of Science (PLoS) made available a spreadsheet with article level metrics. Although others have already analyzed these data (see posts by Mike Chelen), I decided to take a closer look at them myself.

The data set consists of 20 different article level metrics. However, some of these are very sparse and some are partially redundant. I thus decided to filter and merge them to create a reduced set of only 6 metrics (a Python sketch of this step follows the list):

  1. Blog posts. This value is the sum of Blog Coverage – Postgenomic, Blog Coverage – Nature Blogs, and Blog Coverage – Bloglines. A single blog post may obviously be picked up by more than one of these resources and hence be counted multiple times. Being unable to count unique blog posts referring to a publication, I decided to aim for maximal coverage by using the sum rather than relying on any single resource.
  2. Bookmarks. This value is the sum of Social Bookmarking – CiteULike and Social Bookmarking – Connotea. One cannot rule out that a single user bookmarks the same publication in both CiteULike and Connotea, but I would assume that most people use one or the other for bookmarking.
  3. Citations. This value is the sum of Citations – CrossRef, Citations – PubMed Central, and Citations – Scopus. I decided to use the sum to be consistent with the other metrics, but a single citation may obviously be picked up by more than one of these resources.
  4. Downloads. This value is called Combined Usage (HTML + PDF + XML) in the original data set and is the sum of Total HTML Page Views, Total PDF Downloads, and Total XML Downloads. Again the sum is used to be consistent.
  5. Ratings. This value is called Number of Ratings in the original data set. Because of the small number of articles with ratings, notes, and comments, I decided to discard the related values Average Rating, Number of Note threads, Number of replies to Notes, Number of Comment threads, Number of replies to Comments, and Number of ‘Star Ratings’ that also include a text comment.
  6. Trackbacks. This value is called Number of Trackbacks in the original data set. I seriously considered merging this into the blog post metric, but in the end decided against doing so because trackbacks do not necessarily originate from blog posts.
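
For concreteness, here is a minimal sketch in Python/pandas of how such a reduced set could be assembled. The file name and column headers are illustrative placeholders; the actual spreadsheet uses the longer names listed above.

```python
import pandas as pd

# Load the PLoS article level metrics spreadsheet (file name and column
# headers are hypothetical placeholders, not the actual ones).
df = pd.read_csv("plos_alm.csv")

reduced = pd.DataFrame({
    "journal":    df["Journal"],
    "year":       df["Publication Year"],
    # 1. Blog posts: sum of the three blog coverage sources.
    "blog_posts": df["Postgenomic"] + df["Nature Blogs"] + df["Bloglines"],
    # 2. Bookmarks: CiteULike + Connotea.
    "bookmarks":  df["CiteULike"] + df["Connotea"],
    # 3. Citations: CrossRef + PubMed Central + Scopus.
    "citations":  df["CrossRef"] + df["PubMed Central"] + df["Scopus"],
    # 4. Downloads: HTML page views + PDF downloads + XML downloads.
    "downloads":  df["HTML Views"] + df["PDF Downloads"] + df["XML Downloads"],
    # 5. Ratings: taken directly from the original data set.
    "ratings":    df["Number of Ratings"],
    # 6. Trackbacks: taken directly from the original data set.
    "trackbacks": df["Number of Trackbacks"],
})
```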

Calculating all pairwise correlations among these metrics is obviously trivial. However, one has to be careful when interpreting the correlations, as there are at least two major confounding factors. First, it is important to keep in mind that the PLoS article level metrics have been collected across several journals. Some of these are high-impact journals, such as PLoS Biology and PLoS Medicine, whereas others are lower-impact journals, such as PLoS ONE. One would expect papers published in the former two journals to have, on average, higher values for most metrics than papers in the latter. Similarly, papers published in journals with a web-savvy readership, e.g. PLoS Computational Biology, are more likely to receive blog posts and social bookmarks. Second, the age of a paper matters: both downloads and, in particular, citations accumulate over time. To correct for these confounding factors, I constructed a normalized set of article level metrics, in which each metric for a given article was divided by the average for articles published in the same year in the same journal.
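
Continuing the sketch above, the normalization amounts to dividing each metric by the mean over articles from the same journal and publication year (again assuming the hypothetical `reduced` table):

```python
metrics = ["blog_posts", "bookmarks", "citations",
           "downloads", "ratings", "trackbacks"]

# Divide each metric by the journal-year average, so that a value of
# 1.0 means "average for articles from the same journal and year".
normalized = reduced.copy()
normalized[metrics] = (
    reduced.groupby(["journal", "year"])[metrics]
           .transform(lambda col: col / col.mean())
)
```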

I next calculated all pairwise Pearson correlation coefficients among the reduced set of article level metrics. To see the effect of the normalization, I did this for both the raw and the normalized metrics. I visualized the correlation coefficients as a heat map, showing the results for the raw metrics above the diagonal and the results for the normalized metrics below the diagonal.
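
In terms of the sketch above, the two correlation matrices and the combined heat map could be produced along these lines (a sketch, not the exact code behind the figure):

```python
import numpy as np
import matplotlib.pyplot as plt

# Pairwise Pearson correlations for the raw and the normalized metrics.
raw_corr  = reduced[metrics].corr(method="pearson")
norm_corr = normalized[metrics].corr(method="pearson")

# Combine the two: raw above the diagonal, normalized below it.
combined = np.triu(raw_corr.values, k=1) + np.tril(norm_corr.values, k=-1)
np.fill_diagonal(combined, 1.0)

plt.imshow(combined, cmap="RdBu_r", vmin=-1, vmax=1)
plt.xticks(range(len(metrics)), metrics, rotation=45, ha="right")
plt.yticks(range(len(metrics)), metrics)
plt.colorbar(label="Pearson r")
plt.tight_layout()
plt.show()
```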

There are several interesting observations to be made from this figure:

  • Downloads correlate strongly with all the other metrics. This is hardly surprising, but it is reassuring to see that these correlations are not trivially explained by age and journal effects.
  • Bookmarks is, apart from Downloads, the metric that correlates most strongly with Citations. This makes good sense, since CiteULike and Connotea are commonly used as reference managers: if you add a paper to your bibliography database, you will likely cite it at some point.
  • Blog posts and Trackbacks correlate well with Downloads but poorly with Citations. This may reflect that blog posts about research papers are often targeted at a broad audience; if most readers of a blog post are laymen or researchers from other fields, they are unlikely to cite the papers it covers.
  • Ratings correlates fairly poorly with all the other metrics. Combined with the low number of ratings, this makes me wonder whether the option to rate papers on the journal web sites is all that useful.

Finally, I will point out one additional metric that I would very much like to see added in future versions of this data set, namely microblogging. I personally discover many papers through others mentioning them on Twitter or FriendFeed. Because of the much smaller effort involved in microblogging a paper as opposed to writing a full blog post about it, I suspect that the number of tweets linking to a paper would be a very informative metric.

Edit: I made a mistake in the normalization program, which I have now corrected. I have updated the figure and the conclusions to reflect the changes. It should be noted that some comments to this post were made prior to this correction.

6 thoughts on “Analysis: Correlating the PLoS article level metrics”

  1. Kay at Suicyte

    Hi Lars,
    interesting analysis, congratulations!

    There is one thing I don’t get, though. I am not convinced that one aspect of your normalization was such a good idea: dividing a metric by the average value of this metric for a given journal. While it is clearly appropriate to normalize for time effects, normalizing for journal effects is valid only under the assumption that all journals should be equal (citation-wise).

    By doing this, you completely remove effects of the different journal impacts (i.e. the most highly cited paper in a high-impact journal would get the same value as the most highly cited paper in a low-impact journal).

    It is possible that this is exactly what you wanted. In this case, I would not expect the correlations to be too meaningful, or rather, expect them to mean something very different from what people are normally interested in.

  2. Lars Juhl Jensen (post author)

    Thanks :)

    My goal was indeed to completely remove the effect of journal impact. The problem is that by being published in a high-impact journal, a paper automatically gets higher visibility, which translates into more downloads, more citations, more blog posts, etc.

    The result of that is what I call “trivial correlations”. You will see a lot of measures that correlate well simply because all of them are higher for papers published in some journals than in others. These correlations thus cannot be attributed to the individual papers.

    What I am interested in are correlations that hold true for individual papers. If there is really a correlation between the number of social bookmarks for a paper and the number of times it is downloaded, PLoS Biology papers with many bookmarks should be downloaded more than PLoS Biology papers with few bookmarks, and PLoS ONE papers with many bookmarks should be downloaded more than PLoS ONE papers with few bookmarks. If that were not the case, the correlation between bookmarks and downloads would be trivially explained by the journal effect.

    I would argue that these are the meaningful correlations. There are certainly going to be fewer of them than if you include correlations that are trivially explained. However, I expect that the ones that remain are more likely to reflect causal relationships (for example, people bookmark papers that they themselves might want to cite later).
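
    To make this concrete with synthetic numbers (not the actual PLoS data): if two journals differ only in baseline visibility, two metrics that are independent within each journal still show a strong pooled correlation, which disappears after per-journal normalization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two journals whose papers differ only in baseline visibility;
    # within each journal, bookmarks and downloads are independent.
    baseline = np.repeat([1.0, 10.0], 500)
    bookmarks = baseline * rng.lognormal(sigma=0.3, size=1000)
    downloads = baseline * rng.lognormal(sigma=0.3, size=1000)

    pooled = np.corrcoef(bookmarks, downloads)[0, 1]
    within = np.corrcoef(bookmarks / baseline, downloads / baseline)[0, 1]
    print(f"pooled r = {pooled:.2f}, within-journal r = {within:.2f}")
    # The pooled r is high purely because of the journal effect;
    # the within-journal r is close to zero.
    ```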

    Edit: Slightly rephrased to reflect the fact that normalization does not eliminate the correlation between bookmarks and downloads.

  3. jiunit

    Lars, regarding the Open Access debate, I’m not sure how informative this plot is about that.

    Imagine an “is_open_access” binary indicator variable. All the rows in this data set have a value of 1. You’d want a second set of rows which have a value of 0 (i.e. closed access online articles) to really see whether Open Access has an effect on citations.

    Journals that have an “instant open access” fee would be the best places to get this data.

    Note: This comment was made to an earlier version of the blog post.

  4. Lars Juhl Jensen (post author)

    Regarding the Open Access debate, my point was that one should be very careful if concluding that because Open Access papers are downloaded more, they will likely be cited more (which would be an advantage to the author). However, this discussion is now a moot point. After fixing a bug in the normalization script, the correlation between citations and downloads is also found in the normalized data.

  5. Pingback: Data, Data, Data – ScienceOnline2010 « the Undergraduate Science Librarian

  6. Pingback: Post-Publication Review: Does It Add Anything New and Useful? « The Scholarly Kitchen
