When one document refers to another in its text, this is called a citation. The pattern of these citations is most naturally represented as a network in which the nodes are the documents and the links are the citations between them. When documents were physical objects, printed or written on paper, these citations had (almost always) to point back in time to older documents. This arrow of time is imprinted on citation networks and it leads to interesting mathematical properties.
One of the most interesting features of citations is that they have been carefully curated, sometimes for hundreds of years. The data I use on US Supreme Court judgments goes back to the founding of the USA, so citation data is one of the oldest and most continuous ‘big data’ sets available to study.
The reason why records of citations have been maintained so carefully is that they record the process of innovation, be it in patents, law or academic study. When you claim a patent you must by law indicate the prior art: earlier patents with relevant (but presumably less advanced) ideas. When a judge decides a case in the USA, they draw on earlier cases which have interpreted, and so created, the law needed to reach a conclusion in the current case. And of course, academics cannot discuss the whole of existing science when explaining their own new ideas, so they refer back to papers where previous researchers set out a key idea needed in the current work. Citations are therefore a vital part of knowledge transfer: new ideas build on all the work done in earlier judgments, previous patents or older papers. That is why citations have been so carefully recorded. The network formed by documents and their citations shows whose giant shoulders we are standing on when innovations are made, to paraphrase Newton.
From a theoretical point of view there are many interesting features in these networks. If you follow citations from one document to another to another and so on, at each step you always reach an older document, so you can never return to your starting point along such a path. There are no cycles in a citation network: it is an example of a directed acyclic graph. If you look at the number of citations each document receives, that is how many newer documents refer back to it, these counts follow a fat-tailed distribution: a few documents gather most of the citations, while most documents have very few citations each (see Derek de Solla Price’s 1965 paper for an early discussion of this feature). Moreover, if you look at documents published in the same year, you get roughly the same shape for the distribution of citation counts (see Radicchi et al 2008, Evans et al 2012), at least for well-cited documents. Since these networks are of such great interest, many other features have been noted too.
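To make these two properties concrete, here is a minimal sketch assuming the Python networkx library; the toy papers and links are invented for illustration. Acyclicity is a direct check, and citation counts are simply the in-degrees of the nodes.

```python
import networkx as nx

# Toy citation network: an edge (a, b) means document a cites document b,
# so edges always point from newer documents back to older ones.
G = nx.DiGraph()
G.add_edges_from([
    ("paper2003", "paper2001"),
    ("paper2003", "paper2000"),
    ("paper2001", "paper2000"),
    ("paper2005", "paper2003"),
    ("paper2005", "paper2000"),
])

# The arrow of time forbids cycles: following citations can never
# bring you back to the document you started from.
assert nx.is_directed_acyclic_graph(G)

# Citation counts are just in-degrees; in real data their distribution
# is fat-tailed.
print(dict(G.in_degree()))  # paper2000 has 3 citations, paper2005 has 0
```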
One way for a theorist to understand what is happening is to build a model from a few simple rules which captures as many of these features as possible. One of the first was that of Derek de Solla Price (1965), whose theory of “cumulative advantage” suggested that as new documents are created they cite existing documents in proportion to the number of citations those documents already have: the rich get richer. This principle underlies many other models of fat tails in other data, and it was later rediscovered in the context of the number of links to modern web pages as Barabási and Albert’s preferential attachment (1999). One trouble with this simple model is that the oldest documents are always the ‘rich’ ones with the most links. In reality, in each year of publication there are a few documents with many citations (relative to the average number for that year) and most have very few; the Price model does not give this, as all documents published in the same year end up with roughly the same number of citations. To address this problem we (Sophia Goldberg, Hannah Anthony and myself; Goldberg et al 2015) searched for a simple model which reproduced this behaviour: fat tails in the citation counts of papers published in one field and one year.
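The following is a minimal sketch of cumulative advantage, not Price’s original formulation or our code: the function name, the fixed number of references per document and the ‘citations plus one’ weighting (so uncited documents can still be picked) are all my illustrative assumptions. Running it also exhibits the flaw just described, with the oldest documents ending up richest.

```python
import random

def price_model(n_docs, refs_per_doc, seed=0):
    """Toy cumulative-advantage model: each new document cites existing
    ones with probability proportional to (citations + 1).
    Returns the citation count of each document, indexed by age."""
    rng = random.Random(seed)
    citations = [0]  # start from a single document
    for _ in range(1, n_docs):
        weights = [c + 1 for c in citations]  # '+1' lets uncited docs be picked
        targets = set()
        while len(targets) < min(refs_per_doc, len(citations)):
            targets.add(rng.choices(range(len(citations)), weights=weights)[0])
        for t in targets:
            citations[t] += 1
        citations.append(0)  # the new document starts uncited
    return citations

counts = price_model(n_docs=2000, refs_per_doc=3)
# The flaw described above: the oldest documents dominate.
print("mean citations, first 10 docs:", sum(counts[:10]) / 10)
print("mean citations, last 10 docs: ", sum(counts[-10:]) / 10)
```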
The simplest model we found works as follows. At each step we add a new document, representing the evolution in time of our citation network.
A new document first looks at recently published documents, as it is well known that citations tend to favour more recent documents. What we mean by recent is set by one of our parameters, a time scale τ.
We choose these recent documents partly at random (a fraction (1-p) of the time) and partly by cumulative advantage (a fraction p of the time), in which case we pick recent papers to cite in proportion to their current citation counts. This choice of papers is not realistic, since it requires the authors to be able to choose from all recent documents while in reality authors have only limited knowledge. However, this stage is meant to capture, statistically at least, the ways authors learn about recent developments: a recommendation from a colleague, a talk at a conference, scanning new editions of certain journals and so forth. Sometimes this first choice will be essentially random, sometimes it will reflect the attention papers have received or will receive.
Once we have chosen these primary documents to cite, our new document then looks at the references within these primary documents.
Each paper cited in a primary paper is then itself cited, that is copied, by the new document with probability q, the third and last parameter of the model.
This ‘copying’ process is known to be a way of producing cumulative advantage with only local knowledge of the network (see for instance my paper with Jari Saramäki (Evans & Saramäki 2005) and references therein). That is, you only need to read the papers you have already cited, already found, to find these secondary documents to reference. The model needs no knowledge of the whole network at this point, reflecting the limited knowledge of actual authors.
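Putting the three steps together, here is a compact sketch of the whole model. It is my reading of the description above rather than the code of Goldberg et al (2015): in particular, ‘recent’ is simplified to a hard window of the last τ documents, and the ‘+1’ weighting is an assumption so that uncited documents can still be found.

```python
import random

def citation_model(n_docs, n_primary, tau, p, q, seed=0):
    """Sketch of the three-parameter model (tau, p, q) described above.
    Returns refs, where refs[d] is the list of documents cited by d."""
    rng = random.Random(seed)
    refs = [[] for _ in range(n_docs)]  # bibliography of each document
    cites = [0] * n_docs                # running citation counts
    for d in range(1, n_docs):
        # Step 1: 'recent' documents, here a hard window of the last tau.
        recent = list(range(max(0, d - tau), d))
        # Step 2: primary documents, a fraction p by cumulative advantage
        # and a fraction (1-p) purely at random.
        weights = [cites[r] + 1 for r in recent]
        primary = set()
        while len(primary) < min(n_primary, len(recent)):
            if rng.random() < p:
                primary.add(rng.choices(recent, weights=weights)[0])
            else:
                primary.add(rng.choice(recent))
        # Step 3: copy each reference of each primary paper with probability q.
        secondary = {r for pr in primary for r in refs[pr]
                     if rng.random() < q}
        refs[d] = sorted(primary | secondary)
        for r in refs[d]:
            cites[r] += 1
    return refs
```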
It was only when we added this final copying process that our model reproduced the fat tails seen in the citations to documents published in the same year. Nothing else we tried gave a few well-cited papers in every year. Comparing with one data set, taken from the hep-th section of the arXiv repository, we found that the best-fit values of our parameters meant a typical paper in our model of hep-th was made up as follows (a rough numerical illustration follows the list):
- Two primary papers chosen at random from recent papers.
- Two primary papers chosen from recent papers in proportion to the number of their citations.
- Eight secondary papers chosen by copying a reference from one of the first four primary papers.
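For illustration, the sketch above can be run with parameter values invented to mimic this composition (four primary references, p = 0.5, and q large enough to copy several secondary references); these are not the fitted values from the paper. Looking at documents born in one narrow window of model time then shows the fat tail: a few heavily cited documents against a small median.

```python
from collections import Counter

# Illustrative run of the citation_model sketch above; parameter values
# are assumptions, not the fitted values from Goldberg et al (2015).
refs = citation_model(n_docs=5000, n_primary=4, tau=200, p=0.5, q=0.6, seed=1)
cites = Counter(r for bib in refs for r in bib)

# One 'year': documents created in a narrow window of model time.
window = [cites[d] for d in range(2000, 2200)]
print("most cited document in window:", max(window))
print("median citations in window:  ", sorted(window)[len(window) // 2])
```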
This may seem like a very high proportion of references being copied from the primary papers: on average we found 70% of papers were secondary citations, that is papers already cited in other papers being cited again. One has to ask whether the more recent primary paper contained all the information from the earlier ones as well as the innovations being built on in the current paper. Did the new documents really derive useful information from the secondary papers they cited? You often see old ‘classic papers’ gathering citations as they are name-checked in the introduction to a new paper, and it is not clear that the classic paper was even read while the current research was being performed. This feeling, that some papers gain attention and acquire citations in a way that does not reflect any direct influence on the current work, is supported by at least two other studies. One, a study by Simkin and Roychowdhury (2005) of the way errors in the bibliographies of papers are copied into later papers, suggested that 80% of citations come from such copying of references. In another approach, James Clough, Tamar Loach, Jamie Gollings and myself (Clough et al, 2015) exploited the special properties of citation networks and also found that 70%-80% of links were unnecessary for the logical structure of academic citation networks.
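The idea behind the Clough et al (2015) analysis is the transitive reduction of a directed acyclic graph: delete every citation that is already implied by a longer path of citations and see what fraction of links survives. A toy sketch, assuming networkx (the three-paper graph is invented):

```python
import networkx as nx

# C cites B, B cites A, and C also cites A directly; the direct link
# C->A is redundant for the logical structure, since C reaches A via B.
G = nx.DiGraph([("C", "B"), ("B", "A"), ("C", "A")])
TR = nx.transitive_reduction(G)

removed = 1 - TR.number_of_edges() / G.number_of_edges()
print(f"links removed by transitive reduction: {removed:.0%}")  # 33%
```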
Of course, simple models will never capture the whole story. Models are, though, a good way to see if we have understood the key principles underlying a system.
References
- Barabási, A.-L., Albert, R. (1999). Emergence of scaling in random networks. Science, 286, 509-512.
- Clough, J.R., Gollings, J., Loach, T.V., Evans, T.S. (2015). Transitive reduction of citation networks. J. Complex Networks, 3, 189-203 [doi: 10.1093/comnet/cnu039, arXiv:1310.8224].
- Clough, J.R., Evans, T.S. (2014). What is the dimension of citation space? arXiv:1408.1274.
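- Evans, T.S., Hopkins, N., Kaube, B.S. (2012). Universality of performance indicators based on citation and reference counts. Scientometrics, 93, 473-495.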
- Evans, T.S., Saramäki, J.P. (2005). Scale Free Networks from Self-Organisation. Phys. Rev. E, 72, 026138 [doi: 10.1103/PhysRevE.72.026138].
- Goldberg, S., Anthony, H., Evans, T.S. (2015). Modelling Citation Networks. Scientometrics, 105, 1577-1604 [doi: 10.1007/s11192-015-1737-9, arXiv:1408.2970].
- Simkin, M.V., Roychowdhury, V.P. (2005). Stochastic modeling of citation slips. Scientometrics, 62, 367-384.
- Price, D.J.d.S. (1965). Networks of Scientific Papers. Science, 149, 510-515.
- Radicchi, F., Fortunato, S., Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. PNAS 105, 17268-17272.