Many people have commented to us via email about the cites-refs metric,
and the possibility that it would put pressure on people not to cite
things that they should. This is a valid worry, and I've tried to
rearrange the display so that I can show this number, and other random
metrics, without anyone thinking that they might be useful (and thus
removing these pressures).
I'd like to be able to post other measures on this experimental page as
tests, to see what they look like on the real data, and to let others do
the same (of course I can play with them behind the scenes, but you
can't!). I hope people don't take this too seriously.
In light of the above, it could be true that cites-refs is a good measure
of something but is self-defeating if taken seriously, ruining citation
practice in the literature. I think even this is too generous: the measure
is not good to begin with. Review articles, for example, should have high
numbers of references and comparatively low numbers of cites, so the
metric penalizes them simply for doing their job. There are other good
counterexamples. The only useful thing I can think of for this measure is
to use it on a paper-by-paper basis, possibly to tell something about the
paper itself in conjunction with other factors. Averaged over many papers,
it loses any semblance of utility it might have had.
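To make the arithmetic concrete, here is a minimal sketch in Python. It
assumes the metric means (citations received) minus (references made) for
each paper; the paper data is invented for illustration:

```python
# Minimal sketch of the cites-refs measure, assuming it means
# (citations received) - (references made) per paper.
# All numbers below are made up for illustration.

papers = {
    "short letter":   {"cites": 40, "refs": 12},
    "review article": {"cites": 25, "refs": 180},  # reviews carry many refs
    "typical paper":  {"cites": 15, "refs": 30},
}

# Per paper, the number may say something in conjunction with other factors.
for name, p in papers.items():
    print(f"{name}: cites-refs = {p['cites'] - p['refs']}")

# Averaged, distinctions between papers wash out. (In a fully closed
# literature every reference is someone else's citation, so the global
# mean of cites-refs is exactly zero.)
avg = sum(p["cites"] - p["refs"] for p in papers.values()) / len(papers)
print(f"average cites-refs = {avg:.1f}")
```

Note how the review article scores far below the others despite being, by
assumption here, a perfectly good paper, and how the average tells you
essentially nothing about any individual paper.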