Whether Lotka's law for the frequency distribution of scientific productivity holds or breaks down depends on scientific cooperation, counting methods, interdisciplinary publishing, and the selection methods used for sample collections. We have chosen to analyse the relationship using Mandelbrot's equivalent distribution model, because this model is sensitive and uses the original data (scores). Five sets of authors and publications are used: the two sets used by Lotka, a set from High-Energy Physics, a set from Microbiology, and a set based on applicants to a research programme promoting young researchers. It is shown that even for a sample of authors in High-Energy Physics with extremely strong co-authorship, Mandelbrot's distribution law is robust when complete-normalized (fractional) counting is used, whereas complete counting results in a breakdown. In the field of Microbiology, with much weaker cooperation, both counting methods result in a breakdown of Mandelbrot's law. Today a field like Microbiology, with its corresponding set of journals, probably contains a large share of interdisciplinary publishing and therefore no longer fulfils the precondition of Lotka's law that the total production of the authors (sources) be considered. For a set of applicants to the Emmy Noether Programme of the German Research Foundation, Mandelbrot's law breaks down despite the fact that all publications co-authored by the applicants are taken into account. In agreement with Bayes' theorem of conditional probabilities, these results lead to the conjecture that any selection process applied to authors and/or publications causes a breakdown of Mandelbrot's law and, as a consequence, of Lotka's law.
The Centre for Science and Technology Studies (CWTS), Leiden University, has developed a new ranking system based entirely on its own bibliometric indicators. This web publication is the first in a series of rankings. The work covers all universities worldwide with more than 700 Web of Science indexed publications per year. This implies that roughly the 1,000 largest universities in the world (in terms of number of publications) are covered, and that the bibliometric analysis is based on the scientific output of many hundreds of active researchers in each of these universities.
A compilation of various articles that take a critical look at bibliometrics as an evaluation instrument. Particularly interesting: an article by S. Hanard comparing peer review and statistical methods.
The page describes how researchers at the KI are required to use bibliometrics to evaluate their output.
The website contains interesting texts.
T. Frandsen and J. Nicolaisen. Journal of the American Society for Information Science and Technology, 59(10): 1570-1581 (2008). http://hprints.org/docs/00/32/62/92/PDF/FrandsenNicolaisen.pdf