The amount and value of researchers’ peer review work are critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered. Using publicly available data, we provide an estimate of researchers’ time and the salary-based monetary contribution to the journal peer review system. We found that the total time reviewers worldwide spent on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based reviewers, close to 400 million USD. By design, our results are very likely under-estimates, as they reflect only a portion of the total number of journals worldwide. These numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.
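The abstract's hours-to-years equivalence is a simple unit conversion; the sketch below reproduces it. The 131.4 million figure is an illustrative assumption (it is the number of hours that corresponds to exactly 15,000 continuous years), not the paper's exact total, which the abstract only bounds as "over 100 million hours".

```python
# Back-of-envelope check of the time conversion in the abstract.
# Assumption: "years" means continuous, round-the-clock years.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours in one continuous year

def hours_to_years(hours: float) -> float:
    """Convert a total number of hours to equivalent continuous years."""
    return hours / HOURS_PER_YEAR

# 15,000 continuous years correspond to 131.4 million hours, so a
# total of "over 15 thousand years" implies well over 100M hours.
print(hours_to_years(131_400_000))  # 15000.0
```

This is only a consistency check on the reported orders of magnitude; the paper's own totals come from its underlying journal and salary data.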
By Timothy McAdoo. Can you cite computer software in APA Style? Yes! Here’s everything you need to know. Q: Do I have to cite the computer software I mention in my paper? A: The Publication Manual specifies that a reference...
If this rulebook is adopted at the largest Croatian university, several strong messages will be sent. First, Croatian scientists renounce any concern for Croatian strategic interests. They place their work, financed with Croatian taxpayers’ money (or perhaps more precisely, by borrowing against others’ accumulation), free of charge at the disposal of commercial databases that have embarked on an unprecedented race for profit, such that some of them are abandoning all criteria for including journals in their databases.
At eLife, we strongly support the improvement of research assessment, and the shift from journal-based metrics to an array of article (and other output) metrics and indicators. If and when eLife is awarded an impact factor, we will not promote this metric. Instead, we will continue to support a vision for research assessment that relies on a range of transparent evidence, qualitative as well as quantitative, about the specific impacts and outcomes of a collection of relevant research outputs. In this way, the concept of research impact can be expanded and enriched rather than reduced to a single number or a journal name.
With less (or ideally no) involvement of impact factors in research assessment, we believe that research communication will undergo substantial improvement. Journals can focus on scientific integrity and quality, and promote the values and services that they offer, supported by appropriate metrics as evidence of their performance. Authors can choose their preferred venue based on service, cost and reputation in their field. All constituencies will then benefit from a deeper understanding of the significance and influence of our collective investment in research, and ultimately a more effective system of research communication.
The San Francisco group agreed that the JIF, which ranks scholarly journals by the average number of citations their articles attract in a set period, has become an obsession in world science. Impact factors warp the way that research is conducted, reported, and funded. Over five months of discussion, the San Francisco declaration group moved from an “insurrection,” in the words of one publisher, against the use of the prominent two-year JIF to a wider reconsideration of scientific assessment. The DORA statement posted today makes 18 recommendations for change in the scientific culture at all levels—individual scientists, publishers, institutions, funding agencies, and the bibliometric services themselves—to reduce the dominant role of the JIF in evaluating research and researchers and instead to focus on the content of primary research papers, regardless of publication venue.
It has all been said before, of course. Research assessment “rests too heavily on the inflated status of the impact factor”, a Nature editorial noted in 2005; or as structural biologist Stephen Curry of Imperial College London put it in a recent blog post: “I am sick of impact factors and so is science”.
Even the company that produces the impact factor, Thomson Reuters, has advised that the metric does not measure the quality of an individual article in a journal, but rather correlates with the journal’s reputation in its field. (In response to DORA, Thomson Reuters notes that it is the abuse of the JIF that is the problem, not the metric itself.)
In the Rulebook on the Conditions for Appointment to Scientific Titles (Narodne novine nos. 84/05, 100/06, 138/06, 120/07, 71/10 and 116/10), in Article 1, under heading 6. HUMANITIES, in the subheading "GENERAL PRINCIPLES: ELEMENTS FOR THE FORMULA OF VALUE (OR EQUIVALENCE) IN POINTS", below the text "List of categorized domestic journals taken into account when evaluating works for appointment to scientific titles in the field of the humanities", the text is amended to read:
The DART-Europe partners help to provide researchers with a single European Portal for the discovery of Electronic Theses and Dissertations (ETDs), and they participate in advocacy to influence future European e-theses developments.
W. Booth, G. Colomb, and J. Williams. Chicago Guides to Writing, Editing, and Publishing. 2nd ed. Chicago; London: University of Chicago Press, 2003. Previous ed.: 1995.