- The intention of eLife Labs is to showcase new projects, and drive users and contributions, so this current appeal would be well-placed.
- As well as drawing in citation-scorers and extra attention to the effort, it would be good to use the Labs post to explain the background to this project and the benefits/limitations of your approach; for example, a formal response to Chris Hartgerink's concern about clarifying the capabilities of this analysis (https://twitter.com/chartgerink/status/906147607021572097), though this has already been partly done.
- The blog post could serve as an accessible primer for the preprint: to inform, to start conversations, and to help people engage with the thinking behind this, as well as potentially encouraging people to contribute to the dataset.
---
The Problem
Can I trust research? This sounds like a fairly easy, yet important, question to ask.
Important, because public perception of research has shifted, and in some notable areas, such as climate change and healthcare, has drifted apart from the scientific consensus. Now more than ever, research has to be impartial, informative, outspoken, and trustworthy.
Easy, it seems, because research methodology has improved over recent decades: randomized controlled trials, rigorous peer review, and significance-portraying p-values appear to guarantee science a rigorous and trustworthy foundation. And this is where the problem starts. Recently, the scientific enterprise has been criticized from the inside, by researchers themselves: the current system promotes the publication of biased, or even wrong, results. The scale of this waste is estimated at $28B annually in the US alone, and the confusion it causes is illustrated by the fact that only 11–25% of landmark cancer research studies could be confirmed.
One factor often blamed is the Journal Impact Factor (JIF): researchers and universities are evaluated by the prestige of the journals they publish in, not by the quality, reproducibility, or methodology of their work, but by the popularity of the outlet. This is a problem of incentives. But what would a metric look like that reduces the publication of wrong results and rewards solid science?
What is the R-factor?
The R-factor is a number from 0 to 1 that indicates how reliable the main claim of a scientific communication, or of any verifiable report in general, is. It is calculated by dividing the number of studies that have verified the claim by the total number of studies that attempted to do so. For example, if 10 studies tested a claim and 8 of them confirmed it, the R-factor of the report would be 0.8(10), with the number of attempting studies given in parentheses. The R-factor of an investigator, a journal, or an institution is the average of the R-factors of the claims they reported.
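The arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the definition as stated, not any official implementation; the function names and data shapes are our own.

```python
def r_factor(confirming: int, attempted: int) -> float:
    """R-factor of a single claim: confirming studies / total attempts."""
    if attempted == 0:
        raise ValueError("R-factor is undefined with no replication attempts")
    return confirming / attempted

def aggregate_r_factor(claims: list[tuple[int, int]]) -> float:
    """R-factor of an investigator, journal, or institution:
    the average of the R-factors of the claims they reported."""
    return sum(r_factor(c, a) for c, a in claims) / len(claims)

# The example from the text: 10 studies tested a claim, 8 confirmed it.
print(r_factor(8, 10))  # 0.8

# A hypothetical entity that reported two claims, confirmed 8/10 and 1/2:
print(aggregate_r_factor([(8, 10), (1, 2)]))  # 0.65
```

Note that the aggregate is an unweighted average of per-claim R-factors, so a claim tested by 2 studies counts as much as one tested by 10.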