Evaluating the quality of scientific evidence requires a systematic approach to assessing its reliability, validity, and relevance. Key criteria to consider include:
1. Study Design and Methodology – were appropriate controls, randomization, and sample sizes used?
2. Source Credibility – is the work peer-reviewed and published by a reputable venue or institution?
3. Reproducibility and Replicability – can independent researchers obtain the same results?
4. Statistical Analysis – are the methods appropriate, with effect sizes and uncertainty reported?
5. Consistency with Other Research – do the findings align with the broader body of evidence?
6. Potential Bias and Limitations – are conflicts of interest and methodological limits acknowledged?
7. Real-World Applicability – do the results generalize beyond the study setting?
By critically weighing these factors, you can assess the strength and reliability of scientific evidence.
‘Why is it that nobody can reproduce anybody else’s findings?’
Biomedical scientists around the world publish roughly 1 million new papers a year, yet a staggering number of these cannot be replicated. Drastic action is needed now to clean up this mess, argues pharmacologist Csaba Szabo in Unreliable, his upcoming book on irreproducibility in the scientific literature.
“The things that we’ve tried are not working,” says Szabo, a professor at the University of Fribourg. “I’m trying to suggest something different.”
In the book, Szabo argues that there is no quick fix and that incremental efforts such as educational workshops and checklists have made little difference. He says the whole system has to be overhauled with new scientific training programs, different ways of allocating funding, novel publication systems, and more criminal charges for fraudsters.
“We need to figure out how to reduce the waste and make science more effective,” Szabo says...
Across 143 replication attempts of 56 experiments, researchers found that less than half of the Brazilian biomedical results tested could be replicated.
Estimating the replicability of Brazilian biomedical science
"Results highlight factors that limit the replicability of results published by researchers in Brazil and suggest ways by which this scenario can be improved..."
Open science scholarly knowledge graphs can advance research assessment reform
Paolo Manghi examines the potential and challenges of open science knowledge graphs in reforming research assessment...
"The wide range of research products (some openly accessible and reusable) is undeniably beneficial for scientific progress. These changes also underscore the need for a radical reform of research assessment, as proposed by CoARA, which emphasises the importance of paying more attention to diverse research career paths and of developing innovative forms of assessment and indicators that reflect the full diversity of scholarly work.
In this respect, scholarly knowledge graphs like Scopus, Web of Science, and Google Scholar have been pivotal in providing indicators widely used (and criticized) for research assessment in the traditional domain of peer-reviewed scientific publications. Despite their opaque and closed nature, they aggregate metadata from publishing venues to offer structured maps of publications, authors, institutions, and their interconnections, such as citations, co-authorships, and affiliations, which are regarded as central to evaluating (and foreseeing) scientific impact...
The OpenAIRE Graph seeks to fulfill this vision, overcoming many of the aforementioned drawbacks, and offering an open citation index that captures the global landscape of Open Science while embodying the Principles of Open Scholarly Infrastructures (POSI) and the emerging principles of CoARA WG on OI4RRA.
Open Science knowledge graphs are dynamic tools, imperfect yet essential, that offer invaluable insights into existing publishing practices and are key in steering all actors towards greater alignment with Open Science principles. As a community, our engagement with them is crucial for safeguarding and shaping the future of Open Science research assessment."