How PaperScores evaluates research quality using evidence-based criteria.
PaperScores uses advanced AI models to analyze research papers against a rigorous set of quality indicators. Our assessment framework is based on established reporting guidelines (such as CONSORT, STROBE, and PRISMA) and critical appraisal tools.
We evaluate every paper across six key dimensions to provide a comprehensive view of its quality, transparency, and reliability.
1. Study design: Evaluates the appropriateness of the study design for the research question. Checks for clear objectives, appropriate population selection, and robust control groups.
2. Statistical analysis: Assesses the statistical methods used. Looks for sample size justification, appropriate statistical tests, and correct interpretation of results (e.g., p-values, confidence intervals).
3. Reporting completeness: Checks adherence to reporting guidelines. Ensures all necessary data is presented clearly, including baseline characteristics, outcomes, and adverse events.
4. Transparency and reproducibility: Evaluates the availability of data, code, and protocols. Checks for registration of clinical trials and clear statements about data sharing.
5. External validity: Assesses the generalizability of the findings. Considers the representativeness of the sample and the relevance of the setting to real-world practice.
6. Research integrity: Flags potential issues such as conflicts of interest, funding bias, and signs of misconduct. Checks against retraction databases.
Each dimension receives a numerical score, and these scores are aggregated into an overall quality score. We also translate that score into a letter grade (A-F) so the results are easy to interpret at a glance.
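The aggregation step above can be sketched as follows. This is a minimal illustration, not PaperScores' actual method: the dimension names, the 0-100 scale, the equal weighting, and the grade cut-offs are all assumptions made for the example.

```python
# Illustrative sketch of aggregating per-dimension scores into an
# overall score and an A-F grade. Dimension names, the 0-100 scale,
# equal weights, and the cut-offs are hypothetical.

DIMENSIONS = [
    "study_design",
    "statistical_analysis",
    "reporting_completeness",
    "transparency",
    "external_validity",
    "integrity",
]

def overall_score(scores: dict) -> float:
    """Average the six per-dimension scores (assumed to be 0-100)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def letter_grade(score: float) -> str:
    """Map an overall 0-100 score to a letter grade (assumed cut-offs)."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a paper scoring 72 on every dimension would average 72 overall and receive a C under these assumed cut-offs. A real system would likely weight dimensions unequally and calibrate cut-offs against expert appraisals.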