· In 2005, John Ioannidis published an essay in PLoS Medicine entitled “Why most published research findings are false.”
· This highlights an important point: many research studies suffer from methodological flaws that may compromise the validity of their results.
· Although most researchers begin a study with a genuine intention to find the truth, mistakes can still creep in despite those intentions if care is not taken.
· It is therefore important to be able to appraise published research critically in order to make up one’s own mind as to the validity of the research findings.
· Published research is appraised for internal validity before external validity.
· Internal validity refers to appraisal of the methodological design of the study.
· If the study is found to be methodologically sound, we then assess the impact of the results: the size of the benefit (the effect estimate, whose precision is conveyed by its confidence interval) and the P value (the probability that a result at least as extreme would arise by chance alone).
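As a worked illustration of these two quantities, the sketch below computes an effect estimate (risk difference), its 95% confidence interval, and a P value for a binary outcome. The trial numbers are made up for illustration and do not come from any real study; the normal-approximation (Wald and two-proportion z-test) methods are one standard choice among several.

```python
# Hypothetical 2x2 outcome table for an RCT (illustrative numbers only):
# treatment arm: 30 events / 200 patients; control arm: 50 events / 200 patients.
from statistics import NormalDist

events_t, n_t = 30, 200
events_c, n_c = 50, 200

p_t, p_c = events_t / n_t, events_c / n_c
rd = p_t - p_c                      # risk difference (the effect estimate)

# 95% CI for the risk difference (Wald / normal approximation)
se = (p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c) ** 0.5
z975 = NormalDist().inv_cdf(0.975)  # ~1.96
ci = (rd - z975 * se, rd + z975 * se)

# Two-proportion z-test: pooled proportion under the null hypothesis
p_pool = (events_t + events_c) / (n_t + n_c)
se0 = (p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c)) ** 0.5
z = rd / se0
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"risk difference = {rd:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), P = {p_value:.4f}")
```

Here the whole confidence interval lies below zero and P < 0.05, so the (hypothetical) result would be unlikely to be due to chance; the width of the interval, not the P value, tells us how precisely the benefit has been estimated.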
· The next step is to consider external validity, or applicability: whether the results can be applied to a patient in our own clinical setting.
· Even the best research evidence should be integrated with the available clinical expertise and our patients’ values.
· Since the randomised controlled trial (RCT) is the gold standard for evidence about an intervention, it is useful to understand the principles of critical appraisal of an RCT.
· One would address the internal validity of an RCT systematically with the following questions:
o Was the objective of the trial sufficiently described?
o Was a satisfactory statement given of the diagnostic criteria for entry to the trial?
o Were concurrent controls used (as opposed to historical controls)?
o How were the patients recruited?
o Was random allocation to treatments used?
o Was allocation concealed?
o Were study groups comparable at the start?
o Were patients, caregivers and outcome assessors blinded to the treatment?
o Were the treatments well defined and both groups treated equally?
o Were outcome measures objective and standardised?
o Were outcome measures clearly defined and appropriate?
o Was a prior sample size calculation performed and reported?
o Was the duration of post-treatment follow-up stated?
o Were drop-outs minimal and comparable between the study groups?
o Were patients who crossed over analysed according to the intention-to-treat principle?
o Were the side-effects of treatment reported?
o What tests were used to compare the outcome?
o Were 95% confidence intervals given for the main results?
o Were the conclusions drawn from the statistical analyses justified?