Critical Appraisal, or Making Reading More Worthwhile.


1 Critical Appraisal, or Making Reading More Worthwhile

2 The Problem (August 2001, Bruce Davies) Vast and expanding literature. Limited time to read. Different reasons for reading mean different strategies: keeping up to date, answering specific clinical questions, pursuing a research interest.

3 Stages Clarify your reasons for reading. Specify your information need. Identify relevant literature. Critically appraise what you read.

4 Clarify Your Reasons for Reading Keeping up to date: skim the main journals and summary bulletins. Answering specific clinical questions: find good-quality literature on the subject. Pursuing a research interest: extensive literature searching.

5 Specify Your Information Need What kind of reports do I want? How much detail do I need? How comprehensive do I need to be? How far back should I search? The answers to these questions should flow from your reasons for reading.

6 Identify Relevant Literature There are many ways of finding literature. Remember to ask a librarian: they are the experts. Selectivity is the key to successful critical appraisal.

7 Critically Appraise What You Read Separating the wheat from the chaff. Time is limited, so aim to stop reading the dross quickly. Other papers contain useful information mixed with rubbish. Simple checklists enable the useful information to be identified.

8 Questions to Ask Is it of interest? Why was it done? How was it done? What has been found? What are the implications? What else is of interest?

9 Questions to Ask Is it of interest? Title, abstract, source. Why was it done? Introduction. It should end with a clear statement of the purpose of the study. The absence of such a statement can imply that the authors had no clear idea of what they were trying to find out, or that they didn't find anything but wanted to publish!

10 Questions to Ask How was it done? Methods. Brief, but should include enough detail to judge quality. Must include who was studied and how they were recruited. Basic demographics must be there. An important guide to the quality of the paper.

11 Questions to Ask What has it found? Results. The data should be there, not just statistics. Are the aims in the introduction addressed in the results? Look for illogical sequences, bland statements of results, flaws and inconsistencies. All research has some flaws; this is not nit-picking, but the impact of the flaws needs to be assessed.

12 Questions to Ask What are the implications? Abstract / discussion. The whole use of research is how far the results can be generalised. All authors tend to think their work is more important than the rest of us do! What is new here? What does it mean for health care? Is it relevant to my patients?

13 Questions to Ask What else is of interest? Introduction / discussion. Useful references? Important or novel ideas? Even if the results are discounted, it doesn't mean there is nothing of value.

14 NOT JUST A FAULT-FINDING EXERCISE!

15 What Is the Method? The first task: there are alternative checklists for different methods. Check how the study was actually conducted to confirm the method, since authors sometimes use the wrong words to describe their work!

16 Surveys Describe how things are now. Samples of populations or special groups. The samples must be randomly selected.

17 Surveys Do not have separate control or comparison groups. Comparisons may be made between subgroups, but this is not a control. Use of the term 'survey' should identify the method; beware its use in what is really a cohort study.

18 Surveys 'Cross-sectional' is seldom used to describe other methods. There are many ways of selecting a sample, e.g. stratified, cluster and systematic.
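The sampling schemes named above can be sketched in a few lines. This is a toy illustration with a made-up sampling frame of 100 patients spread across 5 practices, not a recipe from the presentation itself:

```python
import random

random.seed(1)

# Hypothetical sampling frame: 100 patients across 5 practices.
patients = [{"id": i, "practice": i % 5} for i in range(100)]

# Simple random sample: every patient has an equal chance of selection.
simple = random.sample(patients, 10)

# Systematic sample: every k-th patient from a random starting point.
k = 10
start = random.randrange(k)
systematic = patients[start::k]

# Stratified sample: draw separately within each stratum (here, practice).
stratified = []
for practice in range(5):
    stratum = [p for p in patients if p["practice"] == practice]
    stratified.extend(random.sample(stratum, 2))

# Cluster sample: pick whole practices at random, then take everyone in them.
chosen = random.sample(range(5), 2)
cluster = [p for p in patients if p["practice"] in chosen]

print(len(simple), len(systematic), len(stratified), len(cluster))  # 10 10 10 40
```

Note that the cluster sample is larger for the same number of random draws: whole groups come in at once, which is cheaper to collect but gives less precision than a simple random sample of the same size.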

19 Cohort Studies Used to find out what happens to patients. A group is identified and then followed to see what events befall them. May have comparison or control groups, who must be identified from the start, though this is not an essential feature.

20 Cohort Studies Must have the element of time flowing forward from the point at which the group is identified. Even when the cohort is assembled from historical records, time still flows forward from identification; this is sometimes called a retrospective cohort study. The term 'cohort' should be diagnostic.

21 Clinical Trials Testing. Always concerned with effectiveness. The focus should always be on the outcome. The outcomes may not be beneficial, as in side-effect trials. Sometimes cohort studies are used to assess effectiveness; this is very poor research and can usually be dismissed.

22 Clinical Trials When more than two things are compared, the study becomes more complex and harder to get right. The key words to look for are: random allocation, double blind, single blind, placebo-controlled. The term 'outcome' is sometimes used in cohort studies as well.
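Random allocation, the first of those key words, is simple enough to sketch. The patient identifiers here are made up; the point is only that the shuffle alone, not clinician or patient preference, decides who gets which arm:

```python
import random

random.seed(7)

# Hypothetical trial: 20 patients randomly allocated to two equal arms.
patients = [f"patient_{i:02d}" for i in range(20)]
random.shuffle(patients)
treatment, placebo = patients[:10], patients[10:]

# Allocation is decided by the shuffle alone. Blinding is a separate step:
# it hides which arm each patient is in from the patient (single blind)
# and also from the assessors (double blind).
print(len(treatment), len(placebo))  # 10 10
```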

23 Case-control Studies Ask what makes groups of patients different. Select a set of patients with a characteristic, e.g. a disease. The characteristics of this set are then compared with a control group who do not have the characteristic being studied, but who in all other respects should be as similar as possible.

24 Case-control Studies Case-control studies look backward, not forward as cohort studies do. Other terms used are: case-referent, case-comparator, case-comparison. They may also be called retrospective, a term also used for some cohort studies.
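Because a case-control study starts from outcome status rather than exposure, it cannot measure incidence, so the usual measure of association is the odds ratio from the 2x2 table. A worked example with entirely made-up counts:

```python
# Hypothetical 2x2 table from a case-control study (made-up counts):
#              exposed  unexposed
# cases:          40       60
# controls:       20       80
a, b = 40, 60   # cases: exposed, unexposed
c, d = 20, 80   # controls: exposed, unexposed

# Odds of exposure among cases divided by odds of exposure among controls.
odds_ratio = (a * d) / (b * c)
print(round(odds_ratio, 2))  # 2.67: exposure is more common among the cases
```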

25 The Results The major mental challenge: what do I think this really means? CAUTION: large unexpected results are rare; flawed studies and misleading findings are common.

26 Statistics A subject in itself, but some general thoughts are worth emphasising. Size matters.

27 Probability Is just that: probability, not proof. Think of horse racing and the lottery. Think of the odds.
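A quick simulation makes the point: run enough tests on pure chance and some will come out "significant". This sketch (an illustration, not anything from the presentation) applies an exact two-sided binomial test to repeated experiments on a perfectly fair coin:

```python
import math
import random

random.seed(0)

def p_two_sided(heads, n=100):
    """Exact two-sided binomial p-value against a fair coin."""
    tail = sum(math.comb(n, k) for k in range(max(heads, n - heads), n + 1)) / 2**n
    return min(1.0, 2 * tail)

# 2000 experiments, each 100 flips of a genuinely fair coin.
trials = 2000
false_positives = sum(
    p_two_sided(sum(random.random() < 0.5 for _ in range(100))) < 0.05
    for _ in range(trials)
)

# A few percent of experiments come out "significant" despite there being
# nothing to find: a low p-value is probability, not proof.
print(false_positives / trials)
```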

28 Pitfalls All statistical tests make assumptions about the raw data. If no raw data are presented, you cannot know whether the tests are meaningful. Outliers.

29 Pitfalls Skew. Non-independence. Serendipity masquerading as hypothesis, or data trawling!
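Outliers and skew are easy to demonstrate: one extreme value drags the mean while the median barely moves, and many common tests assume roughly symmetric data. A sketch with made-up lengths of hospital stay:

```python
import statistics

# Hypothetical lengths of hospital stay in days: skewed, with one outlier.
stays = [2, 3, 3, 4, 4, 5, 6, 7, 9, 120]

mean = statistics.mean(stays)      # dragged upward by the single outlier
median = statistics.median(stays)  # barely affected

print(mean, median)  # 16.3 4.5
```

If a paper reports only the mean of data like this, the summary is dominated by one patient; without the raw data, a reader cannot tell.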

30 Pitfalls Black-box analyses: modern computers make statistical testing easy, and the authors may not know what they are doing! Bias: play devil's advocate. Confounding: a very common problem in medicine. Colour televisions do not cause increases in hypertension.
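The colour-television example can be simulated. In this made-up population, age drives both TV ownership and hypertension, so a crude comparison shows a spurious association that largely vanishes once age is held roughly constant:

```python
import random

random.seed(42)

# Made-up population: age raises both the chance of owning a colour TV
# and the chance of hypertension; the TV itself causes nothing.
people = []
for _ in range(10000):
    age = random.randint(20, 80)
    owns_tv = random.random() < 0.2 + 0.01 * (age - 20)
    hypertensive = random.random() < 0.05 + 0.008 * (age - 20)
    people.append((age, owns_tv, hypertensive))

def rate(group):
    return sum(h for _, _, h in group) / len(group)

# Crude comparison: TV owners look markedly more hypertensive...
owners = [p for p in people if p[1]]
non_owners = [p for p in people if not p[1]]
crude_gap = rate(owners) - rate(non_owners)

# ...but within a single age band the gap largely disappears:
# age, not the television, was driving the association.
band = [p for p in people if 50 <= p[0] < 60]
banded_gap = rate([p for p in band if p[1]]) - rate([p for p in band if not p[1]])

print(round(crude_gap, 2), round(banded_gap, 2))
```

Stratifying by the suspected confounder, as the age band does here, is the simplest defence; good papers should report it or adjust for it.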

31 Checklists Checklists for particular types of literature are a quick and easy way of learning critical appraisal. They all have three stages: basic questions, essential appraisal, detailed appraisal.

32 BUT, BUT Checklists do not tell you about quality or usefulness; that is still a subjective question. All the lists do is enable a more structured and thoughtful response to it.

33 References Crombie, Iain. The Pocket Guide to Critical Appraisal. BMJ Books, 1996. Huff, Darrell. How to Lie with Statistics. Pelican, 1989.

