Critical Reading of Epidemiological Papers. A Guide.
How to assess epidemiological studies
Abstract
Assessing the quality of an epidemiological study equates to assessing whether the inferences drawn from it are warranted when account is taken of the study methods, the representativeness of the study sample, and the nature of the population from which it is drawn. Bias, confounding, and chance can threaten the quality of an epidemiological study at all its phases. Nevertheless, their presence does not necessarily imply that a study should be disregarded. The reader must first balance any of these threats or missing information with their potential impact on the conclusions of the study.
- epidemiological studies
Epidemiology underpins good clinical research. It is any research with a defined numerator, which describes, quantifies, and postulates causal mechanisms for health phenomena.1 Epidemiology gives insight into the natural history and causes of disease and can provide evidence to help prevent occurrence of disease. It promotes effective treatments either to cure or to prolong the lives of those with disease. Epidemiology, also referred to as "population medicine", is used to estimate the individual risk of disease and the chances of avoiding it from group experience averages. Such information is crucial to planning interventions and allocating resources.
The epidemiological approach needs to be applied to clinical research to evaluate both its effectiveness and its importance. Hence clinicians need to gain the skills that will permit them to properly update and re-evaluate their knowledge and thus provide the best evidence based patient care. Epidemiology is an interdisciplinary field that draws its techniques and methodologies from biostatistics, social sciences, and clinical medicine as well as from a vast range of biological sciences such as genetics, toxicology, and pathology,2 and for this reason the interpretation of epidemiological studies is not always easy.
There are several reviews and books available that provide advice on how best to assess epidemiological studies. The favoured outline for these is to list types of common errors. This review provides an alternative approach that, it is hoped, will be helpful. After briefly characterising the principal threats to the quality of epidemiological studies, a map is provided to assess studies based on their usual format—that is, the design, conduct, and analysis of the results. Readers of epidemiology papers at any level will be assisted in their task by Last's A Dictionary of Epidemiology, an essential guide to all.1 Assessing the quality of epidemiological studies equates to assessing their validity.
CONCEPT OF VALIDITY AND ITS THREATS
According to Last, validity is the "degree to which the inference drawn from a study is warranted when account is taken of the study methods, the representativeness of the study sample, and the nature of the population from which it is drawn".1 The concept of validity was further developed in the 1950s by Campbell when he introduced the distinction between external and internal validity3:
-
Internal validity is the extent to which systematic error is minimised during all stages of data collection.
-
External validity is the extent to which results of trials provide a correct basis for generalisation to other circumstances; this concerns patients, treatment regimens, settings, and modalities of outcome, which include definition of outcomes and duration of follow up.
Every step in a study should be undertaken in such a way as to maximise its validity. There are three threats to validity: bias, confounding, and chance.
Bias
Bias is a systematic error. Sackett has listed dozens of biases that can distort the estimation of an epidemiological measure.4 The distinction among these is occasionally difficult to discern, but there are two general types of bias that should be remembered: selection bias and information bias. Selection bias is error due to systematic differences in characteristics between those who take part in a study and those who do not. Information bias, also called measurement bias, is systematic error arising from inaccurate measurement (or classification) of subjects on study variable(s). Measurement bias can arise from the choice of tools one uses to measure as well as the assessor's attitude and the cooperation of the participant, if it is a human based study.
Bias in studies does not necessarily mean that they become scientifically unacceptable and should be disregarded. A first step must be to assess the likely impact of the described biases on study results5—that is, the direction in which each bias is likely to affect the outcome, and its magnitude. The magnitude should not be so great that the results are changed, making the relationship stronger or weaker than that observed. Unfortunately, there is no simple formula for assessing biases: each must be considered on its own merit in the context of the study population.
Box 1: Definition
-
Epidemiology is a science which describes, quantifies, and postulates causal mechanisms for health phenomena in a population.
-
The epidemiological approach needs to be applied to clinical research to evaluate its effectiveness and importance.
Confounding
Confounding is a type of bias but it is often considered as its own entity. According to Last1:
"Confounding bias is a distortion of the estimated effect of an exposure on an outcome, caused by the presence of an extraneous factor associated both with the exposure and the outcome; that is, confounding is caused by a variable that is a risk factor for the outcome among non-exposed persons and is associated with the exposure of interest, but is not an intermediate step in the causal pathway between exposure and outcome".
Confounding is illustrated in fig 1. Another way of viewing confounding is as a confusion of effects.6 The distortion introduced by a confounding factor can be large and it can lead to overestimation or underestimation of an effect, depending on the direction of the associations that the confounding factor has with exposure and disease. Confounding can even change the apparent direction of an effect.6
Methods to prevent confounding include randomisation, restriction, and matching. Random allocation, not to be confused with haphazard assignment, can be used in trials. It follows a predetermined plan and aims, within the limits of chance variation, to make the control and experimental groups similar at the start of an investigation, thus minimising any unbalanced relationship between known and unknown confounders and other studied variables.1 This is because confounding cannot occur if potential confounding factors do not vary across groups.7 In a similar way, restriction and matching also attempt to make the study group and comparison group comparable with respect to extraneous factors, but this time by specifically selecting subjects according to their "confounder-bearing" status.1 For example, continuing with the example above, the study groups could be chosen in such a way as to include only non-smokers or only smokers.6 Confounding can also be adjusted for during the statistical analysis phase of the study with stratified analysis and multivariate analysis techniques. Stratification is a technique that involves the evaluation of the association between the exposure and disease within homogeneous categories, or strata, of the confounding variable. The results from the above study can be analysed according to smoking history: never smoker, ex-smoker 10+ years, ex-smoker <10 years, current smoker.7 Multivariate analysis involves the construction of a mathematical model and allows for the efficient estimation of measures of association while controlling for a number of confounding factors simultaneously, even in situations where stratification would fail because of insufficient numbers.7
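The logic of stratified adjustment can be sketched in a few lines of code. The counts below are hypothetical, chosen so that smoking confounds the crude exposure-disease risk ratio; comparing the crude estimate with the Mantel-Haenszel summary shows the adjustment at work.

```python
# Hypothetical stratified data: smoking confounds the crude risk ratio.
# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
strata = {
    "smokers":     (36, 90, 2, 10),
    "non-smokers": (2, 20, 5, 100),
}

def risk_ratio(a, n1, c, n0):
    return (a / n1) / (c / n0)

# Crude risk ratio: collapse the strata, ignoring smoking
a = sum(s[0] for s in strata.values()); n1 = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values()); n0 = sum(s[3] for s in strata.values())
crude_rr = risk_ratio(a, n1, c, n0)

# Mantel-Haenszel summary risk ratio: pools the stratum-specific effects
num = sum(a_i * n0_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata.values())
den = sum(c_i * n1_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata.values())
mh_rr = num / den

print(f"crude RR = {crude_rr:.2f}")          # inflated by confounding
print(f"Mantel-Haenszel RR = {mh_rr:.2f}")   # adjusted for smoking
for name, (a_i, n1_i, c_i, n0_i) in strata.items():
    print(f"  {name}: RR = {risk_ratio(a_i, n1_i, c_i, n0_i):.2f}")
```

Within each stratum the risk ratio is 2.0, yet the crude ratio is well above 5 because smoking is associated with both the exposure and the disease; the Mantel-Haenszel summary recovers the stratum-level effect.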
The reader can thus assess confounding by considering whether any important factors have not been taken into account in the design and/or analysis phase of a study, based on an understanding of the natural history of a disease.
Chance
Inevitably, because studies cannot include entire populations and continue indefinitely in time, chance may result in study outcomes not representing the ultimate true values, even if bias and confounding are non-existent. Investigators account for chance using statistics in the analysis phase of the study. However, variations from the true values will be minimised the larger and/or the longer the study.
ASSESSING THE DESIGN AND CONDUCT OF AN EPIDEMIOLOGICAL STUDY
Choice of study design
Which study design was chosen and was it appropriate?
Researchers have a choice of several study designs for their investigation and a judgment must be made as to whether their choice is reasonable in relation to the question they wish to consider. Table 1 lists epidemiological study designs and specific goals these can help achieve.
Table 1
Description of epidemiological study designs (adapted from Detels8)
The more appropriate the study design, the more convincing the evidence that will be produced. Conclusions from a case-control study assessing the efficacy of a surgical procedure will be stronger than those of an observational cohort study and will be weaker than those of a well conducted randomised controlled trial.
The reader must beware not to accept what the study claims to be without going through the description of its design. Particularly, interventional studies that are described as randomised controlled trials do not always stand up to careful scrutiny. This may be because there is pressure to overclaim the design of a study considered to be the gold standard in epidemiological investigations, which is difficult to conduct in a valid way.
Choice of study population
Has the population been sufficiently described?
It is important that researchers report the sociodemographic characteristics of the study population to allow readers to see the possibilities of generalisation to other populations. Furthermore, it allows physicians to judge whether they can apply the results to particular patients.9 In some instances of case-control studies and trials, the description of the group allows assessment for selection bias—that is, differences in the two groups at baseline, which may account for effects observed in the analysis phase. This assessment must be done even in randomised trials where systematic bias is eliminated. Randomisation does not necessarily produce perfectly balanced groups with respect to prognostic factors, and differences due to chance remain in the intervention groups. Assessment of selection bias is crucial and, if identified, it will need to be controlled for during the analysis phase, although in some instances this will not be possible.
Box 2: Threats to validity
-
Bias, confounding, and chance can distort the results of epidemiological studies.
What is the source population?
The source of the population is known to have an impact on the conclusions of a study. For example, selection bias introduced by referral of patients from care centres can profoundly affect the results of clinical and epidemiological studies.4 This is because referral is influenced by more than the severity of the disorder itself and has much to do with the way that communities contain and deal with aberrant behaviour.10 Referral may differ according to burden of symptoms, access to care, and popularity of disorders and institutions. Another example is with participants recruited via the media. Those who volunteer to participate are likely to differ from non-participants in a number of important ways, including basic levels of motivation and attitudes towards health. As a further example, the readers should judge whether recruiting via telephone or door knocking, or whether incentives were given to take part in a study, will affect the final results, and if so, in what direction.
How were the participants selected?
The inclusion and exclusion criteria for subject participation and the way in which they were applied must be clearly defined. This is to show minimum tampering with subject participation by researchers. A common error is defining studies as population based. However, as long as participants have not been recruited from all subgroups of a population, one cannot consider the study to be community based. For example, solely recruiting from health registries would only be acceptable in a country where health care is universal and free. Another important source of effect on the outcome is whether subclinical cases have been included. The readers must always consider how non-included members may affect the results of the study.
Have the investigators strived for high participation rates?
Investigators must strive for high participation rates. If, for example, the researcher contacts an initial target population and manages to recruit 65% to take part in his/her study, one must assess whether these 65% are representative of the initial population. In addition, the investigator must assess whether the numbers recruited are large enough to make statistically feasible conclusions. As mentioned earlier, the role of chance on results can be minimised and the generalisability can be maximised in larger and/or longer trials.
Has attrition been high enough to change the main characteristics of the study and control groups?
In the same manner, any attrition or loss to follow up should be reported with an effort to explain what differences this makes to conclusions.
Has there been any participant exclusion after recruitment?
Exclusion numbers should be reported. Exclusion is acceptable if study personnel made errors in the implementation of eligibility criteria or if patients never received the intervention in an experimental study.11 However, in no circumstance should exclusion be accepted if it appears to be dependent on the treatment given. In trials, the acceptability of post-randomisation exclusion really depends on whether the goal of the study is to address an explanatory (efficacy) or management (effectiveness) question.11 Not excluding participants who did not follow their intended treatment will allow an answer to an effectiveness investigation on an intention to treat basis. Only 13% of all randomised trials published in the New Zealand Medical Journal between 1943 and 1995 provided evidence that final analyses were conducted on an intention to treat basis.12 Investigators should clearly state the number of patients recruited but not included in the main analysis of data and explain the circumstances under which such patients were enrolled but excluded from the analysis.
Which comparison group?
Any differences between the exposed and control group during the study should be assessed in relation to their potential effect on outcomes observed. Unless this is done fairly, any analysis will be dangerously misleading.
Some investigators feel that the closer the identity of the compared groups with respect to all measurable factors, the greater the validity, since some factors may affect disease incidence without the investigator's awareness.6 Matching unexposed to exposed subjects in cohort studies can prevent confounding of the crude risk difference and ratio because such matching prevents the association between exposure and the matching factors among the study subjects at the start of the follow up.6 Matching in cohort studies, though, is rarely done. In practice much of the control of confounding in cohort studies occurs in the analysis stage, where complex statistical adjustment is made for baseline differences in key variables. Matching in case-control studies may introduce bias, and thus matching on a factor may still necessitate its control in the analysis phase.6 If controls are selected to match cases on a factor that is correlated with the exposure, then the crude exposure frequency in controls will be distorted in the direction of similarity to that of the cases, creating a risk of overmatching.
The choice of comparison groups can also introduce error in experimental studies. For example, in a meta-analysis showing that research sponsored by the drug industry was more likely to produce results favouring the product made by the company sponsoring the research than studies funded by other sources,13 it was shown that this might be due to inappropriate comparators or publication bias rather than the reported quality of methods. It was found that in trials of psychiatric drugs, the comparator drug is frequently given in doses outside the usual range. Similarly, research funded by the company marketing fluconazole compared it with oral amphotericin B, a drug known to be poorly absorbed, thereby creating a bias in favour of fluconazole.13
Often the comparison is a placebo controlled group, meaning that the control participants were given an inert medication or procedure that is intended to give them the perception that they are receiving treatment for their complaint.1 This is thought to control for the power of suggestion by a medical adviser. Hrobjartsson and Gotzsche investigated patient reported and observer outcomes and found no evidence that placebo interventions in general have clinically important effects, except perhaps on subjective continuous outcomes, such as pain, where the effect could not be clearly distinguished from bias.14 The placebo effect can thus help compare the validity of the methods of investigation in experimental studies. In a review of trials looking at the treatment of irritable bowel syndrome (IBS), the placebo response was extremely variable and high, most often between 40% and 70%.15 Differences of this magnitude reflect not only the nature of the patients enrolled in a trial but also the methods used to determine treatment response. It is a useful way to compare methods and results across studies.
If necessary, has the method of randomisation and allocation concealment been reported?
The non-reporting of the method of randomisation and allocation concealment is one of the main errors in articles reporting randomised trials. For example, a review reported that the mechanism used to allocate interventions was omitted in reports of 93% of trials in dermatology, 89% of trials in rheumatoid arthritis, 48% of trials in obstetrics and gynaecology journals, and 45% of trials in general medical journals.9 Unless stated clearly in the paper, one cannot be assured that randomisation was correctly done. Correct randomisation is dependent on proper allocation concealment—that is, random allocation without foreknowledge of treatment assignments. Methods of concealment include sequentially numbered, opaque, sealed envelopes or containers, and can be pharmacy controlled or completed by central randomisation. However, each may not be sufficient. Elements convincing of concealment must be reported in the study paper. This is crucial, as results of four empirical investigations reported by Schulz and Grimes have shown that trials that used inadequate or unclear allocation concealment, compared with those that used adequate concealment, yielded up to 40% larger estimates of effect.9
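A concealed allocation sequence is typically generated centrally before the trial starts. The sketch below shows permuted-block randomisation, one common scheme; the function name, block size, and seed are illustrative, not taken from any trial software.

```python
import random

def block_randomisation(n_blocks, block_size=4, arms=("A", "B"), seed=2024):
    """Generate an allocation sequence using permuted blocks.

    Each block contains equal numbers of every arm in random order,
    so group sizes never drift far apart, yet the next assignment
    cannot be predicted with certainty until a block is exhausted.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = block_randomisation(n_blocks=5)
print(seq)
print("A:", seq.count("A"), "B:", seq.count("B"))  # balanced arms
```

Concealment is a separate requirement: however the list is generated, recruiters must not be able to see upcoming assignments, which is why central or pharmacy-controlled administration of such a list matters as much as the algorithm itself.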
Choice of exposure and outcome measures
One major source of error in studies, especially in cohorts, is in the degree of accuracy with which respondents have been classified with respect to their exposure and disease status—that is, measurement bias. Choosing what measurements will be collected and how, whether it be exposure, outcome, or other auxiliary variables, determines the validity of the study. If the mis-measurement is random, the misclassification of a dichotomous exposure is always in the direction of the null value. Although it is generally considered acceptable to underestimate effects rather than overestimate them, this type of error may account for some discrepancies among studies.
Has potential bias from the choice of tools for data collection been dealt with?
Two types of data can be used for epidemiological studies: routine data and data which have been collected specifically for the study. Collecting new data versus using routine data has a great impact on any study. Routine data have the advantage of being collected independently of the study, and thus an automatic blinding of assessors is in place. However, routine data are often incomplete and not necessarily appropriate for answering the study question.
There are many tools for collecting information. These include open group discussions, self rating, direct examination interviews, and biological marker measurement. Data should be collected in as objective, reliable, accurate, and reproducible a fashion as possible. Different data collection methods are prone to different errors of measurement. Hence the use of well recognised standards or validated tools is a positive point. Validity here is an expression of the degree to which a measurement measures what it purports to measure.1 Validated questionnaires are especially useful while trying to measure symptomatic effects (such as pain), functional effects (mobility), psychological effects (anxiety), or social effects (inconvenience) of an intervention,16 as these variables are particularly subjective.
The choice of measurement tools invariably affects results and the readers must understand the impact of this choice. For example, while looking at treatment of IBS, what differences in case definition could be expected from the use of the Manning criteria or from using one, two, or three defined symptoms of IBS as entry criteria? Although the Manning criteria are still used, a study of the diagnostic value of the criteria found them to be considerably more reliable for the diagnosis of IBS in women than in men.17 The reader should judge whether this sex bias in case definition could have significantly changed the outcome of the study. Many conditions are complex, and clinical or research criteria require the presence of particular symptoms and signs, each of which is associated with the need for an operational decision. Unfortunately, availability of gold standards is an issue for many disorders.
Has enough or too much data been collected?
Correct case classification can involve varying effort. For example, the clinical diagnosis of Alzheimer's disease is one of exclusion. Cerebrospinal fluid and blood analyses and imaging are used to differentiate Alzheimer's disease from other illnesses that may cause the same clinical symptoms. Potentially, the more tests carried out, the less likely a participant would be classified as having Alzheimer's disease.
How long have the participants been followed up?
Contestably, many trials are based on limited follow up but are applied as long term therapy. Timing is important. This is particularly so in the investigation of effects of treatment of chronic conditions such as Crohn's disease, which has unpredictable periods of exacerbation and remission. Participants should be followed up for a reasonably realistic time period to establish whether a treatment is effective. In a similar fashion, research on the potential increase in temporal lobe brain tumours among mobile phone users needs to allow for several years after the beginning of exposure before measuring whether electromagnetic fields can have an effect.
Has potential bias from observers been dealt with?
The use of standardised questionnaires or laboratory protocols does not always prevent observer variation. Discrepancies between repeated observations by the same observer and between different observers are to be expected.1 This variation is measured by the kappa statistic, which allows for chance agreement. Reporting of kappa values shows a concern for validity by the investigators. The higher the kappa, the higher the concordance between measurements. Negative kappa values may be due to faulty techniques or incorrect recording of the results. Misinterpretation of data can be due to pre-judgment and expectancy of what results should be. This highlights the importance of "blinding" the measurer to the probable caseness of the measured subject and of observing quality controls in carefully agreed protocols. Although often considered free of bias, molecular work is not immune to measurement bias. For example, while comparing tangle determination in the CERAD protocol for neuropathologically diagnosing Alzheimer's disease, Mirra and co-workers found that only 66% of raters from 15 laboratories showed internal consistency.18 It is difficult to assess whether low inter-rater/intra-rater reliability can have an effect other than random on the results. However, a minimum aim is to report this reliability for readers to assess the validity of the results.
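As a sketch, chance-corrected agreement (Cohen's kappa for two raters) can be computed directly from the raters' classifications. The ratings below are invented for illustration:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same subjects:
    observed agreement corrected for the agreement expected by chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: two observers classify 10 scans as "case"/"normal"
a = ["case", "case", "normal", "case", "normal",
     "normal", "case", "normal", "normal", "case"]
b = ["case", "normal", "normal", "case", "normal",
     "normal", "case", "normal", "case", "case"]
print(round(cohens_kappa(a, b), 2))  # 0.6
```

Here the observers agree on 8 of 10 scans, but since half that agreement is expected by chance alone, kappa is 0.6 rather than 0.8.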
Has potential bias from the participants been dealt with?
Bias can result from inaccurate reporting by participants. This is particularly so in case-control studies, as the data on exposure are often provided by the participant after the onset of disease. Recall bias can occur when cases differ from controls with respect to their exposure response because of the disease experience. For instance, those who have suffered from food poisoning may remember their meals differently from those who did not suffer similarly. The use of memory aids can help reduce recall bias.
There are also circumstances where participants perceive social pressures to report appropriately. This is particularly so when dealing with self report on drinking,19 smoking,20, 21 drug taking, and sexual habits.22 For example, the reader should judge how self report over the telephone to monitor overweight and obesity in populations can be affected by social desirability. It was found that body mass index, based on measured weights and heights, classified 62% of males and 47% of females as overweight or obese, compared with 39% and 32% respectively from self report.23 Blinding of the participants to study goals, and of participants' classification status to any assessors, may help.
ASSESSING THE ANALYSIS PHASE OF AN EPIDEMIOLOGICAL STUDY
Statistical analysis versus biological interpretation
Most epidemiological study results are analysed using formal statistics. The type of statistical test that should be used is determined by the goal of the analysis (for example, to compare groups, to explore an association, or to predict an outcome) and the types of variables used in the analysis (for example, categorical, ordinal, or continuous variables).24 The statistical results are often presented with a p value, which is the probability of obtaining an effect in the study sample as extreme from the null hypothesis as that observed, simply by chance, but more often with a point estimate and confidence intervals, a range within which, assuming there is no bias in the study method, the true value for the population parameter might be expected to lie.5 Confidence intervals are more useful to consider than p values when assessing whether results are significant as they reflect both the degree of variability in the factor being investigated and the limited size of the study: the wider the confidence intervals, the less powerful the study is.
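The link between study size and confidence interval width can be illustrated with a quick calculation. The counts are hypothetical; the formula is the standard Wald interval on the log risk ratio scale:

```python
import math

def risk_ratio_ci(a, n1, c, n0, z=1.96):
    """Risk ratio with an approximate 95% CI via the
    standard error of log(RR) (Wald method)."""
    rr = (a / n1) / (c / n0)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical cohort: 30/200 exposed v 15/200 unexposed develop disease
rr, lo, hi = risk_ratio_ci(30, 200, 15, 200)
print(f"n=400: RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")

# The same underlying risks in a study a fifth of the size
rr2, lo2, hi2 = risk_ratio_ci(6, 40, 3, 40)
print(f"n=80:  RR = {rr2:.2f}, 95% CI {lo2:.2f} to {hi2:.2f}")
```

The point estimate is identical in both studies, but the smaller study's interval is much wider and spans 1, so the same underlying effect would not reach significance there.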
Box 3: Study design and conduct
-
The main aspects of the study design and conduct that need to be assessed include: choice of study design, choice of study population, and choice of exposure and outcome measures.
Frequently, a p value under or equal to a probability of 1 in 20, or 0.05, is considered statistically significant; however, significance does not mean that the results make biological sense. Results can be statistically significant without being biologically/sociologically significant. For example, a very large clinical trial can provide a significant result on the effect of a specific drug that increases the concentration of haemoglobin by 1 g/100 ml blood. The readers should consider whether this is plausible and whether this can have a useful medical outcome.
Often the need for large sample sizes to achieve sufficient power, and thus precision, to answer study hypotheses can lead to the combination of broad categories of cases. This can cause heterogeneity in the case groups, which can be inappropriate.25 This happens in cohort studies and the result is that it can obscure effects on more narrowly defined diseases. However, such non-differential misclassification, even if substantial, only underestimates associations, provided that the misclassification probabilities apply uniformly to all subjects.6
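The claim that uniform (non-differential) misclassification pulls estimates toward the null can be checked with expected counts. The 2x2 table, sensitivity, and specificity below are invented for illustration:

```python
def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

def misclassify(exposed, unexposed, sensitivity, specificity):
    """Expected counts after imperfect exposure measurement."""
    obs_exposed = sensitivity * exposed + (1 - specificity) * unexposed
    return obs_exposed, (exposed + unexposed) - obs_exposed

# Hypothetical true 2x2 table: 60/40 exposed in cases, 30/70 in controls
true_or = odds_ratio(60, 40, 30, 70)  # 3.5

# Identical error rates in cases and controls: non-differential
a, b = misclassify(60, 40, sensitivity=0.8, specificity=0.9)
c, d = misclassify(30, 70, sensitivity=0.8, specificity=0.9)
obs_or = odds_ratio(a, b, c, d)

print(f"true OR = {true_or:.2f}, observed OR = {obs_or:.2f}")
# The observed estimate is pulled toward the null value of 1.
```

With 80% sensitivity and 90% specificity applied equally to both groups, the odds ratio shrinks from 3.5 to about 2.4; had the error rates differed between cases and controls, the bias could have gone in either direction.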
Have the results shown a cause-effect relationship?
Showing that an exposure is strongly associated with a disease does not necessarily imply that there is a cause-effect relationship. Hill described a series of conditions which, if fulfilled, support a cause-effect relationship.26 These are:
-
A sufficient strength of association.
-
A temporal relationship between exposure and outcome.
-
A dose-response relationship.
-
Consistency.
-
Biological plausibility.
-
Coherence.
-
Specificity.
-
Analogy.
Temporality is particularly difficult to demonstrate in case-control studies where all data are collected at once. Table 2 shows these criteria illustrated using the cause-effect relationship described for the human papillomavirus and cervical cancer.27
Table 2
Example of the cause-effect relationship with the human papillomavirus (HPV) and cervical cancer (adapted from Bosch et al27)
What are the policy implications?
The final phase of assessing epidemiological studies is determining whether they have any policy implications. Although a consistency and magnitude of effect can be demonstrated, the impact of any intervention must also be considered. This is also known as the generalisability of the results and is directly dependent on the study participants' characteristics.
To assess the impact of an intervention, the reader should also think in terms of attributable risk rather than relative risk. Attributable risk is the proportion of a disease or other outcome in exposed individuals that can be attributed to the exposure. This measure is derived by subtracting the rate of the outcome (usually incidence or mortality) among the unexposed from the rate among the exposed individuals.1 It is assumed that causes other than the one under investigation have had equal effects on the exposed and unexposed groups. This is different from the relative risk, which is the ratio of the risk of disease or death among the exposed to the risk among the unexposed.1 The relative risk provides information that can be used in making a judgment of causality. Nevertheless, once causality is assumed, from the perspective of public health policy making, measures of association based on absolute differences in risk between exposed and non-exposed individuals assume far greater importance. This is illustrated with the example in table 3.
Table 3
Relative and attributable risks of mortality from lung cancer and coronary heart disease among cigarette smokers in a cohort study of British male physicians (adapted from Doll and Peto28)
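The distinction drawn above can be sketched in a few lines of code. The rates below are hypothetical illustrative numbers, not the figures from Doll and Peto's table 3; they are chosen only to reproduce the pattern the text describes, in which a rarer disease (lung cancer) shows the larger relative risk while a commoner disease (coronary heart disease) shows the larger attributable risk.

```python
def relative_risk(rate_exposed, rate_unexposed):
    """Ratio of the outcome rate among the exposed to that among the unexposed."""
    return rate_exposed / rate_unexposed

def attributable_risk(rate_exposed, rate_unexposed):
    """Excess rate among the exposed, i.e. exposed rate minus unexposed rate."""
    return rate_exposed - rate_unexposed

# Hypothetical mortality rates per 100,000 person-years (illustrative only)
lung_cancer = {"exposed": 140.0, "unexposed": 10.0}
heart_disease = {"exposed": 669.0, "unexposed": 413.0}

for name, r in [("lung cancer", lung_cancer),
                ("coronary heart disease", heart_disease)]:
    rr = relative_risk(r["exposed"], r["unexposed"])
    ar = attributable_risk(r["exposed"], r["unexposed"])
    print(f"{name}: relative risk = {rr:.1f}, "
          f"attributable risk = {ar:.0f} per 100,000 person-years")
```

With these numbers the relative risk for lung cancer is far higher, yet the attributable risk (deaths that would be avoided by removing the exposure) is larger for heart disease, which is why absolute measures matter most for policy.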
Box 4: Aspects of study analysis to be assessed
-
Aspects of the study analysis phase which need to be assessed are the statistical and biological interpretation of the results, the generalisability of the findings, and whether they show a cause-effect relationship between the factors under investigation.
Box 5: Balance of threats and impact on conclusions
-
The reader must balance any threat described regarding the quality of the study, and any missing information, with their potential impact on the conclusions of the study.
Conclusion
There are many subjective elements to the interpretation of epidemiological studies; however, minimum standards in the conduct of a study ensure that any conclusion reached is appropriate. The reader must bear in mind that assessing an epidemiological study implies knowing how to look for key data not only in the paper itself but also in its "comments" and "corrections", which are listed along with the paper reference in Medline. Bias, confounding, and chance can threaten the validity of a study at all its stages. Thus, the methodology must be well thought out and this must be reflected in the study paper. It is understood that all the details about choices made by investigators cannot be published; nevertheless, the printed information should provide sufficient detail so as to rule out alternative interpretations of the results. Investigators must show that they planned to minimise bias and account for confounding while also describing statistical methods. More importantly though, they must report any potential impact of limitations on the results found. Many reviewers, when assessing study validity, take a "guilty until proved innocent" approach, assuming that quality is inadequate unless information to the contrary is provided in the text.3 This can be a dangerous tactic and may exclude many valid studies. The reader should instead take the same approach as described for dealing with potential bias and confounding, and balance any missing information with its potential impact on the conclusions of the study.
Box 6: Key reading
-
Last JM. A dictionary of epidemiology. 4th Ed. Oxford: Oxford University Press, 2001.
-
Greenberg RS, et al. Medical epidemiology. 3rd Ed. Lange Editions, 2001 (chapter 13).
-
Coggon D, Rose G, Barker DJP. Epidemiology for the uninitiated. 4th Ed. London: BMJ Publishing Group, 1997 (an excellent concise introduction).
-
Bhopal R. Concepts of epidemiology: an integrated introduction to the ideas, theories, principles and methods of epidemiology. Oxford: Oxford University Press, 2002 (comprehensive and up-to-date).
-
Hennekens CH, Buring JE. Epidemiology in medicine. Little, Brown, 1987 (good introduction to medical statistics).
Questions (true (T)/false (F); answers at end of references)
-
Confounding occurs when an exposure causes its effect through a second exposure.
-
Potential for selection and recall bias is a particular problem in cohort studies as opposed to other analytic designs because both exposure and disease have already occurred at the time data on study subjects are obtained.
-
The results of an investigation carried out on volunteer participants can be expected to be the same as those from participants chosen from case registries.
-
Risk is another term for odds ratio.
-
Matching should be used to control for selection bias in epidemiological studies.
ANSWERS
1. T; 2. F (it is a particular problem in case-control studies); 3. F; 4. F; 5. F (matching is used to control for confounding).
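Answer 4 (risk is not another term for odds ratio) can be made concrete with a minimal sketch over a hypothetical 2×2 table. The counts below are invented for illustration: 30 of 100 exposed subjects and 10 of 100 unexposed subjects develop the disease. The risk ratio and the odds ratio computed from the same counts differ, and the gap widens as the outcome becomes more common.

```python
def risk(cases, total):
    """Probability of the outcome: cases divided by all subjects in the group."""
    return cases / total

def odds(cases, total):
    """Odds of the outcome: cases divided by non-cases in the group."""
    return cases / (total - cases)

# Hypothetical 2x2 table: 30/100 exposed and 10/100 unexposed develop disease
risk_ratio = risk(30, 100) / risk(10, 100)   # 0.30 / 0.10 = 3.0
odds_ratio = odds(30, 100) / odds(10, 100)   # (30/70) / (10/90) ~ 3.86

print(f"risk ratio = {risk_ratio:.2f}, odds ratio = {odds_ratio:.2f}")
```

The odds ratio only approximates the risk ratio when the outcome is rare in both groups, which is why the two terms must not be used interchangeably.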
REFERENCES
Copyright information:
Copyright 2004 The Fellowship of Postgraduate Medicine
Source: https://pmj.bmj.com/content/80/941/140