A session at the virtual 2021 American Academy of Pediatrics National Conference & Exhibition looks at best practices for study design.
At the virtual 2021 American Academy of Pediatrics National Conference & Exhibition, Y. Dana Neugut, MD, MS, of the Department of Pediatrics at Children’s Hospital at Montefiore in New York City, shared her goals for her session on basic epidemiology for pediatricians. “At the end of this session,” Neugut said, “pediatricians should be able to describe different study designs and their respective benefits and limitations; identify 5 possible reasons for an association between 2 variables; and understand the concept of a confounder variable and identify strategies to minimize confounding in a research study.”
Neugut started by describing the essence of biostatistics, “the concept of a statistical relationship between 2 variables.” In biomedical research, this relationship is commonly between an exposure variable and an outcome variable.
In biomedical studies, the overall purpose is to investigate whether there is an association between an exposure and an end point: Does a specific exposure cause a specific disease? If someone has the disease, what exposure may have caused it? Does one medication reduce the risk of a particular disease better than another?
Neugut then went on to describe 3 types of studies: the cohort study, the randomized controlled trial (really a subtype of the cohort study), and the case-control study.
In a cohort study, you start by taking a group of people with an exposure of interest and a group of people without the exposure. For example, investigators might look at a group of people who received the MMR vaccine and a group of people who did not, and then follow what happens to these 2 groups over time. A randomized controlled trial is a type of cohort study: instead of finding people who already have the exposure, investigators assign it, enrolling participants who consent to receive the intervention and giving the control group (the nonexposed group) a placebo for comparison. As in a cohort study, the 2 groups are then followed over time.
Finally, in a case-control study, Neugut said, “We start in the opposite direction, recruiting patients who have the outcome of interest. For example, we find people who had measles and people who never had measles. Using the MMR vaccine study as an example, we would look backwards: for those who had measles, how many had previously received the MMR vaccine, and how many had not? The same question would be asked of the group who never had measles.” Exposures are then compared between the 2 outcome groups, measles and no measles. “This is the opposite of a cohort study,” observed Neugut, “where we compare outcomes in the 2 exposure groups, MMR and no MMR.”
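To make the contrast concrete, the following minimal Python sketch uses purely hypothetical counts for the MMR and measles example (illustrative only, not data from the session): a cohort or randomized design compares outcome risk across the exposure groups, whereas a case-control design compares the odds of prior exposure across the outcome groups.

# Hypothetical 2x2 table for illustration only (not real data):
#                  measles    no measles
# MMR vaccine        a=5        b=9995
# no MMR vaccine     c=60       d=9940

a, b = 5, 9995      # exposed (MMR): with and without the outcome
c, d = 60, 9940     # unexposed (no MMR): with and without the outcome

# Cohort / randomized trial view: compare outcome risk between exposure groups
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed
print(f"Risk ratio (cohort view): {risk_ratio:.2f}")

# Case-control view: compare odds of prior exposure between outcome groups
odds_ratio = (a * d) / (b * c)
print(f"Odds ratio (case-control view): {odds_ratio:.2f}")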
Each of these 3 study types has pros and cons. A cohort study can be long and expensive, but it can address causality. A randomized controlled trial requires more planning and oversight and may raise ethical concerns, but the exposure and control groups are comparable. A case-control study is shorter and less expensive than a cohort study, but it is harder to establish causality and the risk of recall bias is greater.
Next, Neugut turned to the possible explanations for an association between 2 variables. The first is causation: variable 1 truly causes variable 2. In reverse causation, the relationship runs the other way: the variable you thought was the outcome is, in fact, what leads to the variable you thought was the exposure. For example, if you hypothesized that the price of butter in Bangladesh causes fluctuations in the S&P 500, consider that the reverse may be true: the S&P 500 may influence the price of butter in Bangladesh. The other explanations for an association between 2 variables are random chance, bias (both are types of error), and confounding.
Neugut then went on to discuss how to minimize random error, for example by using sample sizes large enough to limit variability or by using an efficient statistical approach, before turning to systematic error, or bias. To reduce systematic error, you must pay attention to the 2 types of validity: internal validity, when there is consistency between your study result and the true population parameter, and external validity, when there is enough consistency between a study result in 1 population and what the result would be in another population.
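As a rough illustration of the sample-size point (hypothetical numbers, not from the presentation), the short Python simulation below draws repeated samples of different sizes from the same population; the estimates spread out less around the true value as the sample size grows, which is exactly the random error a larger sample minimizes.

import random
import statistics

random.seed(0)
true_rate = 0.20  # hypothetical true prevalence in the population

def sample_estimates(n, repeats=1000):
    """Estimate the prevalence in `repeats` random samples of size n."""
    estimates = []
    for _ in range(repeats):
        sample = [1 if random.random() < true_rate else 0 for _ in range(n)]
        estimates.append(sum(sample) / n)
    return estimates

for n in (25, 100, 400):
    est = sample_estimates(n)
    # The spread (standard deviation) of the estimates is the random error
    print(f"n={n:4d}  mean estimate={statistics.mean(est):.3f}  "
          f"spread={statistics.stdev(est):.3f}")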
Other types of systematic error arise in sampling, such as volunteer bias, which can depend on the locations from which you recruit participants or, in a telephone survey, on who is at home to receive the call, as well as recall bias, in which groups of participants may not remember details accurately.
A good way to minimize bias, said Neugut, is making sure you have an excellent study design, such as a blinded study, in which respondents do not know who is receiving a medication and who is receiving a placebo.
Neugut ended her discussion by explaining the concept of confounding, “an important but complex topic,” she admitted. A confounding variable is associated with both the exposure and the outcome but does not lie in the causal pathway between the 2. For example, a study might find an inverse association between the number of vaccinations a child has received and the number of hospitalizations: the more vaccines, the fewer hospitalizations. Here, the number of health care maintenance visits might be the confounder. Children with more of these visits receive more vaccines and also have fewer hospitalizations, so the apparent association between vaccination and hospitalization may be driven by the health care maintenance visits rather than by the vaccines themselves. In this way, the confounder can undermine the validity of the study.
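Neugut’s example lends itself to a small simulation, sketched below in Python with made-up numbers (an illustration, not data from the session). In the simulated data, health care maintenance visits drive both vaccine counts and hospitalization risk, while the vaccines themselves have no effect; a crude comparison still shows an inverse vaccine-hospitalization association, and stratifying by the confounder makes it disappear, which is one standard strategy for minimizing confounding.

import random
import statistics

random.seed(1)

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

children = []
for _ in range(5000):
    many_visits = random.random() < 0.5               # the confounder
    # More maintenance visits -> more vaccines received (the exposure)
    vaccines = random.randint(4, 8) if many_visits else random.randint(0, 4)
    # More maintenance visits -> fewer hospitalizations (the outcome);
    # vaccines have NO direct effect in this simulation
    hosp_rate = 0.05 if many_visits else 0.20
    hospitalizations = sum(1 for _ in range(3) if random.random() < hosp_rate)
    children.append((many_visits, vaccines, hospitalizations))

vax = [c[1] for c in children]
hosp = [c[2] for c in children]
print("Crude correlation (vaccines vs hospitalizations):", round(corr(vax, hosp), 2))

# Stratifying by the confounder removes the spurious inverse association
for level in (True, False):
    sub = [c for c in children if c[0] == level]
    v = [c[1] for c in sub]
    h = [c[2] for c in sub]
    label = "high" if level else "low"
    print(f"Within {label}-visit stratum:", round(corr(v, h), 2))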
Neugut concluded her presentation with some tips for creating an excellent study design: involve a biostatistician early on; consider a bar graph instead of a pie chart, because humans are much better at discerning differences between lines (bar graph) than between angles (pie chart); and be careful when running multiple statistical tests, because testing many comparisons without prespecified hypotheses increases the probability of false-positive results.
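The last point can be made concrete with simple arithmetic (an illustration, not part of the presentation): if every test is run at a significance level of 0.05 and there is truly no effect, the probability of at least one false-positive result grows quickly with the number of independent tests.

# Probability of at least one false positive across k independent tests,
# each run at a significance level (alpha) of 0.05 when no real effect exists
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {p_any_false_positive:.2f}")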
Reference
Neugut YD. Basic epidemiology for pediatricians. Presented at: American Academy of Pediatrics 2021 National Conference & Exhibition; virtual. Accessed October 8, 2021.
Having "the talk" with teen patients
June 17th 2022A visit with a pediatric clinician is an ideal time to ensure that a teenager knows the correct information, has the opportunity to make certain contraceptive choices, and instill the knowledge that the pediatric office is a safe place to come for help.
Meet the Board: Vivian P. Hernandez-Trujillo, MD, FAAP, FAAAAI, FACAAI
May 20th 2022Contemporary Pediatrics sat down with one of our newest editorial advisory board members: Vivian P. Hernandez-Trujillo, MD, FAAP, FAAAAI, FACAAI to discuss what led to her career in medicine and what she thinks the future holds for pediatrics.