Big Problems Undercut Research Findings

May 21st, 2014 by MorganDowney

Research is the key to understanding obesity and to developing accurate and effective treatments, prevention approaches and public policies. However, researchers have found significant problems both in observational studies and in the interpretation of research results.

One study found that statistical results in psychological studies are biased toward researchers' own expectations and that authors often dismiss data inconsistent with their own hypotheses. Bakker and Wicherts found that, of 281 articles examined, around 18% of the statistical results were incorrectly reported and around 15% of articles contained at least one statistical conclusion that proved to be incorrect. Errors were often in line with researchers' expectations, and errors were higher in journals with low impact factors.

A study published in Radiology by Ochodo and colleagues examined diagnostic-accuracy studies in journals with high impact factors and found frequent errors. These included over-interpretation, overly optimistic abstracts, discrepancies between study aims and conclusions, conclusions based on selected subgroups, failure to include a sample size calculation, failure to state a test hypothesis, and failure to report confidence intervals.

In a study published in Statistics in Medicine in January 2014, Schuemie and colleagues found that a majority of observational studies would declare statistical significance when no effect is present: at least 54% of findings with p < 0.05 are not actually statistically significant.
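
The mechanism behind such false positives is easy to demonstrate: an unadjusted confounder can make a null exposure look significant. The sketch below is not Schuemie's method, and every parameter in it is invented; it simply simulates many studies in which the exposure has no causal effect on the outcome, yet a shared confounder pushes most of them past p < 0.05.

```python
import math
import random

def two_sample_p(a, b):
    # Two-sided p-value from a two-sample z-test (normal approximation)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
n_studies, n, false_positives = 200, 200, 0
for _ in range(n_studies):
    exposed, unexposed = [], []
    for _ in range(n):
        confounder = random.random() < 0.5
        # Exposure tracks the confounder but has NO effect on the outcome
        is_exposed = random.random() < (0.8 if confounder else 0.2)
        # Outcome is driven by the confounder alone, plus noise
        outcome = (1.0 if confounder else 0.0) + random.gauss(0, 1)
        (exposed if is_exposed else unexposed).append(outcome)
    if two_sample_p(exposed, unexposed) < 0.05:
        false_positives += 1

print(f"{false_positives / n_studies:.0%} of null studies reached p < 0.05")
```

With these invented settings the unadjusted comparison "finds" an effect in nearly every null study; adjusting for the confounder would remove it.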

Closer to home, Jayawardene and colleagues found significant discrepancies in self-reported height and weight among adolescents in the National Youth Physical Activity and Nutrition Study, 2010. Underweight students under-reported height and over-reported weight, while overweight and obese students over-reported height and under-reported weight. Weight-loss behaviors, both healthy and unhealthy, were associated with BMI underestimation, while fast food consumption and screen time were associated with overestimation. These problems can work their way into more general views of obesity. For example, many people believe that obesity is much higher in the Southeastern United States, but Allison and colleagues, using direct measurements, found that the West North Central and East North Central Census divisions have higher prevalence. Likewise, Hattori and Sturm found that approximately one in six to seven obese individuals was misclassified as non-obese due to underestimation of BMI.
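
The BMI arithmetic shows how little misreporting it takes to change a weight category. The figures below are invented for illustration, not drawn from the study; they only follow the direction of misreporting described above for heavier students.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Invented example: true measurements put this person in the obese range (BMI >= 30)
true_bmi = bmi(95.0, 1.75)      # about 31.0
# Self-report: height over-reported by 3 cm, weight under-reported by 4 kg
reported_bmi = bmi(91.0, 1.78)  # about 28.7 -- now classified merely "overweight"
print(round(true_bmi, 1), round(reported_bmi, 1))
```

A 3 cm and 4 kg misreport is enough to move this hypothetical adolescent out of the obese category entirely, which is exactly how self-report errors depress measured prevalence.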

One of the most disturbing research flaws was a paper published in 2013 by Hand, Hebert and Blair, which found that across the 39-year history of the NHANES, energy intake data for 67% of women and 59% of men were “not physiologically plausible.” The authors stated, “The confluence of these results and other methodological limitations suggest that the ability to estimate population trends in caloric intake and generate empirically supported public policy relevant to diet-health relationships from US nutritional surveillance is extremely limited.”

The Body Mass Index (BMI) has been problematic for some time, resulting in misreporting and misclassification. One study found that 29% of subjects classified as lean and 80% of individuals classified as overweight according to BMI had a body fat percentage within the obesity range. Cardiometabolic risk factors were higher in BMI-classified lean and overweight subjects whose percent body fat fell within the obesity range. A study using bioelectrical impedance analysis to estimate body fat found that BMI-defined obesity was present in 19% of men and 25% of women, while percent-body-fat-defined obesity was present in 44% of men and 52% of women. Again, BMI had high specificity but low sensitivity: a BMI cutoff of 30 misses more than half of people with excess fat. The authors speculate that this might explain the unexpectedly better survival of overweight and mildly obese patients.
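
High specificity with low sensitivity works out as follows. The counts here are hypothetical, chosen only to mirror the pattern reported: a BMI cutoff of 30 rarely mislabels the lean but misses over half of those with excess body fat.

```python
# Hypothetical 2x2 counts: body-fat-defined obesity (truth) vs. BMI >= 30 (test)
tp, fn = 40, 60   # of 100 people with excess body fat, BMI >= 30 flags only 40
fp, tn = 5, 95    # of 100 people without excess fat, BMI rarely raises a false alarm

sensitivity = tp / (tp + fn)   # 0.40: BMI misses 60% of excess-fat cases
specificity = tn / (tn + fp)   # 0.95: but almost never mislabels the lean
print(sensitivity, specificity)
```

A test like this looks reassuring when judged by specificity alone, which is how a large share of body-fat-defined obesity can hide inside the "normal" and "overweight" BMI categories.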

Klesges and colleagues examined 77 papers in the important area of childhood obesity prevention published from 1980 to 2008. They found that all of the studies lacked full reporting of generalizability elements, including intervention staff, implementation of intervention content, costs, and program sustainability. Somewhat similar results were found in 27 publications covering community-based interventions for diet, exercise and smoking cessation. Dzewaltowski et al. found that while 88% reported participation rates among eligible members of the target audience, only 11% reported the “adoption” rate among eligible community-based organizations or settings. They also found that few studies reported on the representativeness of participants, whether individuals maintained the behavior change, or whether organizations maintained or institutionalized the interventions.

These disturbing results come amid profound doubts arising in several areas, not only psychological science but also cancer research, where several large pharmaceutical companies have reported that many study results could not be replicated. Concerns go back to a paper by J.P. Ioannidis, “Why Most Published Research Findings Are False.” Ioannidis was also an author of another paper, which found that many observational studies published in the New England Journal of Medicine, JAMA and the Annals of Internal Medicine made clinical practice recommendations without stating the need for a randomized clinical trial.

In 2010, Graham Colditz noted that, “Prevention trials recruit large numbers of healthy participants, offer them a therapy and then follow them over many years because the chronic diseases being prevented are relatively rare. With substantial noncompliance (often in the range of 20% to 40% over the duration of the trial), an intention-to-treat analysis is no longer unbiased, but rather gives a biased estimate of the effect, typically underestimating the magnitude of the association that is seen in observational studies in which those participants who have had exposure to a particular lifestyle component are compared with those without such an exposure.”

Tajeu et al. reported that, in the peer-reviewed obesity literature, of 855 articles examined, 7.3% presented odds ratios. Of these, 23% were presented incorrectly. Overall, almost one-quarter of the studies presenting odds ratios misinterpreted them. Menachemi et al. reviewed 937 papers in the nutrition and obesity literature and found that nearly 10% had overreaching conclusions.
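
The most common misinterpretation is reading an odds ratio as if it were a relative risk. With a hypothetical 2x2 table (all numbers invented), the two diverge noticeably once the outcome is common:

```python
# Hypothetical 2x2 table: outcome vs. no outcome, by exposure
a, b = 30, 70   # exposed: 30 with the outcome, 70 without
c, d = 10, 90   # unexposed: 10 with the outcome, 90 without

odds_ratio = (a / b) / (c / d)                  # (30/70)/(10/90), about 3.86
relative_risk = (a / (a + b)) / (c / (c + d))   # 0.30 / 0.10, about 3.0
print(round(odds_ratio, 2), round(relative_risk, 2))
```

Reporting the odds ratio of roughly 3.86 as “nearly four times the risk” overstates the threefold relative risk, and the gap widens as the outcome becomes more common, which is the kind of misstatement Tajeu et al. counted.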

These problems are only amplified when a university or journal press office writes up a press release and, in many cases, the media boil the findings down to a 10-second sound bite. Funding agencies, journals and professional societies need to be more diligent in making sure that obesity research is conducted at the highest level of scientific accuracy, for many lives and millions of dollars may be affected.

 
