Harsh Critics of NREPP Who Fudge Their Homework


Morning Zen Guest Blog Post ~ Dennis D. Embry, PhD ~

When I worked in the U.S. House of Representatives as a U.S. Capitol Page fifty years ago, I learned a valuable lesson: people who loudly accuse others of a heinous fault are often quite guilty of that fault themselves, sometimes unconsciously.

Since the election, there has been much ado about fake and bad science, often from people who have no real record of conducting high-quality research, let alone submitting their work to the night of long knives called peer review, or having their work reviewed by rather august entities such as the National Academy of Sciences, the Institute of Medicine, a Surgeon General's report, or my fellow prevention scientists. Blind peer review is often worse than being on a reality TV program like Survivor, Big Brother, or Naked and Afraid.

When talking about the National Registry of Evidence-Based Programs and Practices (NREPP), one needs to discriminate between legacy program reviews and newly reviewed programs. A recently published article in the International Journal of Drug Policy by Dr. Dennis Gorman (Has the National Registry of Evidence-based Programs and Practices (NREPP) lost its way?) [5] critiques the 100+ newly reviewed NREPP submissions, asking whether NREPP has drifted away from the high standards applied to the 300+ legacy programs, and argues that the new reviews are less stringent. The legacy programs are often the most scientifically proven strategies in either prevention or treatment.

The legacy programs are widely cited in other reviews (e.g., Blueprints, IOM reports, Surgeon General reports) and have a deep level of prior research: multiple investigators using the very best experimental designs, such as comparative effectiveness trials, long-term follow-up, and systematic replications by different, independent scientists across the world. Most of the legacy programs represent the best scientific investments of the U.S. National Institutes of Health, the Centers for Disease Control and Prevention, and other federal agencies, as well as foundations and the European Union.

Dr. Gorman writes that the best science involves more than one study. He is correct, and it can be a very large task to review all of the studies on some well-proven prevention or treatment strategies. For example, PsycNET (psycnet.apa.org) lists 149 studies or publications on the Good Behavior Game, with 23 references to "randomized" trials. The National Library of Medicine lists 63 publications, 27 of which involve randomized controlled trials, some with very long follow-up. I do have a widely acknowledged conflict of interest as the main vendor of the Good Behavior Game. I cannot speak for other scientists, but I specifically require that any and all findings be published, in keeping with my commitment to science as a learning platform.

Implicit in Dr. Gorman’s critique is the notion of reliability and validity, which represent measures of “truthiness” (a clever term coined by Stephen Colbert). Higher-quality research typically reports measures of reliability and validity, not just statistical significance. A difference could easily be significant at p < .01 or even p < .001 yet be meaningless in practice. For example, I am pretty sure that most Americans know, with high statistical confidence, that texting while driving is potentially harmful, but that knowledge does not stop people from doing it; the gap between the two is a matter of social validity.
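To make that concrete, here is a minimal sketch in Python, using simulated numbers of my own invention rather than data from any study, showing how a practically trivial difference sails past p < .001 once samples get large:

```python
# Illustrative only: a tiny, practically meaningless effect reaches
# p < .001 once the sample is large enough. Simulated data, not real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                          # very large samples
control = rng.normal(loc=50.0, scale=10.0, size=n)
treated = rng.normal(loc=50.2, scale=10.0, size=n)   # shift of 0.2 points

t, p = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p-value: {p:.2e}")           # far below .001
print(f"Cohen's d: {cohens_d:.3f}")  # about 0.02, a negligible effect
```

The p-value screams "significant," yet an effect of two-hundredths of a standard deviation would change nothing in a classroom or clinic.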

Dr. Gorman’s critiques consistently demand transparency and good science of others’ research [1-4]. Ironically, given his calls for greater transparency in prevention research [3, 5, 6], his current paper does not rise to the level of good science by his own criteria. The paper neither reports the coding structure nor provides a link to the coding structures used, and it offers no measure of inter-observer agreement on his ratings of poor science. In other words, no independent party could easily replicate his findings using his methods. Inter-observer agreement is foundational to good science [7, 8].
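Reporting such agreement is not hard. Here is a minimal sketch, with invented ratings rather than anything from Dr. Gorman's paper, showing how two independent coders' judgments can be checked with Cohen's kappa:

```python
# Hypothetical sketch: Cohen's kappa for two independent coders rating
# the same programs as "strong", "weak", or "harmful". Invented data.
from collections import Counter

coder_a = ["strong", "weak", "weak", "harmful", "strong", "weak"]
coder_b = ["strong", "weak", "strong", "harmful", "strong", "weak"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement, computed from each coder's marginal rating frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(
    (freq_a[cat] / n) * (freq_b[cat] / n)
    for cat in set(coder_a) | set(coder_b)
)

# Kappa: how much better than chance the two coders agree.
kappa = (observed - expected) / (1 - expected)
print(f"observed agreement: {observed:.2f}, kappa: {kappa:.2f}")
```

A dozen lines and a table in an appendix would have let readers judge whether his "poor science" ratings were themselves reliable.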

I am also bothered by a curveball in the paper. Dr. Gorman claims that the recent reviews include programs considered potentially harmful, specifically naming EMDR (eye movement desensitization and reprocessing). That is a big claim, and it deserves citations; yet his citation concerns the definition of harm rather than actual research on such harm. At the National Library of Medicine (www.pubmed.gov), there are 454 publications on EMDR, and 129 of them appear to involve randomized controlled studies. If one searches "eye movement desensitization" AND randomized AND harm, there is one study [9], a multi-site, single-blind clinical study that concludes: “The results from the post treatment measurement can be considered strong empirical indicators of the safety and effectiveness of prolonged exposure and EMDR. The six-month and twelve-month follow-up data have the potential of reliably providing documentation of the long-term effects of both treatments on the various outcome variables. Data from pre-treatment and mid-treatment can be used to reveal possible pathways of change.”
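Anyone can reproduce such counts. Here is a minimal sketch using NCBI's public E-utilities interface to PubMed; hit counts drift as new papers are indexed, so today's numbers may differ from the ones above:

```python
# Sketch: reproduce a PubMed hit count via NCBI's public E-utilities.
# Counts change over time as new papers are indexed.
import json
import urllib.parse
import urllib.request

def pubmed_count(query: str) -> int:
    """Return the number of PubMed records matching a search query."""
    url = (
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
        + urllib.parse.urlencode(
            {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
        )
    )
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

query = '"eye movement desensitization" AND randomized AND harm'
print(query, "->", pubmed_count(query), "hits")
```

That is the kind of transparent, checkable search trail a claim of harm ought to rest on.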

Thus, Dr. Gorman’s “research” needs to be judged by his own standards.

PS. Are the new NREPP reviews weaker? Possibly. Unlike the legacy programs, the newer reviews carry no numeric score, which makes it harder for NREPP consumers to compare programs and practices across the newer reviews and the legacy programs. All of this is appropriate for the SAMHSA advisory committees to take up, with consultation from entities that do such reviews regularly, such as the Institute of Medicine and the Surgeon General, who were chartered to use the National Library of Medicine (www.pubmed.gov), along with scientists who do this kind of work.

References Cited

  1. Gorman, D.M., The irrelevance of evidence in the development of school-based drug prevention policy, 1986-1996. Eval Rev, 1998. 22(1): p. 118-46.
  2. Gorman, D.M., The best of practices, the worst of practices: The making of science-based primary prevention programs. Psychiatr Serv, 2003. 54(8): p. 1087-9.
  3. Gorman, D.M., Can We Trust Positive Findings of Intervention Research? The Role of Conflict of Interest. Prev Sci, 2016.
  4. Gorman, D.M., J.S. Searles, and S.E. Robinson, Diffusion of Intervention Effects. J Adolesc Health, 2016. 58(6): p. 692.
  5. Gorman, D.M., Has the National Registry of Evidence-based Programs and Practices (NREPP) lost its way? Int J Drug Policy, 2017. 45: p. 40-41.
  6. Gorman, D.M., A.D. Elkins, and M. Lawley, A Systems Approach to Understanding and Improving Research Integrity. Sci Eng Ethics, 2017.
  7. Sidman, M., Tactics of Scientific Research. 1988: Cambridge Center for Behavioral Studies. 428.
  8. Cook, T.D. and D.T. Campbell, Quasi-Experimentation: Design & Analysis Issues for Field Settings. 1979: Houghton Mifflin.
  9. de Bont, P.A., et al., A multi-site single blind clinical study to compare the effects of prolonged exposure, eye movement desensitization and reprocessing and waiting list on patients with a current diagnosis of psychosis and co morbid post traumatic stress disorder: study protocol for the randomized controlled trial Treating Trauma in Psychosis. Trials, 2013. 14: p. 151.

* * * * * *

Dennis Embry, President/Senior Scientist at PAXIS Institute – Dennis D. Embry is a prominent prevention scientist in the United States and Canada, trained as a clinician and developmental and child psychologist. He is president/senior scientist at PAXIS Institute in Tucson, Arizona. Dennis Embry serves on the scientific advisory board for the Children’s Mental Health Network and the U.S. Center for Mental Health Services Advisory Council.
