EFSA, Bias, and Titanium Dioxide

EFSA cites a paper that lacks a vehicle control, as well as several papers with flawed study designs and inappropriate statistical analyses. All of this increases the risk of bias in EFSA’s Opinion on E171 (Titanium Dioxide).

by Lyle Burgoon

When the European Food Safety Authority (EFSA) reviews the science about a chemical, they review all of the science, right? Like, they look at the statistical analyses and the study design, right? Because those two factors determine the all-important p-value (which we’ve already discussed shouldn’t be treated like a gatekeeper, but regulators continue to do so in violation of the guidance from the American Statistical Association). And study size, a part of the study design, is also critical to consider.

Nope — EFSA does not look at the statistical analysis or the study design. See for yourself, here. To be clear, EFSA does mention study design several times in that Appendix. But they’re not talking about aspects of the study design that will impact or bias a p-value. They are talking about things like dye interference with nanoparticle counting approaches, the type of study (e.g., cell counting), and the relevance of the test system. When EFSA says study design, they don’t mean “study design” the way a statistician means it.

So when I say “study design”, to be clear, I mean questions like these: Are the animals housed in the same cage (nesting)? Are the animals all siblings (nesting)? Were assays run on separate days (day effects)? Were the cells all in the same incubator (incubator effects)? Were the same lots of media and constituents used (media and constituent lot effects)? In other words, when I say study design, I mean identifying and taking appropriate steps to minimize the bias and variance associated with these non-experimental sources of variability. You know, the kinds of things that impact a p-value.
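To make that concrete, here is a minimal sketch, in Python with pandas and statsmodels, of one way an analyst could account for that kind of structure instead of ignoring it. The file name and the column names (response, dose, cage, assay_day) are hypothetical placeholders, not anything from the studies EFSA reviewed.

```python
# Minimal sketch: a mixed-effects model that acknowledges nesting.
# "study_data.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Suppose each row is one animal: its dose group, the measured endpoint,
# the cage it was housed in, and the day its assay was run.
df = pd.read_csv("study_data.csv")

# Cage enters as a random effect (grouping) and assay day as a fixed effect,
# so animals that share a cage or a day are not pretended to be independent.
model = smf.mixedlm("response ~ dose + C(assay_day)", data=df, groups=df["cage"])
result = model.fit()
print(result.summary())
```

Run a plain ANOVA on the same animals while ignoring cage and day, and the correlated measurements get counted as if they were independent; the standard errors shrink and the p-value drops for reasons that have nothing to do with the chemical.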

EFSA and Risk of Bias

What this all means is that EFSA likely has a false positive problem on its hands. In other words, there is a high risk of bias towards results that say titanium dioxide is likely genotoxic when in fact it’s probably not.

So let’s take a look at some of the studies EFSA is relying upon that suggest titanium dioxide is genotoxic.

What’s Wrong with Kang et al.?

EFSA used Kang et al. (2008) to argue that there was positive evidence of genotoxicity.

So what’s wrong with Kang et al.? For starters, the authors used cells from just 1 human volunteer. That’s a bit of a non-starter. We cannot possibly make any inferences about how the human population might respond to a chemical exposure based on the results in a single person. The fact that this even got published is striking to me.

You cannot do meaningful statistics on cells from a single person. Because the cells all come from one donor, they are not independent samples; they are, in effect, pseudoreplicates that are too highly correlated to tell us anything about anyone beyond that one donor.
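To see why, here is a toy simulation in Python. Everything in it is invented for illustration (none of the numbers come from Kang et al.): the population-average treatment effect is set to zero, but each hypothetical donor responds idiosyncratically. Testing 30 cells from a single donor as if they were independent replicates answers the question “did this one donor’s cells shift?”, not “does the chemical do this to people in general”, and the false positive rate for the population-level claim explodes.

```python
# Toy simulation (invented numbers, not data from Kang et al.):
# the true population-average treatment effect is zero, but each donor
# has an idiosyncratic response. Treating many cells from ONE donor as
# independent replicates makes that donor's quirk look like an effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_cells = 2000, 30
false_positives = 0

for _ in range(n_sims):
    donor_effect = rng.normal(0, 1.0)          # donor-specific response, mean 0 across people
    control = rng.normal(0, 0.5, n_cells)      # 30 control cells from that donor
    treated = donor_effect + rng.normal(0, 0.5, n_cells)  # 30 treated cells, same donor
    _, p = stats.ttest_ind(control, treated)   # cells treated as independent replicates
    false_positives += (p < 0.05)

print(f"False positive rate for the population-level claim: {false_positives / n_sims:.2f}")
```

The fix is not a fancier test on one person’s cells; it is independent biological replicates, meaning cells from several different donors, with the donor as the experimental unit.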

Beyond that, the statistical analysis was not done right. The authors start off with an ANOVA. They follow that up with a Mann-Whitney U test as a post hoc test. The purpose of a post hoc test is to identify the specific group-wise comparisons that are significant. The problem is that the authors need to control the Family-Wise Error Rate (FWER): every additional uncorrected pairwise comparison raises the chance of at least one spurious “significant” result. The Mann-Whitney U test does not apply any type of FWER control on its own. Thus, the false positive rate is automatically higher for this approach. In other words, any significant findings are likely false positives.
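Controlling the FWER is not hard, for what it’s worth. Here is a rough sketch of one way to do it in Python, using Holm’s step-down adjustment over all pairwise Mann-Whitney comparisons; the group data below are random placeholders, not values from the paper.

```python
# Sketch of a FWER-controlled post hoc analysis.
# The group arrays are random placeholders, not data from Kang et al.
from itertools import combinations
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
groups = {
    "control": rng.normal(10, 2, 8),
    "low":     rng.normal(10, 2, 8),
    "mid":     rng.normal(10, 2, 8),
    "high":    rng.normal(10, 2, 8),
}

# Raw pairwise Mann-Whitney U tests across all six group comparisons.
pairs, raw_p = [], []
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    pairs.append(f"{name_a} vs {name_b}")
    raw_p.append(p)

# Holm step-down adjustment controls the family-wise error rate; compare
# the adjusted p-values to 0.05, not the raw ones.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for pair, p, p_adj, r in zip(pairs, raw_p, adj_p, reject):
    print(f"{pair}: raw p={p:.3f}, Holm-adjusted p={p_adj:.3f}, reject={r}")
```

Under Holm (or Bonferroni), a comparison only counts as significant if it survives adjustment across the whole family of tests, which is exactly the protection the paper skipped.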

Translation: any conclusion of genotoxicity drawn from this study is likely a false positive. In my opinion, this paper never should have been published, and it should be retracted because it is misleading.

Continue reading to learn what is wrong with the other papers EFSA used.

Dr. Lyle D. Burgoon is a toxicologist, biostatistician/data scientist, and risk assessor. He has over 15 years of experience as a government risk assessor, a senior science and policy advisor, and a private consultant. He is currently the President and CEO of Raptor Pharm & Tox, Ltd.

Source: Toxic Truth Blog
