Epidemiology, Risk Estimation and “Probability of Causation”

Epidemiology is the study of epidemics and their origins. It uses a variety of tools to ferret out the components of a disease outbreak and to assign probabilities to the possible causes. One important tool is the Geographical Information System (GIS), which plots the frequency of disease occurrence on a map or other spatial diagram and compares it with the presence and concentration of toxic or infectious environmental factors, thought to be associated with that disease, in the same geographical area. It was through this kind of disease mapping that the origin and cause of a cholera epidemic in central London were traced in the 1800s, and this marked the beginning of the science of epidemiology.

The probability that a certain contaminant is associated with a particular disease is determined through statistical methods that compare two otherwise well-matched populations, one with the contaminant present and the other with it absent. Using statistical sampling from the two populations, one can estimate the mean and standard deviation for each. Assuming that the populations are normally distributed and that possible confounding variables are well matched between the two samples, there are statistical formulas for calculating the probability that the two populations are really the same or really different. The starting assumption is that there is no difference between the two populations (this is called the “null hypothesis”). The statistical test then calculates the probability that the observed difference could have occurred by chance if there were really no difference between the populations. Normally one concludes that there is no detectable difference if this probability is greater than 5%, and that the difference is statistically significant if the probability is 5% or less; in other words, we are willing to take a 5% chance that our conclusion is wrong.

We also run a risk when using large, “pooled” data sets from all over the world: the uncontrolled variation becomes very large and the populations are no longer normally distributed. These unwieldy data sets often take on the characteristics of a Cauchy distribution, in which adding more subjects fails to improve the accuracy of the estimate of the mean. The estimates of the mean become so uncertain, and the standard deviations so large, that detecting a difference in the means statistically becomes almost impossible. Another common problem when attempting to demonstrate a negative (i.e., no association) is that large amounts of uncontrolled variation bias the results toward the null, suggesting no difference when in actuality there may be a significant one. Such an outcome says more about the quality of the data than about any lack of association.
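As a rough illustration of both points, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the populations, sample sizes, and effect size are invented purely for illustration): a two-sample test of the null hypothesis against the conventional 5% level, followed by a demonstration that sample means drawn from a Cauchy distribution do not settle down as more subjects are added.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# (a) Two hypothetical, well-matched populations; the "exposed" group has a small shift.
exposed   = rng.normal(loc=10.5, scale=2.0, size=200)
unexposed = rng.normal(loc=10.0, scale=2.0, size=200)

t_stat, p_value = stats.ttest_ind(exposed, unexposed)
print(f"p = {p_value:.3f} ->",
      "statistically significant" if p_value <= 0.05 else "no detectable difference")

# (b) Cauchy-distributed data: the running mean keeps jumping around, so adding
# subjects does not improve the estimate of the distribution's center.
cauchy_sample = rng.standard_cauchy(size=100_000)
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: running mean = {cauchy_sample[:n].mean():8.3f}")
```

Re-running part (b) with different seeds shows the running mean lurching to new values even at very large n, which is the pooled-data problem described above in miniature.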

Once we have concluded that there is a significant association between a contaminant and a disease, we can then use more elaborate statistical methods to estimate quantitatively how much contamination causes how much disease. This process is called risk analysis or risk assessment and is far more problematic than simply demonstrating an association. Some compensation schemes developed by governments for toxic torts rest on the concept of “probability of causation” (POC), which derives from the legal standard of “more likely than not.” In this scheme, the calculated probability that the measured level of contamination caused the observed disease must reach 50% or more in order to qualify for any compensation at all. Some have suggested a partial compensation scheme that pays proportionately at probability levels below 50%. A huge problem with either scheme is that calibration curves must be developed from reference populations with known exposures and known disease outcomes. These reference populations are few and hard to come by, and in almost all cases they contain very considerable inaccuracies and biases, not to mention possible deliberate distortions.

The principal reference population which has been used to estimate the quantitative effect of ionizing radiation on human beings is the so-called “Life Span Study” conducted by the Atomic Bomb Casualty Commission (ABCC) on the survivors of the atomic bombings of Hiroshima and Nagasaki, Japan. This study has become the “gold standard” for radiation risk estimates in humans. The following is an appendix from a letter I sent to the Secretary of Veterans Affairs during my tenure as a member of the Veterans Advisory Committee on Environmental Hazards (VACEH) explaining why I think the POC model is not a valid or useful model for compensation.
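To make the mechanics concrete, the sketch below (Python again; the risk slope, dose, and award amount are hypothetical placeholders, not values from the Life Span Study or any agency table) computes a probability of causation as the attributable fraction ERR/(1 + ERR) under a simple linear excess-relative-risk model, applies the all-or-nothing 50% rule, and notes where a proportional scheme would differ.

```python
# Hedged sketch of a "probability of causation" screen under a simple linear
# excess-relative-risk model. ERR_PER_GY, the dose, and FULL_AWARD are
# illustrative assumptions only.

ERR_PER_GY = 0.5      # assumed excess relative risk per gray for some cancer site
FULL_AWARD = 100_000  # assumed full compensation amount (arbitrary units)

def probability_of_causation(dose_gy: float, err_per_gy: float = ERR_PER_GY) -> float:
    """PC (attributable fraction) = ERR(D) / (1 + ERR(D)) for a linear model ERR(D) = b*D."""
    err = err_per_gy * dose_gy
    return err / (1.0 + err)

dose = 0.8  # Gy, an assumed reconstructed dose
pc = probability_of_causation(dose)
print(f"PC = {pc:.1%}")

# All-or-nothing rule: any compensation requires PC >= 50% ("more likely than not").
award_threshold = FULL_AWARD if pc >= 0.5 else 0
# Proportional alternative mentioned above: pay in proportion to PC below the 50% line.
award_proportional = FULL_AWARD * pc
print(f"threshold rule: {award_threshold}, proportional rule: {award_proportional:.0f}")
```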

Appendix

There are three major sources of error in the “POC” model which has been recommended by the committee as a basis for compensation, to replace the current presumptive model. The first concerns the reference population from which the risk estimates are derived; the second concerns the mathematical model itself; and the third concerns the dose reconstructions for the veterans. It is difficult to determine how each of these large individual errors contributes to the overall error in the “POC” estimates. Each component of error is discussed below:

(1) The reference population from which the cancer risk calibration curves (ERR/Gy) are derived consists entirely of the survivors of the atomic bombings at Hiroshima and Nagasaki. Since not all the survivors have died, not all the cancers have been counted. Furthermore, since the Japanese population has a very different baseline rate for many cancers compared to the United States, the ERR/Gy rates must be “corrected” for a North American population. The calibration curves are derived from dose estimates based on the survivors’ self-reported locations at the time of the bombing and incorporate a substantial “correction” for self-reported shielding factors. Only prompt radiation from the bomb is taken into account in the dose estimates, while residual radiation from fallout particles and activation products is ignored. In many cases the DS-86 dose estimates cannot be reconciled with the symptoms experienced by the survivors. Consequently, the Japanese government has already rejected the DS-86 dose estimates as the primary measure of eligibility for compensation.

Many of the control subjects in the Japanese study, who were beyond the 2 or 3 km limit, were among the early entrants into the city after the bombings and were contaminated by fallout and activation products. Although these early entrants have a statistically significant increase in the rates of several cancers compared to a national reference population, they are nevertheless included among the controls. Contamination of the control population has the effect of raising the baseline cancer rate and hence underestimating the ERR/Gy risk ratio. The excess relative risk model recommended by the Committee is especially sensitive to fluctuations in the baseline cancer rates. These calibration curves also fail to account for a substantial healthy survivor effect.
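A small worked example may help show why a contaminated control group matters (the rates below are invented solely to illustrate the arithmetic, not Life Span Study data): the excess relative risk is measured against the control group’s baseline rate, so any radiation-related excess among the controls inflates that baseline and shrinks the apparent ERR.

```python
# Illustrative-only cancer rates (cases per 10,000 persons).
rate_exposed       = 130.0  # observed rate in the exposed group
rate_true_baseline = 100.0  # baseline rate in a genuinely unexposed population
rate_contaminated  = 115.0  # "control" rate inflated by exposed early entrants

def excess_relative_risk(observed: float, baseline: float) -> float:
    """ERR = observed/baseline - 1, the relative excess over the baseline rate."""
    return observed / baseline - 1.0

print(f"ERR with clean controls:        {excess_relative_risk(rate_exposed, rate_true_baseline):.2f}")
print(f"ERR with contaminated controls: {excess_relative_risk(rate_exposed, rate_contaminated):.2f}")
# The second figure is smaller, i.e., the risk per unit dose is underestimated.
```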

(2) Although the Committee letter states that it prefers a “POC” model to a presumptive model for compensation purposes, it fails to mention that the use of the recommended ERR/Gy ratios from the Japanese data does not, in fact, yield a true “POC.” Rather, it yields an attributable fraction (AF) based on a very specific linear no-threshold model. Each different model used for estimating risk yields a different AF, so there is no clear “scientific” justification for calling the risk estimate derived from one particular model the “true” probability of causation. Since the calibration curves from the Japanese data are based on a single, immediate exposure to a very large dose of gamma and X-rays, plus neutrons, and the internal exposures from fallout and activation products (which are largely alpha and beta emitters) are completely ignored, one can argue that those calibration curves should only be used in situations of comparable radiation exposure. The exposure to the veterans at bomb tests was very different from that experienced by A-bomb survivors.
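The model dependence can be illustrated with a short sketch (the dose-response parameters below are arbitrary assumptions chosen only to show the spread, not fitted values): the same reconstructed dose yields noticeably different attributable fractions under linear, linear-quadratic, and threshold dose-response assumptions.

```python
# Attributable fraction AF = ERR(D) / (1 + ERR(D)) under three hypothetical
# dose-response models; all parameter values are arbitrary illustrations.

def af(err: float) -> float:
    return err / (1.0 + err)

def err_linear(d, b=0.5):
    return b * d

def err_linear_quadratic(d, b=0.2, c=0.3):
    return b * d + c * d * d

def err_threshold(d, b=0.5, d0=0.1):
    return b * max(d - d0, 0.0)

dose = 0.25  # Gy, an assumed low reconstructed dose
for name, model in [("linear (LNT)", err_linear),
                    ("linear-quadratic", err_linear_quadratic),
                    ("threshold", err_threshold)]:
    print(f"{name:>17}: AF = {af(model(dose)):.1%}")
```

None of these attributable fractions has any stronger claim than the others to being the “true” probability of causation; each simply encodes a different modelling assumption.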

A mounting body of scientific evidence suggests that low dose/dose rate exposures to alpha and beta emitting radionuclides, which are ingested or inhaled, can have a very different, and much larger, biological effect than that caused by high dose/dose rate external exposure to gamma and X-rays. In fact, to accurately assess the effect of internal exposures after the accumulation and distribution of radioactive elements in specific areas of the body, a process known as microdosimetry must be used to estimate the true dose to tissues or cells. Since cancer is a clonal event arising from a single precursor cell, the dose to individual cells is more important than an “averaged” whole body or organ dose. The microdistribution and localization of radionuclides within specific tissues and cell compartments must be taken into account when estimating the carcinogenic potential of a particular type of exposure. Different radionuclides with the same level of radioactivity can have very different biological effects depending on their localization in various body tissues.
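As an illustration of why an averaged organ dose can mislead (the cell counts and energies below are toy numbers, not real microdosimetry), the sketch compares a uniform exposure with a “hot particle” exposure depositing the same total energy in far fewer cells: the mean dose is identical, but the dose to the cells actually hit differs enormously.

```python
import numpy as np

N_CELLS      = 1_000_000   # toy number of cells in a tissue compartment
TOTAL_ENERGY = 1_000_000.0 # arbitrary dose units deposited in the whole compartment

# Uniform external exposure: every cell receives the same small dose.
uniform = np.full(N_CELLS, TOTAL_ENERGY / N_CELLS)

# "Hot particle" internal exposure: the same total energy concentrated in 0.1% of cells.
hot = np.zeros(N_CELLS)
n_hit = N_CELLS // 1000
hot[:n_hit] = TOTAL_ENERGY / n_hit

print(f"mean dose, uniform:      {uniform.mean():.3f}")
print(f"mean dose, hot particle: {hot.mean():.3f}")          # identical averages
print(f"dose to a hit cell:      {hot[:n_hit].mean():.1f}")  # 1000x the uniform dose
```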

Recent radiation experiments using cell cultures and new molecular and cellular biology techniques have demonstrated persistent genomic instability and bystander effects which cast serious doubt on the validity of the simplistic linear no-threshold (LNT) model currently used to estimate radiation risks. The LNT model requires extrapolation from high dose ranges of external gamma and X-rays to low dose ranges where the doses are mainly due to internal alpha and beta emitters. Such an extrapolation is not supported by accepted mechanisms of molecular and cellular radiation damage and therefore does not properly capture the risk due to low doses of internal alpha and beta emitters. The model also fails to account for individual biological and genetic susceptibility to radiation, which may cause a particularly susceptible subpopulation to bear a disproportionate share of the risk. For these reasons, the estimate of risk, ERR/Gy, and the corresponding confidence limits from the table in the Committee letter should be considered highly suspect when used for individual risk estimation.

(3) Dose reconstruction for Atomic Veterans is probably the largest single drawback to the use of a “POC/AF” model for compensation. Despite the millions of dollars already spent by the Defense Nuclear Agency to reconstruct veterans’ exposures to ionizing radiation, there is simply no way to accurately determine, quantitatively and microdosimetrically, the dose received by each veteran exposed at bomb tests fifty years ago. This difficulty has been confirmed by National Academy of Sciences and GAO reports. Even if such a feat were possible, there is no single additive dose value which can combine the injury potentials from all sources, qualities and microdistributions of radiation into a single overall risk estimate for each cancer and non-cancer disease.

Many of the Atomic Veterans were marched into ground zero shortly after bomb detonation, where they were exposed to fallout and activation products in much the same way as the early entrants into Hiroshima and Nagasaki, who arrived in the bombed-out cities over a period of days to weeks. The veterans, on the other hand, entered ground zero within minutes of detonation, and their doses would be correspondingly greater. The film badges and other dosimeters worn by some soldiers at the tests gave some indication of the external dose from X- and gamma rays but did not account for neutrons or for the effect of internal exposures to alpha and beta emitters. The recorded exposure data are less than accurate; in fact, there are many reports that these badges were insensitive, poorly calibrated and sometimes even discarded or misrecorded.

For the reasons listed above I believe that use of a “POC” model does not “eliminate the need for a presumptive list” and does not “explicitly and objectively account for the radiogenicity of a disease.”
