Misreporting of Health Statistics by Federal Agencies: An Ethical Problem for Statisticians
Irwin D. Bross, PhD
Former Director of Biostatistics, Roswell Park Memorial Institute, Buffalo, New York
Originally received January 5, 1985
Background
Nowadays, more than 90% of U.S. science is supported by government agencies. As Eisenhower predicted, this grant support increasingly goes to scientists whose conclusions support the policies of the funding agencies themselves. This government-dominated research might be called “official science.” I contrast official science with the application of standard scientific methodology to factual evidence, or “normal science.” When normal science leads to findings that contradict agency policies on health hazards, government agencies often misreport the findings. I want to highlight two very recent examples of such misreporting of health statistics, one on radiation and the other on Agent Orange.
The main reason federal agencies can get away with such malpractice is that they control the purse strings and so influence the main messages communicated by scientists. Through direct or indirect subsidies to medical and scientific groups, the agencies control virtually every professional function, from scientific meetings to the editorial content of journals to the choice of speakers invited to symposia. Moreover, decision-makers in these groups are often dependent on federal grants and try to curry favor with the agencies. Hence, the federal agencies currently have the power to suppress criticism of their misreporting and of other policies that endanger the public health and safety.
Although agency policies are often asserted in the name of science, they are determined by a political, not a scientific, process. My guess is that the 5-to-2 vote against publication of this article in The American Statistician is fairly typical of the extent to which government agencies can now control ASA, AAAS, and other scientific societies. Normal science has become an endangered species. In my view, most of the ethical problems in science are ultimately due to the corruption of normal science by official science.
“We invite you to invest in the future of our profession,” says an ASA form letter for the building fund. However, the future of professional statistics is not ensured by buildings or by ethics committees: it is built on the personal integrity of statisticians and on the responsible actions of their organizations.
Department of Energy Influence on ASA Coolfont Reports on Radiation Hazards
My commentary on the undue influence of government agencies on the editorial content of ASA journals (1) was ignored by those responsible for this bias. Thus, Amstat News announces that Coolfont IV on “Radiation and Health” will “be published in the early fall” by ASA (2). Despite the claims that these are open meetings, they are organized, funded, and mainly attended by federal staffers and persons receiving agency funds (with very few exceptions). For all practical purposes, only papers that do not conflict with official positions are invited to these meetings.
For example, the federal sponsor of Coolfont IV (the U.S. Department of Energy) is supposed to be concerned with occupational safety, but the findings of my numerous publications on radiation hazards in the workplace conflict with the official position. Here is an example of what will NOT appear in Coolfont IV. As an original member of the Oversight Committee of the study of nuclear submarine workers at the Portsmouth Naval Shipyard (PNS) (and the only professional biostatistician on the panel), I have repeatedly pointed out that the government has misreported the PNS statistical evidence as negative. Standard statistical analyses of the PNS data have repeatedly shown excess risks of leukemia and cancer (3,4).
In the latest (1984) case-control study of leukemia, which was once again reported as negative (5), no fewer than three different statistical tests show a significant excess of myeloid leukemia among PNS workers with badge doses of one rem or more. Indeed, when the National Institute for Occupational Safety and Health (NIOSH) followed my advice on how to calculate relative risks, it found a five-fold risk in this category (6). Whenever the risk estimates conflict with the official policy that “low-level radiation is harmless,” government agencies consistently misreport their statistical results. While ASA may not be able to stop such practices, it should certainly not support them (1). There is no good reason for ASA to be a house organ for the federal agencies, even if a few members benefit from such arrangements.
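To make concrete what is at stake in the choice of method, here is a minimal sketch, in Python, of the standard relative-risk (odds-ratio) calculation for the 2×2 table of a case-control study, with a Woolf (logit) confidence interval. The counts below are hypothetical illustrations only, not the actual Portsmouth data.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 case-control table with a Woolf (logit) 95% CI.

    a = exposed cases,   b = exposed controls
    c = unexposed cases, d = unexposed controls
    """
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(odds ratio)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# Hypothetical counts for workers with badge doses of one rem or more
# (illustrative only; NOT the actual Portsmouth data):
or_hat, lo, hi = odds_ratio_with_ci(a=5, b=10, c=10, d=100)
print(f"odds ratio = {or_hat:.1f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these illustrative counts the estimated odds ratio is 5.0 and the 95% confidence interval excludes 1.0; under normal statistical practice, a result of this kind must be reported as positive.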
Very sensitive health issues are involved here. Thus, one reason for the official policies is to avoid paying fair compensation for injuries from low-level radiation (or other environmental hazards such as Agent Orange). Public attention is therefore focused on such issues. The plaintiffs, such as Utah civilians or Atomic Veterans, bitterly resent these biased government reports and those who defend them. So when ASA seems to take sides against the public, as it has done with the Coolfont series, the credibility of all professional biostatisticians is eroded.
Few of the letters I received on my previous commentary addressed my specific concerns about radiation risks. Most, however, endorsed its two general themes. Several biostatisticians offered their own “horrible examples” of statistical reports by the federal agencies. Almost all of the letters were concerned with the lack of editorial sensitivity to undue government influence shown in the replies to my commentary. Several asked me to continue to speak out on these issues on behalf of the many ASA readers who object to ASA editorial policies but feel they have no say in them.
Hence, in this commentary, let us consider what can be done to make the public aware that professional statisticians are very much concerned about the honesty of statistical reports issued by their own government. In these ethical issues, it is particularly important to deal with concrete examples rather than abstractions. So let us consider a recent “horrible example” and what ASA could do about this specific case.
CDC Misreporting on Agent Orange and Birth Defects
Those of us without inside information would have believed the media reports that the 1984 statistical study by the Centers for Disease Control (CDC) of birth defects in children of Vietnam veterans exposed to Agent Orange had turned out to be “negative” (i.e., that it proved Agent Orange does not cause birth defects in the offspring of men exposed to it). However, according to a Science news story (7), CDC has publicly confessed, without realizing it, that it knowingly and deliberately misreported the statistical results of its study.
According to Science magazine, “CDC scientists argue in their report that although these results [showing an increase in certain types of birth defects] are ‘statistically significant. . . they may not be biologically significant.’” ASA members who remember the cigarettes-and-lung-cancer controversy of the 1950s may also remember this old alibi. It was used then by apologists for the tobacco industry to try to explain away the excess lung cancer among cigarette smokers. It is used here by CDC to try to explain away the excess spina bifida and other serious birth defects in the children of the Vietnam veterans who were exposed to Agent Orange.
In line with normal statistical and scientific practice, these statistically significant excess birth defects should have been reported as “positive” (however many qualifications might be added). For CDC to tell the media that the Agent Orange study was negative was both unprofessional and unethical, though not unusual. There is no “statistical controversy” here, only misreporting of the results.
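The statistical question here is elementary. A minimal sketch, using hypothetical counts (not the CDC study’s actual data), of the standard two-proportion test that decides whether an excess of a birth defect such as spina bifida is statistically significant:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference of two proportions
    (normal approximation with a pooled variance estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts (illustrative only; not the CDC study's data):
# 21 affected births among 7,000 exposed, 7 among 7,000 unexposed.
z, p = two_proportion_z_test(x1=21, n1=7000, x2=7, n2=7000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Whatever one thinks of “biological significance,” the objective part of the report is the p-value: with these illustrative counts it falls well below the conventional 0.05 level, and such an excess is, by normal scientific practice, a positive finding.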
Is there anything that ASA can do about this kind of bad statistical practice by a federal agency? If the ASA has the courage of its convictions, it certainly can help to stop such abuses. One of the main reasons why the federal agencies misreport their statistical studies is that they know they can get away with it. If there is going to be a backlash, they will think twice. Therefore, professional criticism by ASA would put a damper on the most flagrant abuses. Such courageous action by ASA would enhance the public credibility (and demonstrate the integrity) of professional statisticians. As a step in this direction, ASA can print this commentary without a long delay.
Should it do so? One argument against doing so is that it might jeopardize the Coolfont series. However, ASA can do without the few dollars ($5,000, according to the reply to my commentary) that it gets for printing the official story of “Radiation and Health.” From the letters that I’ve received, it would seem that professional statisticians value their integrity more than the $5,000.
As for CDC’s misreporting on Agent Orange and birth defects, apologists for CDC might argue that my criticism is unfair: why shouldn’t the biological as well as the statistical significance be considered? Any experienced professional statistician would have a simple answer: statistical significance is determined by objective procedures, whereas biological significance is a highly subjective matter. CDC was supposed to provide an objective evaluation of its statistical data on genetic damage from Agent Orange. It was not asked to give its opinions or to restate agency policies on Agent Orange. Unfortunately, no matter how simple and straightforward the statistical issues may be, apologists always try to confuse and obscure them, and they generally succeed.
Would setting up an ad hoc ASA committee be a good way to resolve an impasse over Agent Orange? The record of such committees is not very encouraging. One or more apologists would automatically be included (for “balance”). While the open-minded members have no agenda, the apologists do. In committeemanship, persistence generally wins out over professionalism.
Thus, when an ASA committee reviewed testimony on leukemia among nuclear submarine workers, its report was so “balanced” (i.e., ambivalent) that either side could claim victory. The bargaining to reach a consensus (which is a political and not a scientific process) produces the kind of weasel-worded statement that does not inspire public trust in statisticians. The committee should have answered a simple, specific question: Did the Portsmouth data give results that are significant or not? Unfortunately, it seems to be almost impossible for committees to take a no-nonsense approach.
If we want a better public image, we are going to have to do a better job of dealing with public affairs. This means changing some deeply ingrained habits, because there is no quick and easy fix. A key strategy would be to rely on the personal integrity of competent professionals, rather than on committees. A group of well-meaning persons who don’t understand the subject-matter area and have little experience in the kind of data analysis used in public affairs will muddle the issues. Plain speaking by statisticians is what is needed to avoid the image of statistics as a “smokescreen.”
We should come right out and say that biostatistics provides objective procedures for reaching public health decisions from statistical data (8). From the standpoint of the public, this is our strength: It enables us to make statements which will generally be true and to make decisions which will usually be right. From the standpoint of government agencies, however, this objectivity is a weakness. The federal agencies want support for official policies—they are not concerned with what is true or right. By means of subjective opinion or interpretation, CDC tries to justify the official policy on Agent Orange. We would be inconsistent to condemn this practice and then to rely on equally subjective opinion or interpretation in the evaluation of a report. It is a betrayal of the basic principles of modern statistics.
While public relations gimmicks are often touted as the quick way to improve a profession’s public image, in the long run the best way to earn the respect of the public is for statisticians to do a good job in dealing with health hazards and other issues. If a panel cannot even agree on the “tools of the trade” and their proper use, the public can hardly be impressed with the objectivity of statistical methods. We must demonstrate by our actions (not talk) that there are methods (largely developed by R.A. Fisher) that work, and work well, in the real world when they are competently applied.
While those who control ASA editorial policy may not agree, both the public and the professionals know that the business of a statistician is to deal with facts in the real world. There is, of course, a big difference between the statistical methods that are useful in the real world and the academic mathematical exercises that are eminently publishable in ASA journals. The public image of professionals is not enhanced by such academic fun and games. So a first step toward a better image is to have a little space in ASA publications—not whole issues like Coolfont, but at least a few pages—to deal with major issues in the real world. The editorial decisions for this space should be made by working professionals.
For example, ASA might run a regular “horrible example” feature that would allow prompt exposure of statistical misreporting by the federal agencies. Judging by the examples in the letters I’ve received, Agent Orange would be just one of many such examples. Health is just one of the areas where agency policies bias the statistical reports; economic and other reports are also tailored to political policies. Perhaps there should also be an annual competition for a Golden Fleece award for the worst statistical report of the year. The award could be publicized at the ASA annual meeting. Some of the federal agencies would make an effort to avoid getting this award.
To be realistic, before we can earn the respect of the public, we will have to try to put our own house in order. The same basic complaints about the way that ASA is run (e.g., the reviewing procedures, the content of journals, the format of meetings, etc.) have been made for years. They are ignored. What can be done about it?
Perhaps something could be done, if the ASA membership is ready for such a step. There is no reason why ASA elections have to turn on personalities instead of issues. There could be a reform slate for those who would like to see changes and a slate of candidates who support the status quo. Those of us who are tired of seeing ASA manipulated by insiders could vote for officers pledged to try to open up the organization.
Even though the phrase “liars, damn liars, and statisticians” is often quoted, it is clearly a slur on our integrity. Do we deserve it? While we have been inept in public affairs, we are certainly no worse than other professionals. Our misfortune is that when apologists want to tie an issue up in knots, they generally call the impasse a “statistical controversy” (even though there would be no real disagreement among competent professionals). We are also blamed for the awful “statistical reports” of the government agencies, although often no professional statisticians are involved.
If ASA had the gumption to deal with real world issues, instead of academic abstractions, and pointed out the statistical misreporting or blunders involved in these fabricated controversies and reports, it could show the public that the real professionals are on their side.
References
1. Bross, I.D.J. (1984), How to avoid undue influence of federal agencies on the editorial content of ASA journals, American Statistician 38(3), 226-228.
2. Amstat News (Sept.-Oct. 1984), 1984 Coolfont conference on radiation and health, 8.
3. Bross, I.D.J. (1981), Direct estimates of low-level radiation risks of lung cancer at two NRC compliant nuclear installations: why the new risk estimates are 20 to 200 times the old official estimates, Yale Journal of Biology and Medicine 54, 307-328.
4. Bross, I.D.J., and Driscoll, D.L. (1983), Data on lung cancer in radiation workers, Journal of the Royal Society of Medicine 76(5), 431-432.
5. Bross, I.D.J. (1984), Confirm or deny: nuclear submarine workers at Portsmouth Naval Shipyard had excess leukemia, unpublished report to the National Institute for Occupational Safety and Health.
6. NIOSH (1984), A case-control study of leukemia at a nuclear naval shipyard, draft report.