Historical Interviews
[Editor’s Note:
The appearance last summer of a highly critical article about
epidemiology with multiple quotations from prominent epidemiologists
and statisticians has been fueling discussions about the field ever
since (Science, July 14, 1995, pp. 164-169). Examples of such
discussions known to Epi Monitor include a departmental seminar held
at Johns Hopkins and a student-organized panel discussion at
Harvard on the future of epidemiology. A special session to revisit
this topic is planned for the upcoming SER meeting.
Given the
severe nature of the criticisms and the importance of the topic for
all epidemiologists, the Epi Monitor has reported on these events over
the last several months. This month our continuing coverage takes this
entire issue and goes behind the scenes to interview Gary Taubes, the
Science correspondent who prepared the article. The interview provides
valuable insights into the mindset of this national science reporter
and helps epidemiologists to better understand the motivations and
tactics of journalists. Also, the interview gives Mr. Taubes the
opportunity to expand in detail about his views on the shortcomings of
epidemiology. This interview gives epidemiologists a more in-depth
understanding of his criticisms than is possible to get from the
original Science article. The interview is lengthy; however, we
believe our readers will get a good return on their investment. We
welcome your comments and questions.]
Gary Taubes Faces
Epidemiology
Epi Monitor: I
think the readers will be interested in knowing something about your
background and how you came to write this article about epidemiology.
What part of the country are you from?
Taubes: Well, I
was born in Rochester, New York and moved to Washington when I was 12.
I went to college at Harvard and studied physics. I learned little. I
got a C- in quantum physics, and my advisor suggested I try law as a
career. I went on to get a Masters at Stanford in aeronautical
engineering because, at the time, I thought I wanted to be an
astronaut. I had always wanted to be an astronaut, but I weighed 220
pounds. I was a football player in college.
Epi Monitor:
You were 220?
Taubes:
Actually, I still weigh 220. There was little call for 220-pound
astronauts in 1978, so I went to journalism school at Columbia. I
wanted to do investigative reporting, but since I hadn’t worked for
any newspapers, I couldn’t get any good jobs. Actually, I had three
possibilities. One at the Dallas Morning News, one at CNN in Atlanta
and one at Discover in New York. Visiting Dallas ruled out the Morning
News, and CNN didn’t allow smoking in the newsroom, so I became a
science writer by default.
Epi Monitor: So
you started as a reporter at Discover magazine?
Taubes: Yes, in
1981. I started doing what every science writer does, which is writing
about good science, scientific breakthroughs and the like. In 1984 I
went off to Geneva to do a book about an experiment in high-energy
physics at CERN. Carlo Rubbia, who had already made the Nobel
Prize-qualifying discovery of the W and Z particles, claimed that he
was onto a discovery even more important than the one everyone knew
would win him the Nobel Prize, one that would be the greatest
breakthrough in physics in 40 years.
Physics has a sort of accepted
theory of the universe known modestly as the “standard model” and
everything that had ever been discovered, every experimental finding
that had ever been confirmed, had fit the standard model. For years
physicists had been looking for physics beyond the standard model and
Carlo claimed he had found it. It’s rare that somebody predicts a
great discovery. So I asked if I could go and sit at the experiment
and maybe write a book about it. He said okay, so I went off to Geneva
in September 1984 and lived at CERN for eight months in a hostel. I
tried to cover several hundred physicists and watched this great
discovery vanish. As it turned out, the phenomena that Carlo thought
were indicative of a great breakthrough were actually the product of
parts-per-billion artifacts in his equipment and statistical
fluctuations.
I wrote a book called Nobel
Dreams (Random House, New York, 1987), and it was the beginning of
my education in how hard it is to do good science. There were a lot of
extremely bright people on these experiments, which cost tens of
millions of dollars, and it was still extremely difficult for them to
come up with the right answer.
Shortly thereafter, I wrote a
piece for Discover magazine in which I talked about how hard it is to
do good science. I counted up the number of discoveries that had been
made in high energy physics between 1977 and 1987 that had made it
to the New York Times. It turned out there were 12 and nine of them
later turned out to be wrong. The three that were right had been
predicted by the standard model, so the physicists knew exactly what
they were looking for.
Epi Monitor:
These were all discoveries in physics?
Taubes: These
were discoveries in high energy physics, which is a relatively clean
experimental subject compared to epidemiology. Being predicted by the
standard model is kind of equivalent to having biological
plausibility, even strong biological plausibility.
Epi Monitor:
Were you still working for Discover magazine when you came back from
Geneva?
Taubes: Yes, I
was a contributing editor. I started doing other pieces on
controversial science. In March 1989, my publishers at Random House
asked me if I wanted to write about cold fusion. It fascinated me
because it seemed so obviously wrong. Here was this huge scientific
controversy and a lot of scientists were putting their reputations on
the line for something that was pretty obviously just dead wrong. I
thought that was going to be an easy nine month book and I ended up
spending three years on it and getting obsessed with it.
The book came out and was called
Bad Science: The Short Life and Weird Times of Cold Fusion (Random
House, New York, 1993). That actually got me into epidemiology because
several of the physicists I got to know well while doing the book
said, “well, if you think cold fusion is bad, you should look at
electromagnetic fields from power lines and cancer.”
So, I started looking into that
and found out that there were some amazing parallels between EMF
issues and cold fusion and how bad science is propagated.
I also started questioning
epidemiology because the EMF finding was based almost entirely on
epidemiology. There’s this key paper in the field, which had gotten a
lot of publicity; it had pretty much pushed people over the edge into
believing that electromagnetic fields could cause leukemia. So, I was
curious about this, and I sent a copy to Epidemiology editor,
Ken Rothman, and he read it over. Ken said it was decent. He
said it was good epidemiology. And the funny thing was, here was a
paper which anyone who hadn’t gone in with any preconceived bias would
have said was a null result. The investigators had over 600 possible
associations of which they had found roughly 20 that were significant
at the 95 percent confidence level. They would have expected 30 by
chance alone. And they called this evidence of a positive association.
I suddenly started thinking if this is good epidemiology, what’s the
rest of the field like? In good experimental science, for instance,
you’re not supposed to throw out two-thirds of the data right off the bat
because it’s negative. They threw out two-thirds of the data by trying
three methods of classifying exposure and concentrating on only the
one measure—calculated magnetic fields from power lines—that gave the
results I just mentioned as though it were definitive evidence. To me,
this violated everything I knew about good science. And as a matter of
fact, it fit in perfectly with everything I had learned about
“pathological science.”
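[Editor’s note: To make the arithmetic in Mr. Taubes’ example concrete, the short Python sketch below uses the figures he quotes in the interview (roughly 600 candidate associations, each tested at the 95 percent level, about 20 reported as significant). These numbers come from his account, not from the original study, and the assumption of independent tests is a simplification.]

```python
from scipy.stats import binom

# Figures quoted in the interview: ~600 candidate associations, each tested
# at the 95 percent level (alpha = 0.05), with roughly 20 reported significant.
n_tests = 600
alpha = 0.05
observed_significant = 20

# If every association were null, about n_tests * alpha "hits" are expected
# by chance alone.
expected_by_chance = n_tests * alpha

# Probability of seeing 20 or fewer significant results purely by chance,
# treating the tests as independent (a simplification).
p_at_most_observed = binom.cdf(observed_significant, n_tests, alpha)

print(f"Expected by chance alone: {expected_by_chance:.0f}")
print(f"Reported as significant: {observed_significant}")
print(f"P(<= {observed_significant} hits if all associations are null): {p_at_most_observed:.2f}")
```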
Epi Monitor:
What is the definition of pathological science?
Taubes: This is
a term coined by Irving Langmuir, a Nobel Prize-winning
chemist. Langmuir described pathological science as “the
science of things that aren’t so,” and further stated that “these are
cases where there’s no dishonesty involved, but where people are
tricked into false results by a lack of understanding about what human
beings can do to themselves in the way of being led astray by
subjective effects, wishful thinking, or threshold interactions.” Then
he gave symptoms of pathological science.
Epi Monitor:
Let’s try to go back for a moment. You spent three years instead of
nine months on the cold fusion story and wrote a book. When was Bad
Science published?
Taubes: That
came out in 1993.
Epi Monitor:
Then what did you do?
Taubes: Then I
went back to freelancing. I wrote a piece for the Atlantic Monthly on
EMF called “Fields of Fear,” which was published in November 1994.
Doing the electromagnetic fields piece is what got me wondering about
epidemiology.
Epi Monitor: So
now are you working more for Science or are you just freelancing?
Taubes: Well,
I’m a correspondent for Science, so you could say I’m a contract
writer.
Epi Monitor:
And do you get to pick your own topics? How does that work?
Taubes: It’s a
collaboration with the editors, but I tell them what interests me,
like the epidemiology piece. I say “let me do a story about
epidemiology because there’s a real interesting story here,” and they
can agree or disagree about how interesting it is.
Epi Monitor:
I’m not an expert in reading investigative reporting, but I can see
that a lot of interviews and work went into that epidemiology piece.
How long did it take to do that article?
Taubes: Well, I
actually worked on it off and on for a year. It was a tough piece to
write, obviously.
Epi Monitor:
Are you writing any other books now?
Taubes: I’m
looking for books. I would love to write a book on experimental
science. I thought about doing a book on epidemiology. A book about
things that won’t kill you, but...writing a book saying something will
not kill you tends not to sell as well as saying that something will
and the government is covering it up. On the other hand, to survive as
a freelance writer, I have to generate some 50 to 100 story ideas a
year which gets tiring, so the alternative is to do a book which has
its own kind of misery, but there’s a lot more creative satisfaction
to it...
I enjoy getting taken seriously
by the scientific establishment. I’m not quite ready to just go out
and write what I think the lay-public might gobble up. Although over
the years, a physicist friend and I used to joke about creating a
system of astrology based on quarks and writing a book on it so we
could retire to Paris.
Epi Monitor:
Some epidemiologists have said they believe you were primarily writing
this article just to get attention. How do you respond to that?
Taubes: Well,
this is a classic criticism of journalism. When I was doing this
fusion book, I had cold fusion supporters saying the only reason I was
knocking cold fusion was because that would sell better. Everybody
says you do it for the money. The book ended up taking three years
because I got obsessed with getting it right. By the time I was done,
I was $30,000 in debt. You can't write for the money. If you write for
the money it’s a lousy job, especially for anyone smart enough to do
journalism well. Like anyone smart enough to do science well, you
could have made more money going into business. You write because
you’re more or less cursed with being a writer. There’s a lot of
intellectual freedom to it. I wrote the epidemiology piece because I
got fascinated with the question of its limits. One of the benefits of
being a journalist is you can get paid to satisfy your curiosity, but
you have to write about the end result.
I sit alone by myself in my
apartment all day long which some people think is great, but if you
imagine what it’s like to be alone for 10 years in your apartment, it
starts to look not so great. I do a lot of stuff for the money and I
like to live well. But once you start a story, the better the
story—it’s like an obsession. It’s like a hunting dog. You follow it
and follow it and you want to be able to answer every question. The
problem with cold fusion is I wanted to know what happened. There’s
this one crucial moment and I wanted to keep reporting until I knew
exactly why everyone did what they did. In fact you almost have to do
it until you know it so well that whatever decision they made seems
inevitable. Especially when your two main subjects aren’t talking to
you, which they weren’t in this case. Once you start this kind of
investigation, there are a lot of similarities between journalism and
science. You make a hypothesis and you test it and you have to try and
tear down your results to see if you’re deluding yourself. You have to
make sure you have the data to support your claims. You can’t over
interpret the data. I’m always arguing with my editors in Science that
they’re trying to over interpret data. I’ll say, this is what I have,
that’s why there’s a caveat in there. Don’t take the caveat out
because I can’t stand behind it without the caveat. You always have to
doubt your own findings. You can’t fall in love with your results.
Epi Monitor:
Speaking of not believing, you quote Sander Greenland
as saying that “sinning is believing in your results.” Is not the
reverse true? Is it sinning not to believe at some point? Particularly
in epidemiology, if you cannot believe you cannot act, and if you
cannot act then that is not public health.
Taubes: Well,
this is the problem. This is what it all comes down to. I didn’t
provide any real solutions in the epidemiology piece because I didn’t
know any. Doing what I do is the easy part. I don’t have to provide
solutions; I only have to criticize. And it’s very easy in
epidemiology. My favorite quote about science, which I first saw
sitting on the desk of a physicist at MIT is by an astronomer named
Harlow Shapley at Harvard. He said that “a hypothesis
or a theory is clear, decisive and positive, but it is believed by no
one but the person who created it. Experimental findings, on the other
hand, are messy, inexact things, which are believed by everyone except
the person who did the work.”
It’s extraordinarily easy to be
fooled when you’re doing experiments. This is what I was getting at in
the epidemiology piece. Everything these guys are finding is a subtle
effect. You’ve got to be critical. Science can’t exist without
critical thinking, without skepticism. You have to be critical of your
own results. You have to try and prove you’re wrong. I know physicists
who will spend two years trying to prove that the phenomenon they
have apparently discovered in their experiment is actually an artifact
or a fluctuation. Only when they fail to prove that that’s the case
will they publish. This is how I learned experimental physics should
be done. Now, physics isn’t epidemiology. We know that. You run your
experiment, you get some signal, and you assume that nature and God or
whoever are conspiring to make you make a fool of yourself. So you
spend however long, six months, a year, trying to find out how that
signal is phony. Is it an artifact or statistical fluctuation? So, you
spend your six months trying to prove you’re wrong and then if you
can't figure out how you screwed up, then you hold a seminar. You
still haven’t thought about writing a paper yet, you hold a seminar
and you might hold a dozen seminars in different places and say, look
I did this experiment, I got this silly signal here that I can’t get
rid of and could you guys show me how I’m wrong. And then if nobody
can do it, nobody can show you where your mistake is, then you publish
a paper. Finally— this might be two years later—the signal might be
the greatest discovery in the history of physics, but you’re only
going to say “experimental evidence shows...” and then you’re going to
stick a question mark at the end. By the time this is all done, you’re
always working from the assumption that you're wrong because the odds
are very good you're wrong, if history is any indication. And still
with all that, by the time these physicists get to the point where
they publish a paper that gets into the New York Times, 75 percent of
the results are wrong, and of those that don’t agree with the standard
model, 100 percent are wrong.
There’s a book called Reliable
Knowledge by a British physicist and historian of science named
John Ziman. He describes the front line of scientific
research as the place not to find believable results. He describes it
as the place “where controversy, conjecture, contradiction, and
confusion are rife.” Then he writes “the physics of undergraduate text
books is 90 percent true; the contents of the primary research
journals of physics are 90 percent false. The scientific system is as
much involved in distilling the former out of the latter as it is in
creating and transferring more and more bits of data and items of
information.”
Epi Monitor: A
recent book makes the point that scientists criticize the legal
profession all the time for the way they address things and how they
reach conclusions. However, science has a lot more in common with the
law than what most people think. Part of the similarity is the
construction over time of a body of knowledge.
Taubes: It is
somewhat the same...you’re building up a body of knowledge, but the
stuff that comes out at the front end is almost invariably wrong. This
whole thing with epidemiology came down to something that could be
described as the “best we could do” defense. Epidemiologists would
tell me, “we know that our results are likely to be wrong and we also
know that they’re going to get into the press anyway. It isn’t our
fault that they get into the press or it isn’t our fault that the
press over interprets them. There’s nothing we can do about it. Once
we publish, what are we supposed to do about it?”
It’s a given that epidemiologic
results get into the press. You know that. You know if you write a
paper saying that your study suggests that some risk factor might
cause some disease, it’s going to get into the press. You have to
figure out a way—the field in general has to figure out a way—to stop
it from getting into the press.
We’re going to ramble a little
bit. “Smoking study sees risk of cancer of the breast” was the
headline in the New York Times, May 5th. Here a study comes out that
looks at whether or not cigarette smoking could cause breast cancer.
And it finds not only an association, but a dose response.
Hill’s second criterion for
causality was consistency across studies, which is what you can describe as
consensus. Here you had a consensus showing that smoking didn’t cause
breast cancer...you have 20 papers showing no association, and the
21st that shows an association makes the newspaper. The reporter says,
“although it cannot now be said that the new conclusions come closer
to the truth than those reached by the 20 other research groups that
examined active smoking linked to breast cancer, the current
epidemiologist believes his analytic approach has yielded a more
accurate result.”
For starters, the reporter
should have said, it’s unlikely that these conclusions come closer to
the truth because you’ve got, if nothing else, odds of 20 to one
against it. Then you’ve got the epidemiologist who believes his
analytic approach has yielded a more accurate result. You’ve now got
the experimentalist defending his own study. You’ve got him believing
it, which is not good science. And the reporter then says, “the
investigator attributed previous failures to detect a relation between
active smoking and breast cancer to...” and then he goes on to suggest
reasons why the other researchers may have obtained the wrong results.
Everything I learned about experimental science suggests that a good
scientist would criticize his own study and assume he’s wrong because
if he goes into this assuming he’s right he’s going to delude himself.
Epi Monitor: He
is going to fall victim to pathological science?
Taubes: Yes.
Although it’s possible that he did criticize his own results. He might
have spent two hours discussing why he might be wrong and five minutes
on why the other studies might be wrong, and the reporter only chose
the latter. So it’s also possible that it was reported incorrectly.
Either way, we now have a controversy. You’ve got a consensus saying
that there’s no association, a 20 to one situation which the press
has now turned into a 50/50 proposition, and
someone’s going to have to spend millions of dollars bringing it back
to whatever the right answer is. And at the end, like I said, no
matter how many negative results you get, it only takes one positive
result to create a controversy. It’s a fascinating phenomenon at work
here.
Epi Monitor: It
is, and I have not really heard epidemiologists talk much about this.
You talked about an unholy alliance in your Science article between
the press, the universities and the investigators. There is a
self-serving reason why these groups are more interested in the
positive finding. Maybe all parties need to recognize this tendency
and to set up safeguards to protect against it. What do you think?
Taubes: This is
what’s interesting. I talked a lot to Harvard epidemiologist
Jamie Robins about this and he was the only one who really
got it. It’s conceivable that every force in epidemiology pushes
toward the positive result. This is true of any science. That’s the
danger. You get a negative result, you are not going to get more
funding to pursue it, you have to think up another line of research.
Everything pushes you to wanting to find something. This is what
Langmuir talked about with “what human beings can do to themselves in
the way of being led astray.” So now let’s say you want to measure
that effect. You take every epidemiological study ever done and plot
the risk ratios and say okay, now we have all these studies and at
what point do the findings start getting real? At what point do the
risk ratios start getting real? Is there any way to calibrate
epidemiology? So you could say well, 95 percent of the time a risk
ratio of two turns out to correlate with a null result. The problem is
unless you have biological confirmation there’s no way to calibrate.
You need the biological data to say this is a real result. But it’s
conceivable that the true null result is up around two or three or
four or six because you’ve got such a huge bias pushing people to go
positive and reject the negative findings. It would be a fascinating
study to try to do and I had talked to Jamie about doing it. One of
the major problems with epidemiology is there’s no calibration. You
don’t know. You can’t say, here’s my zero point, because nobody knows
where the zero is. You don’t know how many negative results are
getting thrown out. You don’t know how many positive results are being
skewed.
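[Editor’s note: The “calibration” exercise described above could be sketched roughly as follows. The risk ratios and confirmation flags in this Python fragment are invented solely to illustrate the shape of the exercise; they are not actual study results.]

```python
import numpy as np

# Hypothetical pairs of (published risk ratio, whether the finding later held up).
# These values are invented for illustration only.
published = [
    (1.2, False), (1.3, False), (1.5, False), (1.7, True),
    (2.0, False), (2.2, True),  (2.6, True),  (3.1, True),
    (4.0, True),  (1.4, False), (5.2, True),  (1.8, False),
]

# For a few candidate thresholds, ask what fraction of findings at or above
# that risk ratio were later confirmed -- the "at what point do the findings
# start getting real" question.
for threshold in (1.5, 2.0, 3.0):
    above = [held_up for rr, held_up in published if rr >= threshold]
    frac = np.mean(above) if above else float("nan")
    print(f"RR >= {threshold}: {len(above)} findings, {frac:.0%} later confirmed")
```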
Everybody talks about artifacts
and biases and how they understand those and deal with them but it’s
all very theoretical, and the field doesn’t have the checks and
balances that other sciences have. I hate to keep bringing up
physics, because I know when I interview these epidemiologists it would
make their skin crawl to have to live up to the standards of
physicists, but if I know that 75 percent of the results in physics
are wrong and they have high standards—they have standards that
epidemiologists bridle at being asked to meet—what percentage of the
results are wrong in epidemiology? Isn’t that something you’d like to
know?
Epi Monitor:
Yes, I guess so. But remember that the subject matter of epidemiology
is not some artificially created reality produced by a multi-million
dollar accelerator. Epidemiology may have its limitations, but in the
end it is studying the real world.
Taubes: Well,
what constitutes an artifact in a physics experiment is the equivalent
of a bias or a confounder in an epidemiology experiment.
Epi Monitor: I
am not sure. Biases in epidemiology can dilute an effect, but the
effect can still be real. In physics, you may create something which
is completely artifactual.
Taubes: That’s
the problem. You never know. When I talked to all these
epidemiologists, they kept bringing up the same point. Any
mismeasurement of exposure, they told me, is only going to work to
make the effect smaller than it really is. Rothman explained this to
me over and over again and I finally managed to understand this
concept. I’m willing to accept that. Ergo, every time you see a
signal, if there was any mismeasurement of exposure, the real effect can only
possibly be larger; therefore, you have to pursue it and you have to
take it seriously.
But then you ask—which I did in
the article—give me examples in the history of epidemiology where you
started out with a small signal right on the borderline of noise, and
then proceeded to come to a better understanding of all your biases
and confounders and the signal got bigger to the point where it was
undeniable. Nobody could give me an example. Actually, I got one and I
put it in the article. But just one.
There’s an unspoken law of
physics that’s relevant here. It comes from a physicist named
Wolfgang Panofsky who is a brilliant scientist, who founded
the Stanford Linear Accelerator Center, and who has also been involved
for decades with science policy and defense technology policy.
Panofsky’s law was: if you throw money at an effect and it doesn’t
get bigger, it’s not really there. Now that means if you see a
potential signal, something just at the edge of your experimental
resolution, and you do repeated experiments to try to isolate that
phenomenon, to increase the signal to noise ratio, and that effect
stays right at the limits of your resolution, right at the noise
level, it means it’s noise. It’s not really there. It doesn't exist.
This is what happened in cold
fusion. As the experiments got better and better and the error bars
got smaller and smaller, the signal they professed to see got smaller
and smaller as well. In epidemiology, if you throw money at an effect
and it doesn’t get any bigger, you do a meta-analysis! This is true.
Take second-hand cigarette smoke. The argument is that you’ve done 30
or 50 studies. The reason you believe it is because the effect stays
the same size. But now a physicist would tell you that means it’s not
there. That means what you’re seeing is a combination of noise and
wishful thinking, and self-delusion because you should have been able
to figure out by now how to do the experiment better so that the
signal to noise ratio improves.
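[Editor’s note: The contrast Mr. Taubes draws can be illustrated with a standard fixed-effect meta-analysis. In the Python sketch below, the study estimates and standard errors are invented; the point is only that pooling shrinks the confidence interval around a small risk ratio without making the risk ratio itself any larger.]

```python
import numpy as np

# Invented log risk ratios and standard errors for six hypothetical studies,
# each hovering around RR = 1.2.
log_rr = np.log([1.20, 1.15, 1.25, 1.20, 1.10, 1.30])
se = np.array([0.15, 0.12, 0.18, 0.10, 0.14, 0.16])

# Fixed-effect (inverse-variance) pooling.
weights = 1.0 / se**2
pooled_log_rr = np.sum(weights * log_rr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

lo = np.exp(pooled_log_rr - 1.96 * pooled_se)
hi = np.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR: {np.exp(pooled_log_rr):.2f} (95% CI {lo:.2f}-{hi:.2f})")
# More studies narrow the interval, but the pooled estimate stays near 1.2:
# precision improves while the signal itself does not grow.
```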
What I’ve been struggling with is just this question: the fact that
you do the meta-analysis and it suggests a positive result doesn’t
mean that the association really exists. I realize that epidemiology is much more
difficult experimentally than physics, that the world of human beings
is much messier than the world of elementary particles. But in
physics, by the time you have to do the meta-analysis you already
admit that the effect is not there. In epidemiology, you do the
meta-analysis, and conclude that the effect is there. So what’s going
on here? I don’t get it. I haven’t got a clue what the answer is. I
find the question fascinating and I think epidemiologists should
address it.
What epidemiologists are doing
may be pathological science. It’s conceivable since it fits a lot of
the symptoms of pathological science. For example, just being able to
throw out negative results in the search for the perfect exposure
classification. But, by doing that, you invite yourself into
pathological science. You’ve now thrown out one of the key elements of
defense—epidemiologists throw out a lot of the “immune system” of
science in their pursuit of positive signals. And the rationale, of
course, is that “people are dying out there.” This always reminds me
of the film Jurassic Park when the characters keep repeating that
phrase, “people are dying out there.” We can’t get too critical of our
results because people are dying out there. We should therefore accept
everything as a potential hazard. Every statistical fluctuation, every
95 percent confidence level finding has to be taken seriously because
“people are dying out there.”
Epi Monitor:
Well, obviously that’s not practical.
Taubes: Then
the question becomes, how much money are you wasting with false
positive results? Like electromagnetic fields and cancer. This country
spends a billion dollars a year, by some estimates several billion a
year, trying to ameliorate the effects of electromagnetic fields.
That’s the cost to society because epidemiologists “went off the
rails,” with the help of a very powerful journalist. How many other
examples are there like that?
Epi Monitor: I
don’t know if this is a fair question, but did you have a goal in mind
in writing the Science article?
Taubes: Well, I
wanted to find out if I was right or in essence I wanted to be
convinced that I was wrong and nobody managed to come close to
convincing me.
Epi Monitor:
That you were wrong?
Taubes: That I
was wrong about the similarity between epidemiology and pathological
science, that I was wrong about how close epidemiology comes to
pathological science and how dangerous that might be. And in essence
the answer to your question about why did I do so much reporting, why
did I talk to so many people, is because I kept searching for someone
who could convince me that I was wrong and I kept creating hypotheses
and looking for people I could test them with.
Epi Monitor: So
you were trying to prove yourself wrong and you don’t think you did.
Taubes: No, I
didn’t. I’m still open to being proved wrong and it's still
conceivable that I just don't understand that there's something about
the way epidemiology is done that makes it so different from what I’ve
learned, that I haven’t managed to apply my understanding of
experimental science properly to epidemiology.
Epi Monitor:
Assuming epidemiologists disagree with you, what is the difference in
their opinion between physics and epidemiology that explains why
epidemiology is not pathological science?
Taubes: The
funny thing is that the epidemiologists did agree with me! They
answer, “we know this already.” I wasn’t writing this article for the
best epidemiologists.
Everything I knew I was told by
the people I interviewed. Yes, it’s true that I select my quotes to
back the points I want to make, which is a way of saying I select the
data I want. But on the other hand, I wasn’t out to write a paper
about the victories of epidemiology. There have been some victories
and they were properly credited.
Epi Monitor:
That was one of the criticisms of your article. Epidemiologists said
it is unbalanced and that you were only talking about our warts. What
about our victories?
Taubes: Well,
what I am saying is the warts are huge. The victories are few, and at
this point, a whole field may be on the verge of propagating
pathological science, which means they cannot get good enough
resolution to identify the effects they’re studying. Epidemiologists
may be seeing and reporting that there are canals on Mars because
they’re looking at Mars through Galileo’s telescope. And that’s the
nature of the field and all the statistical wizardry in the world
isn’t going to change that because the experimental subjects are messy
and the artifacts and biases found are so huge and the signals are
small. Epidemiologists have to be willing to confront that. That’s the
problem.
Anyway, I was writing for the
press. That’s my answer to your question about my goal. I was writing
to my colleagues in the press saying to them, “would you please stop
treating these epidemiological studies as definitive? You're writing
about speculation as though it's definitive.” When I grew up I had a
Jewish mother who would always tell me that “they” say this and “they”
say that. You know that argument? “They” say that drinking coffee is
bad for you. I used to ask, who's “they” for Christ’s sakes? Are
“they” the best scientists in the world? Are “they” doing good
science? This was my answer to the press and I wanted to say, “stop
quoting ‘they.’ Start looking at these studies critically, and report
them accurately and in context.”
Epi Monitor: Do
you think Science was a good vehicle for achieving your purpose?
Taubes: I would
like to think that all my science reporting colleagues and all the
health reporters in the world read Science. But in fact, they probably
don’t. A good friend of mine is a science editor at one of the most
influential newspapers in the world and he tells me he doesn’t have
time to read Science, and he used to work for Science before he went
to this newspaper. So, is it a good vehicle? No, but, what is? There’s
a huge gap between reality and understanding and the best
epidemiologists know about the gap. That’s why I can quote them.
Epi Monitor: I
had the sense in reading your article that many of the epidemiologists
you quoted had been ambushed. Let me explain what I mean. The
epidemiologists probably spoke to you in such a way that you were both
in agreement. But bringing out these opinions in public, and quoting
them this way where you had them being self-critical of their own
discipline meant that they were in an awkward position.
Taubes: Oh yes,
sure. But they’re supposed to be critical of their own discipline. In
a way, I was making them look good. I was making them look like good
scientists.
Epi Monitor:
Epidemiologists are not expected to be critical of their discipline
but of the results of epidemiologic work.
Taubes: You’re
supposed to be critical of everything! I’ve never seen a field where
people are saying, “My God we’ve got to be more outwardly supportive
otherwise we could lose funding.”
Let’s talk about the
ambush...Actually, before the article was published one of the
epidemiologists who read it in draft, told me it was like when you
tell your mistress what kind of problems you have with your wife, but
you’re not telling your mistress for publication. And when push comes
to shove, you stay with your wife. There’s a lot of validity to that.
Epi Monitor:
About those quotes you got. I had the feeling that you had this grand
vision in mind, this grand hypothesis, this grand cathedral. The
epidemiologists were talking to you and providing quotations and had
no idea of what you were constructing. They were providing bricks
along the way not knowing about the cathedral. And then the article
appears in print and all these bricks fit into place beautifully and
it looks like all these epidemiologists were collaborators with you in
building the cathedral. But in reality, they were never really such
willing collaborators and would not have been if they would have seen
the cathedral.
Taubes: That’s
true, that’s very true. It’s a valid criticism and I admit it. On one
hand, it’s obvious. I’m writing the article. It’s always going to be
my grand cathedral, in which I synthesize what I have learned and put
it down on paper in a way that makes the strongest possible case. On
the other, and this is more subtle, as a journalist for Science, as a
lousy reporter for Science, I’m not allowed to editorialize.
Therefore, I have to get other people to say what I want them to say.
Or I have to find other people who will say what I want them to say.
So you hit it right on the head. That’s a very valid criticism. It was
my grand cathedral, it was my vision. And I have said this to people,
it’s ironic because I’m only a lousy reporter, I’m not a Harvard
professor, I don’t have that stamp of authority. I have to get people
to agree with me. That’s one side to it. For instance, I couldn’t
mention pathological science in the article because none of the
epidemiologists knew about it even though I started sending it to
people, hoping that one of them would say, “gee this is interesting,”
and make the comparison for me.
The other side is when it’s all
done, I send it to my sources to read and critique... three and maybe
four, had read my article in draft prior to publication...People had a
chance to look at the cathedral and say this is wrong.
Epi Monitor:
That’s hard to believe. That some people would say the kinds of things
that were quoted and not seek to have them removed prior to
publication if they were given the chance.
Taubes: Well,
they also had the intellectual integrity to stand by what they said,
even if in the long run they might come to regret it. As for those
people who say that I don’t say enough good things about epidemiology,
if they’ll read the article closely, they’ll find I talk about what
drives the epidemiological quest, and that epidemiology is the best
way to identify these risk factors.
There are things I added to the article after people read the draft,
the population studies for instance, because they said, “you’re not
pointing out why we believe what we believe.”
Epi Monitor:
What do you mean by the “population studies?”
Taubes: Why
epidemiologists believe that most of these diseases are caused by
factors in the environment that can be identified or hopefully can be
identified, about what drives the epidemiologic quest for risk
factors.
Epi Monitor:
That’s really the rationale for the field, not the victories.
Taubes: Yes. I
don’t deny that there’s a rationale for the field.
Epi Monitor:
The question I want to ask you is the essential question in my mind:
is this a problem of implementation and not a problem inherent to
epidemiology per se? One could conclude from your article that
epidemiology is so ill-equipped to meet the challenge that it does
more harm than good and we would all be better off if epidemiologists
re-programmed and went into something else.
Taubes: Not all
of them, but maybe some or most. It’s interesting; Ernst
Wynder, who is president of the American Health Foundation,
had an article in the American Journal of Epidemiology (Vol. 143,
No. 8, 1996, p. 747), responding to my article, in which he says I
failed to recognize that “in epidemiology, as in other branches of
science, there are good as well as inadequate studies and
inappropriate inferences.” Of course, I recognize it. I say it over
and over. He says I’m damning the whole field. I’m saying that there’s
a lot of bad epidemiology and it makes the press and it’s tolerated.
Epi Monitor: To
put it another way, the subheadline to your article reads: “the search
for subtle links is an unending source of fear but often yields little
certainty.” So the question is, should we stop the search?
Taubes: Well,
no. The point is it has to be done right and it has to be done like a
science. And it’s got to have the rules of an experiment.
Alvan Feinstein
said this in Science back in 1988. Feinstein said effectively the same
thing I did although he put it more scientifically. A firestorm of
criticism arose and people attacked Feinstein. Feinstein makes
mistakes, Feinstein’s sloppy, Feinstein’s this and that. The fact is
as far as I can tell, his examples might have been wrong, but his
criticisms were not that far off base. They fit what I know about
experimental science. He basically said that in a lot of epidemiology
you throw out the basic premises of experimental science because if
you include them you won’t find an association, or if you include
them, you won’t get funding.
Epidemiologists seem to be
remarkably tolerant of this sloppiness. But the fact is, I can name
you every physicist who made a major mistake on a paper, who published
a discovery that was wrong. And these guys are never allowed to forget
it. The guy who came up with the split A-2 in the sixties has been out
of physics for years because of his mistake. Elliott Bloom,
who came up with the Zeta particle in 1984... Elliott is a great
physicist, yet he still can’t get a drink with his friends without
them reminding him about the damn Zeta article. They’re vicious and
they’re vicious for a reason. There’s an infinite number of wrong
results. There’s only so many right ones. You need that criticism.
You need to be afraid to publish a wrong result.
Epidemiologists, on the other
hand, produce wrong results every day and they give the same defense.
If it takes five years to do a study, if the data stink, what are we
going to do? You can’t get funding just to do another five years. You
can’t get the multi-millions of dollars. And all the time people are
dying out there. “We’ve got to publish these dubious results, because
it’s the best we can do.”
You know, in cold fusion—the
worst scientists would tell me, this is a classic line—“Sure our data
stink, but give us the money and we’ll do the experiment right.” This
is an argument that bad scientists use all the time. “Okay, I know
that I’m claiming a discovery here and I know that my experiment
stinks, but now that I’ve claimed the discovery, give me the funding
so I can do it right.” This is bad science.
Epi Monitor:
The claim when asking for more money is to say that you screwed up one
time, so bet on me again?
Taubes: No. The
good scientists do the experiment right before they publish, even if
it takes years. You work on it for ten years and you make sure that
what you publish is believable. That doesn’t seem to be a criterion in
epidemiology. Instead, epidemiologists say this is the best we could
do. So we’re going to publish the best we can do, the data are poor,
the interpretations are a bit of a stretch, but let’s be serious. If I
work on it of course I can make the association go away if I want to.
But what if it’s real? If I make it go away I’m not going to get the
funding to do the experiment right to get the sufficient data so that
I can come up with the believable results. I’m going to publish a
study in which the data are borderline, the interpretation is even
more borderline, but by doing it I’m going to be able to say, “look,
there might be an effect here, please give me the money so I can do it
correctly.”
Epi Monitor:
Not that you can do it better necessarily but that you can pursue it
further. Is that what you mean?
Taubes: Yes, so
I can pursue it further. Well, ideally so that I can get a large
enough sample size to come to a meaningful conclusion.
Epi Monitor:
The first assumption in your story is that there is an “epidemic of
anxiety” caused by conflicting epidemiologic results. What evidence do
you have that these conflicting results are really causing us a
problem? Maybe we have to tolerate a few false positives to get to
where we want to go.
Taubes: That
may be true, but tell that to Dow-Corning. Tell that to the guy who
owns the house on the power lines who can’t sell the house. Tell that
to the makers of saccharin. These are interesting issues, but the
point is, once studied, something becomes guilty until proven
innocent.
Epi Monitor:
Does this constitute “epidemiologic malpractice?”
Taubes: Well,
everyone makes mistakes. Brian MacMahon came up to me
after a lecture I gave at Harvard on this subject and said, “you
probably think we’re pathological scientists for the mistake on
pancreatic cancer and coffee consumption.” Everybody in science makes
mistakes. That doesn’t bother me. The point is you’ve got to
understand how easy it is to make mistakes and how to keep those
probable mistakes away from my colleagues in the press.
Epi Monitor:
That’s an interesting point. Given that we are almost guaranteed to
get press, that there is a sort of built-in interest in what we do,
should that add an extra level of precaution?
Taubes: You
should be even that much more cautious. Let’s assume Ziman was correct
about epidemiology, too, and 90 percent of the results in the research
journals of epidemiology are wrong. The 90 percent in the physics
journals that’s wrong is not going to make the papers. Nobody cares
about it. But the 90 percent in the epidemiology journals do make the
press. And once they’re out there they don’t go away. I can’t put
Sweet n’ Low in my coffee without one of my friends saying “you're
killing yourself.” And it’s one of the few times that epidemiology
ever made an effect go away!
Criticism is never bad, it’s
necessary. The fact is, what I did in my article I shouldn’t have to
do. It should be done in the profession. The epidemiologists know
what’s wrong with the field, they know there’s a lot of junk out
there, they know there’s a lot of “do-gooder epidemiologists” out
there who think that the goal of being an epidemiologist is to indict
a chemical and sink some nasty chemical company. And I didn’t come to
that conclusion without help. It was epidemiologists who first said
this to me.
Conclusion
What it all comes down to is
this: what is the possibility that a lot of epidemiology is
pathological science? The problem is that you’ve got to understand
what pathological science is to recognize it. And the reason it’s
called pathological is because it’s tricky, it’s hard to diagnose.
There’s nothing easy about recognizing it. And if you are doing
pathological science, a) why, and b) how do you stop it. I’m not
saying all of epidemiology is pathological, but I bet you a lot of it
is. Once you start throwing out those defense mechanisms and
rationalizing away why you can’t be so critical, why you can throw out
negative results, once you start making excuses, you open the door to
publishing and to pushing a lot of junk and it gets expensive. I don’t
know how expensive, I don’t know what the risk is to society. How much
bad science do you have to allow so good science can get done? I can’t
answer those questions.
Published June 1996