Potomac Highlands Watershed School

Reading Room

Understanding Science 

by Peter Maille

From Cacapon, September 2002

Pick up the morning newspaper these days and you are likely to come across phrases like these:

  • "the research does not prove a cause-and-effect relationship...just an association." (Washington Post, Anti-Alzheimer's Supplements? Not Yet, February 19, 2002) and
  • "DNA in blood samples…matches 'to a reasonable degree of scientific certainty' DNA in Christopher's blood." (Washington Post, DNA Cited in Boy's Slaying, February 16, 2002)

Such language might sound like it was lifted directly from a scientific journal, but technical jargon is increasingly routine in the news we read every day. And while the topics touch us all, we do not always understand ideas like "degree of scientific certainty," or the difference between a "cause-and-effect relationship" and an "association." Luckily, one need not be a scientist to follow along. Below are a few tips you can use to make sense of science-based discussions.

Science tries to answer questions

Scientific study starts with a research question or "hypothesis," and then develops a study to answer the question. For example, a research team could hypothesize that runoff caused by a midsummer rainstorm would significantly increase phosphorus concentration in a stream, and then design a study that measures phosphorus in a specific stream before, during, and after rainstorms. The team may well get good information about phosphorus in the stream. However, the study would also require a number of assumptions. Researchers may have to assume that wind conditions, air temperature, preceding weather, or leaf-consuming insect infestations have no effect on the outcome of the study. They need to make assumptions like these because even scientists can't control or measure every variable. Ben Franklin may have been thinking of this when he said, "We don't know one millionth of one percent about anything."

Because assumptions in science are inescapable, our hypothetical study can only say that phosphorus concentrations seem to behave a certain way. This points to a central tenet of the scientific method: science cannot prove anything with 100% certainty. Rather, scientists use statistical methods to say that, with a specific degree of confidence -- for example, with 95% certainty -- our study results "are not due to chance." The research team is then left to describe what the results "are due to" through a credible and logical discussion of their methods and reasoning.
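To make the idea of "not due to chance" concrete, here is a minimal sketch (not from the article, and using made-up phosphorus numbers) of one common statistical approach, a permutation test, written in Python. It asks how often random shuffling alone would produce an increase as large as the one actually observed; if that happens less than 5% of the time, a researcher might report the result with roughly 95% confidence that it is not due to chance.

import random
import statistics

# Hypothetical stream samples, in mg/L (illustrative numbers only).
baseline = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]   # dry-weather days
storm = [0.06, 0.05, 0.08, 0.07, 0.06, 0.09]      # storm-runoff days

observed_rise = statistics.mean(storm) - statistics.mean(baseline)

# Shuffle the "storm" and "baseline" labels many times and count how often
# chance alone produces a rise at least as large as the one observed.
pooled = baseline + storm
trials = 10000
at_least_as_large = 0
for _ in range(trials):
    random.shuffle(pooled)
    fake_storm = pooled[:len(storm)]
    fake_baseline = pooled[len(storm):]
    fake_rise = statistics.mean(fake_storm) - statistics.mean(fake_baseline)
    if fake_rise >= observed_rise:
        at_least_as_large += 1

p_value = at_least_as_large / trials
print("observed rise: %.3f mg/L, p-value: %.4f" % (observed_rise, p_value))
# A p-value below 0.05 corresponds to the "95% certainty" in the text: it says
# the rise is unlikely to be chance alone, not what caused it.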

So, when scrutinizing scientific statements, it is always wise to ask: "How certain are you of the results? Is there a large margin of possible error? Have other scientists replicated the results? Did the study have adequate controls that ruled out other factors that might be responsible for your result?"

But we never know for sure

Fortunately, science is explicit. A study's methodology is explained and key assumptions are spelled out. In the face of uncertainty, this permits objective review of a study's conclusions. A case in point: arsenic standards for drinking water. The Bush administration delayed implementation of Clinton administration guidelines on allowable concentrations of arsenic in drinking water. However, following a comprehensive review of the scientific literature by the USEPA and independently contracted experts in public health, it became clear that the science behind the Clinton-era recommendations was sound and the conclusions justified. In fact, the review concluded that arsenic's health risks were even greater than earlier reviews had suggested. Consequently, the Bush administration has now accepted the proposed guidelines for implementation.

Conversely, in 1999 the WV Commissioner of Agriculture concluded that the Department's study of non-point source pollution showed that this type of pollution--in which pollutants wash off the landscape into rivers, lakes, and streams--was under control. However, measuring non-point source pollution requires sampling during periods of heavy, or at least normal, rainfall. Because 1999 was a severe drought year, a Cacapon Institute review found that the Commissioner's conclusion was premature. We responded with a press release explaining our position, and the debate took a step forward. In this case, officials interpreting the study results had not asked whether another factor--the drought--might be responsible for the apparent drop in non-point source pollution.

Uncertainty becomes especially important when science indicates that difficult or costly change may be called for. In these cases, society may seize on the uncertainty as a reason to maintain the status quo. Here we often look for a consensus to form in the scientific community. This is not to say that scientists are free of bias or that their consensus is never wrong. What it does say is that the scientific community is more objective and informed on a given issue than society-at-large. For example, few issues are as complicated or as potentially important as global climate change. Society recently passed this debate back to the scientific community. In this case, the National Academy of Sciences was able to say that the information supporting claims of human-generated climate change met its threshold of certainty. There was less of a consensus on what the environmental and health effects of that change may be. And there was still less consensus on what we should do--or how much we should spend--to prevent potential problems. We are left to weigh the possible costs of action and inaction, but with greater scientific certainty that human-induced climate change is, indeed, taking place. Again, society takes a small step forward.

Further complicating matters, the status quo often benefits those most able to influence the debate, and costs those least able--the under-served and less informed in society. For decades, powerful tobacco companies were able to dispute studies linking cigarettes with lung cancer by exploiting scientific uncertainty. Over time, however, the scientific community came to a consensus and responded that the remaining uncertainty was absurdly small in relation to the evidence linking the two. Society was then able to act with more confidence.

And the discussion continues

Oftentimes the public discussion of scientific results becomes adversarial. When this happens, a characteristic set of objections may be raised and an objective review of the facts overwhelmed. Recognizing these objections, and knowing how to respond, can make all the difference when trying to sift through the information. Some common objections are:

"It's crackpot science." These claims seem most often to be made by people who do not understand an issue, or are predisposed towards an alternative conclusion because of a vested interest. We think that when there is a shift from objective discussion to name-calling then decision-making suffers. People concerned with finding "the best" outcomes rarely dismiss conclusions out-of-hand. Rather, because of the explicit nature of science, they review the information by asking questions like: "What are the strengths and weaknesses of the study methodology"; "Are the conclusions logically derived from the study results?"; "Can I reach a credible alternative conclusion that is equally well supported by the study results?"; and "How have other scientists addressed these issues and what were their conclusions?"

"You don't know that for sure." We think that arguments focused only on a study's uncertainty--what the study does not say--are often an attempt to shift attention away from what a study does say. Once we accept that there is no such thing as a sure thing, we can cease the impossible task of trying to eliminate uncertainty. Instead, we can manage uncertainty much like we manage risk in an investment portfolio-something to be optimized rather than eliminated. The questions then become: "Do we need less uncertainty?" "Is it possible to decrease the uncertainty?" and "Do we know enough to act?"

"This will cost too much." When it comes to making decisions, science will not tell us what to do any more than the nutritional information on a cereal box will define what we eat. In a sense, science produces a set of "facts and only the facts." Society is left to debate how to act much like a jury is left to deliberate after hearing a court case. And like a jury, when making its decision society works within a framework, in this case, of societal preferences, political imperatives, and economic constraints. When society's "deliberations" are disproportionately influenced by a group that insists on focussing on only one element of the framework, like cost, we see this as an attempt to impose a personal set of imperatives. We think that balance, although not necessarily parity, best serves society. In this case, an important follow-up question is "What are the long-term costs of inaction?"

As far as we can tell, industry and anti-industry folks, pro-business and anti-business people, liberals and conservatives all employ these strategies. But despite the chaos, or maybe because of it, science has brought us a long way. As the role of science in society increases, it will be up to informed and engaged people to sift through the noise and make the most of what science can offer.

To our readers: Please feel free to contact us with other examples of obfuscating arguments used to obscure scientific results, or case studies from the news that demonstrate these arguments in action.