At its best, science constantly seeks to revise itself with new insights and new discoveries. Hypotheses fitting the observed data are tested and revised and tested again, always in search of better understanding, a more accurate model. We can learn as much from a completely contradictory experimental result as from one supporting our guesses, or more: supporting evidence may let us place a small building block more firmly, but contradictory results can show us that our entire foundation is shaky to begin with.
At its worst, the pursuit of science becomes dogmatic: driven less by the desire to learn than by the desire to support or perpetuate an existing dogma, be it determined advocacy of a particular hypothesis or agenda, or the end goal of unending profitability.
This is where the issue of confirmation bias comes in. As soon as a reason exists to prefer one result over another, it becomes much easier (however unconsciously) to frame the scientific question so as to achieve a desired answer, and even to accept data which support that answer over any evidence or body of literature which might suggest otherwise.
While the ideal of science is objective observation and analysis, human beings are not inherently objective. Rather, we view the world through a sophisticated filter of preconceived notions which we have been building up since we first came into the world. Certainly this is a survival technique – after all, we don’t want to have to determine again and again that a hot stove will burn us! – but it clashes head-on with the cultivated naïveté the scientific method requires in analysing data. For scientific knowledge to grow, at least within the specific area of research, the scientist cannot afford the luxury of preconceived opinion.
However, scientists are human, and must be understood in human terms. Evidence may be objective, but under the white coats of those determining its value lie some very human reactions, and so human rules apply to the interpretation of evidence. Like every other human being, scientists make judgements based on academic opinion, party platform, grant contribution, advertising copy, even opponents’ motivations. Many of these factors may exert their influence subconsciously, but some decisions may be taken consciously. After all, financing structures reward results rather than patience and the willingness to consider that an initial approach or hypothesis may not stand up to objective scrutiny. In science (as in so much of the rest of the world), there are no points for second place.
At the extreme, the protected agenda becomes a core part of the scientist’s own self-identity. In this case, any clash between one’s Weltanschauung and those of other persons becomes a zero-sum game: every expression of contradiction, even implied contradiction, must be immediately and relentlessly quashed as a threat to selfdom. Necessarily, such threats include any expression or action by another suggesting that any world-construction other than one’s own has even a scrap of validity, since granting validity to an alien Weltanschauung takes away from one’s own selfdom. At this extreme, the hypothesis is a hypothesis in name only: anything which challenges it, or does not derive from the accepted channels of evidence which already support it, cannot be allowed validity – is not “real”, cannot be allowed to be real. Thus every effort must be made to discredit alternative hypotheses, with as much force as one’s power over one’s social environment allows.
As confirmation bias enters the scientific literature and permeates the scientific community, it gradually morphs into consensual bias, in which fewer and fewer voices manage to speak up against – or even to test – what has become the socially dominant “truth”.
Behind each media theory-of-the-hour exist several theories capable of explaining the observed data, many forced into obscurity solely because they challenge the scientific status quo. Traditionally such obscurity has been easy to accomplish. Papers are accepted to journals and conferences based as much on the reputation of their authors (and of the theories those authors support) as on scientific merit, and papers contrary to current scientific canon are much less likely to be accepted. Not only is it more difficult to disprove absolutely than to support marginally, but more rigorous testing and stricter p-value thresholds are commonly (if unconsciously) demanded of negative papers than of positive ones. Nor does the negative result usually achieve patentable breakthroughs – indeed, it may discourage continued use of the old – while grant monies, public and private, remain contingent on established reputation, as measured by published papers.
Yet editors are human too, as much trapped within their reputations as those seeking to publish revolutionary results, for it is upon the editor that any misstep will rebound. Peer review circumvents the editor’s absolute veto by dividing a paper’s fate among one to three peer reviewers, selected from among those with established reputations. Certainly those who have accomplished the most and earned credible reputations would normally be the most capable of making the best assessments – were true objectivity possible. Instead, peer review creates only the illusion of objectivity, and adds to it the illusion of objective consensus; it does not grant a paper a more objective review. Many scientific consensus guidelines are not even peer-reviewed, and so are especially vulnerable to the advertisers keeping the journal solvent.
The Internet has changed these rules. While what is acceptable for publishing need no longer be decided by a scientific clique, the reader must accept the presence of fallacy as well as potentiality. Much of what is on the Internet is total myth – but some is not. We risk drowning in information – but complete suppression of ideas is no longer viable. Except, perhaps, by public opinion, as shaped by scientific consensus and media sanction.
In this “free-for-all” environment it becomes one’s own responsibility to be a discriminating reader, teaching oneself to evaluate what one reads. Not all debunking is the result of persecution, but one should remember precisely what is meant by “theory.” A hypothesis expresses a testable prediction of potential results. A theory is a more widely cast hypothesis, proposed as a “best fit” explanation for the observed evidence. It may be developed empirically to explain obtained results, or built on the groundwork of previous theories. Future experiments may support the theory, or they may disprove it. The greater the number of experiments showing support, the more likely the theory becomes; yet it must remain theory, a human explanation for human observation of objective evidence. No experiment can prove absolutely. It can only provide support for one explanation or another – or, potentially, against existing explanations.
Yet the stakes behind the validity of a scientific theory can be more serious than in just about any other discipline: too serious, perhaps, to allow theory to dogmatise into scientific canon on the sole basis of who supports which party, or who stands to gain. Not to take a potential worst-case scenario into account is nothing less than criminal: but to prepare for the worst is not the same as accepting that worst theory as fact. Political rhetoric may not alter scientific evidence, but to equate evidence interpreted as supporting a theory with proof of that theory is worse than irresponsible.
Is it not by refusing to acknowledge the existing subjective lenses of confirmation and consensual bias that we most stunt our own potential?