Since the 17th century, faith in the scientific method as the path to rational truth has steadily displaced superstition and religious dogma. At the turn of the 20th century logical positivism became highly influential, and scientific progress was increasingly understood in terms of an inductive, verificationist model.
On this view, rational progress towards objective truth began with scientists making independent and objective observations and measurements of phenomena. They then formed hypotheses, based on those observations, to explain the phenomena. These hypotheses were used to predict new observations, and further experiments were designed to test them. Hypotheses that stood up to repeated experiments and the test of time became scientific theories and eventually scientific laws, e.g. Newton’s laws of motion. In this way science continued to verify new hypotheses and theories, bringing us ever closer to the truth. This understanding of scientific progress is how most people view science right up to today: science is perceived as a rational method for obtaining objective truth about phenomena by verifying strong theories through observation and experimentation.
However, there has been much philosophical debate over the years on the legitimacy of induction as a scientific method. A key figure in this discussion is the Scottish philosopher David Hume, who in his “An Enquiry Concerning Human Understanding” raised what is famously known as the problem of induction. Hume claimed that induction could not actually be rationally justified and was simply a matter of human habit: we grow used to finding unobserved cases to be like observed ones.
Induction presupposes what is known as the uniformity of nature: the assumption that objects or phenomena we have not yet observed will behave, in the relevant respects, like similar objects or phenomena we have observed. Hume argued, firstly, that there is no necessary connection between cause and effect. For example, there is no logical contradiction in one ball hitting another with no movement of the second; we simply induce that the second will move because it has before. Likewise there is no logical contradiction in the sun not rising tomorrow; again we reason inductively that it will rise because it always has in the past. A non-uniform universe is perfectly conceivable and not illogical in itself, and surely such a subjective habit cannot form the basis of scientific method. It also became clear to Hume and many others that any attempted solution to the problem of induction relied on the inductive principle itself. Any such justification was therefore circular and rather unconvincing.
One proposed solution appeals to the pragmatic success of science’s inductive methods, i.e. that science works! If induction were wrong, how could we have achieved so much from the knowledge gained by it, such as the curing of diseases and the creation of great technology? However, we only know that science works by induction, i.e. that it has worked in the past. It is not logically impossible that it will fail in the future.
J. S. Mill, in “A System of Logic,” claims that a statement like “unobserved cases are like observed cases” is a “first principle” or “axiom”: the uniformity of nature has held up till now according to our experience, and it will continue to do so. But this again is known only through induction and therefore fails as a logical justification. How do we know that the uniformity of nature will not break down in the future? It seemed that even a more sophisticated inductivism could only make theories probable, never certain. This is not how scientists, nor many other people, wanted to understand rational scientific progress. How then could this problem be resolved?
In 1935, the Austrian philosopher Karl Popper introduced an alternative to the inductive verificationist position in his groundbreaking “Logik der Forschung” (“The Logic of Scientific Discovery”), and in the later “Conjectures and Refutations.” Popper accepted the problem of induction but thought it irrelevant to the logic of science. Rather than trying to verify theories using induction, Popper claimed it was possible to deductively falsify theories. He believed that simply writing down observations or taking measurements, as inductivism prescribes, made no sense unless a hypothesis or theory was already there to be tested.
By the method of falsification, bold conjectures are put forward, such as “the world is round,” from which testable hypotheses can be deduced. Many experiments are then carried out, and if the theory survives them it becomes increasingly well corroborated, until such a time as it is falsified by later experiments. If any hypothesis is falsified by empirical evidence it is scrapped and a new bold conjecture is formed to be put under test. By this method theories are deductively falsified, taking us ever closer to the truth. This deductive method can be understood simply as: (1) T → p; (2) ¬p; (3) therefore ¬T, where ‘T’ is the theory or hypothesis and ‘p’ is the predicted observation.
For example, if we have a theory that ‘all swans are white’ because all past observed swans have been white, and we then find a black swan, then we can deduce that the theory is false and a new theory can be put forward: ‘all swans are black or white.’
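Formally, this is simply modus tollens applied to the swan case. A minimal rendering, using standard propositional notation rather than any symbolism of Popper’s own, might read:

\[
\begin{aligned}
(1)\;& T \rightarrow p && \text{if all swans are white, then this swan will be white}\\
(2)\;& \neg p && \text{this swan is observed to be black, not white}\\
(3)\;& \therefore\ \neg T && \text{so the theory “all swans are white” is false}
\end{aligned}
\]

The inference from (1) and (2) to (3) is purely deductive, which is why Popper thought falsification could bypass induction altogether.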
Part of Popper’s motivation for falsification was that he wanted to draw a distinction between what he called ‘pseudo-science’ and real science. He did not think pseudo-sciences such as Freudian psychoanalysis and Marxist economics were real science, because they were immune from falsification. Supporters of such a theory would invent some unfalsifiable justification for any anomaly and then create an ad hoc modification to their theory. For Popper this was not science as a critical activity carried out by an independent observer. Scientific theories were only genuinely scientific if they could be falsified, and should be abandoned if they were falsified by empirical evidence at any time. Better theories were more falsifiable theories, in that they were open to more scientific tests and experiments.
Popper’s falsificationism defended deductivism: it was possible to objectively falsify theories, and for him the problem of induction was thereby solved. According to falsificationism we should not believe our scientific theories are objectively true, but simply put them forward as bold conjectures until they are falsified. However, falsification has its own problems and may not actually be the most accurate model of scientific progress, despite its appealing deductive method.
To start with, it is not entirely clear that Popper completely solved the problem of induction. Falsification gives us no firm logical basis for preferring one theory over another; one can argue that an unfalsified theory is just as capable of being wrong as any other. Claiming that a well-confirmed theory is better sits uneasily with the problem of induction, because that theory may still be rejected in later experiments. Popper defended his position by appealing to something known as verisimilitude: a well-confirmed theory was ‘nearer’ to the truth than a less confirmed one, particularly if it made more falsifiable conjectures. Perhaps nearness to the truth is all that science is capable of, then.
Another problem arises from the practical application of falsification. When do we take anomalies to be significant enough evidence for the falsification of a theory? Anomalies frequently come up in experiments and are not always taken as sufficient reason to refute a theory; they are often ignored and the theory remains unfalsified. It is very difficult for scientists to be sure when a theory should be refuted. Perhaps Popper’s understanding of falsification as (1) T → p; (2) ¬p; (3) therefore ¬T, where ‘T’ is the theory or hypothesis and ‘p’ is the predicted observation, is too simple. Experiments are only conclusive if we can be sure that the anomalous predictions are not caused by interfering factors that have not been examined.
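One way of making this worry explicit, on the assumption that a prediction follows not from the theory alone but from the theory together with auxiliary assumptions A about the experimental set-up, is the following sketch:

\[
\begin{aligned}
(1)\;& (T \wedge A) \rightarrow p && \text{theory plus auxiliary assumptions yield the prediction}\\
(2)\;& \neg p && \text{the prediction fails}\\
(3)\;& \therefore\ \neg(T \wedge A),\ \text{i.e. } \neg T \vee \neg A && \text{either the theory or an auxiliary assumption is false}
\end{aligned}
\]

The failed prediction alone does not tell us whether to blame T or A, which is precisely why scientists can legitimately hesitate before declaring a theory refuted.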
It also seems that many of our very successful theories would have been thrown out if science were understood by rigorous falsification. Newton’s theory, for example, could not initially account for irregularities in planetary orbits, because the gravitational pull of the other planets had not been taken into account; yet the theory was kept despite these anomalies. Falsification does not seem a realistic model of what scientists actually do or of the way science actually progresses. How often do scientists actually try to falsify their theories?
The 20th century philosopher of science Imre Lakatos claimed that scientific progress consists of competing research programmes. Newton’s theory of gravity, Einstein’s theory of relativity and Freud’s theory of psychoanalysis are all examples of research programmes. Each research programme contains a “hard core” of assumptions which all members of that research programme accept; a “negative heuristic,” a rule that forbids changing the assumptions of the hard core; and a “positive heuristic,” a set of less specific guidelines used to decide how the programme’s theories should be developed. Each programme attempts to build its theories in the way its scientific community sees fit. Lakatos claimed that scientists do not try to falsify their theories; they work to strengthen them further. The hard core of the research programme is always defended, and any anomalies that crop up are accommodated by adjusting the auxiliary assumptions around it. Popper’s idea of science as an independent, objective activity was rather naïve according to Lakatos.
Thomas Kuhn, another influential 20th century philosopher of science, also recognised the theory-laden nature of observation: the idea of theory-neutrality is a very inaccurate view of scientific method, since data are invariably contaminated by theoretical assumptions. In 1962 Kuhn published “The Structure of Scientific Revolutions.” He saw scientific progress not as a steady progression towards the truth, but as a series of relatively sudden changes in the world view of science. The Copernican revolution in astronomy, the Einsteinian revolution in physics, and the Darwinian revolution in biology are all examples of such changes in scientific world view.
Normal science consists of the ordinary day-to-day activities that scientists engage in when their discipline is not undergoing revolutionary change. These normal scientists belong to a particular paradigm. Within the paradigm scientists agree on a set of fundamental assumptions which are never questioned, similar to Lakatos’ hard core; they share a set of exemplars, problems that have been solved by means of those theoretical assumptions and that appear in the textbooks of the discipline; and they generally agree on the direction of research. In effect the paradigm is an entire scientific outlook.
However, over time anomalies build up. The job of the normal scientist is to try to eliminate these problems with as little change to the paradigm as possible. But sometimes the anomalies accumulate significantly and certain phenomena cannot be explained by the existing theories, causing a crisis in the paradigm. Confidence in the existing paradigm breaks down and the period of normal science grinds to a halt. A period of revolutionary science begins, in which new theories are postulated and eventually established, ending the revolutionary period. A paradigm shift has occurred, and after a few years the entire scientific community accepts the new outlook.
For Kuhn, truth is paradigm-relative. Two paradigms may be so completely different as to render any straightforward comparison impossible; there is no common language into which they can both be translated, and in a sense their adherents are, metaphorically, living in different worlds. Theories and their related concepts cannot be understood independently of the paradigm in which they are embedded. Scientific change, then, far from being a progression towards the truth, is in a sense directionless: later paradigms are just different. To call one paradigm better than another implies a common framework for evaluating them, which Kuhn denies exists. Kuhn also suggests that sociological and political factors play a part in deciding what direction new paradigms follow. The idea of scientific progress as a critical activity carried out by independent and theory-neutral scientists ignores the way science has developed in the past, whereas Kuhn takes the history of science into account.
This scientific relativism is very difficult for most to accept; surely science is more than mere relative subjectivity. Perhaps some statements, such as “on May 14th the sun rose at 7.10am in London,” are theory-neutral across paradigms. However, Kuhn is not trying to cast doubt on the rationality of science, but to offer a more historically accurate picture of scientific progress. This is something which Karl Popper’s falsificationism does not seem to do. Even within periods of normal science in specific paradigms, falsification does not seem a very accurate model of scientific progress. Scientists try to defend their theories by making specific proposals and testing them; they do not set out to falsify their theories. Anomalies are often ignored, and sometimes scientists may even manipulate data to protect their hypotheses. Of course, provided their experiments have been published it is possible for other scientists to attempt to falsify the results in repeated experiments, but scientists tend to stick to their own research projects.
So it seems that in reality, although Popper’s model of deductively falsifying theories might make science better by linking logic with empiricism, as opposed to verifying theories, which cannot be done because of the problem of induction, it is not really the way science progresses. Something like Lakatos’ idea of research programmes seems to be the way most scientists work in periods of normal science. It also cannot be denied that there have been revolutionary periods of science, paradigm shifts as Kuhn calls them. However, it is unclear whether these shifts in world view might stop occurring once our scientific knowledge allows us to explain every phenomenon we possibly can, and whether truth really is completely paradigm-relative.