Inductivism

Inductivism is the traditional model of scientific method attributed to Francis Bacon, who in 1620 vowed to subvert allegedly traditional thinking. In the Baconian model, one observes nature, proposes a modest law to generalize an observed pattern, confirms it by many observations, ventures a modestly broader law, and confirms that, too, by many more observations, while discarding disconfirmed laws. The laws grow ever broader but never much exceed careful, extensive observation. Thus freed from preconceptions, scientists gradually uncover nature's causal and material structure.

Around 1740, David Hume found multiple obstacles to the use of experience to infer causality. Hume noted the illogicality of enumerative induction—unrestricted generalization from particular instances to all instances, stating a universal law—since humans observe only sequences of sensory events, not cause and effect. Humans thus perceive neither logical nor natural necessity or impossibility among events. Later philosophers would select, highlight, and nickname Humean principles—Hume's fork, the problem of induction, and Hume's law—although Hume accepted the empirical sciences as inevitably inductive, after all.

Alarmed by Hume's seemingly radical empiricism, Immanuel Kant identified its apparent opposite, rationalism, as favored by Descartes and by Spinoza. Seeking middle ground, Kant located in the mind the necessity bridging the world in itself to human experience: the mind's innate constants thus determine space, time, and substance, and determine the correct scientific theory. Though protecting both metaphysics and Newtonian physics, Kant discarded scientific realism by restricting science to tracing appearances (phenomena), not unveiling reality (noumena). Kant's transcendental idealism launched German idealism—increasingly speculative metaphysics—while philosophers continued an awkward confidence in the empirical sciences as inductive.

Refining Baconian inductivism, John Stuart Mill posed his own five methods of discerning causality to describe the reasoning whereby scientists exceed mere inductivism. In the 1830s, opposing metaphysics, Auguste Comte explicated positivism, which, unlike the Baconian model, emphasized making predictions, confirming them, and laying down scientific laws irrefutable by theology or metaphysics. Finding experience to show the uniformity of nature and thereby justify enumerative induction, Mill accepted positivism: the first modern philosophy of science, which, simultaneously, was a political philosophy whereby only scientific knowledge was reliable knowledge.

Around 1840, William Whewell thought that the inductive sciences, so called, were not so simple after all, and asked recognition of "superinduction", an explanatory scope or principle invented by the mind to unite facts, but not present in the facts. Mill would have none of hypotheticodeductivism, posed by Whewell as science's method, which Whewell believed could sometimes, via other considerations upon the evidence, render scientific theories of known metaphysical truth. By 1880, C S Peirce had clarified the basis of deductive inference and, although recognizing induction, proposed a third type of inference that Peirce called "abduction", now otherwise termed inference to the best explanation (IBE).

Since the 1920s, although opposing all metaphysical inference via scientific theories, the logical positivists sought to understand scientific theories as provably false or true strictly with respect to observations. Though accepting hypotheticodeductivism as the origin of theories, they launched verificationism, whereby Rudolf Carnap tried but never succeeded in formalizing an inductive logic whereby a universal law's truth with respect to observational evidence could be quantified as "degree of confirmation". Asserting a variant of hypotheticodeductivism termed falsificationism, Karl Popper from the 1930s onward was the first especially vocal critic of inductivism and verificationism as utterly flawed models of science. In 1963, Popper declared that enumerative induction is a myth. Two years later, Gilbert Harman claimed that enumerative induction is a masked effect of IBE.

Thomas Kuhn's 1962 book—explaining that periods of normal science, each but a paradigm of science, are overturned by revolutionary science, whose paradigm becomes the new normal science—dissolved logical positivism's grip in the Anglosphere, and inductivism fell. Besides Popper and Kuhn, other postpositivist philosophers of science—including Paul Feyerabend, Imre Lakatos, and Larry Laudan—have all but unanimously rejected inductivism. Among them, those who have asserted scientific realism—that scientific theory can and does offer approximately true understanding of nature's unobservable aspects—have tended to claim that scientists develop approximately true theories about nature through IBE. And yet IBE, which, so far, cannot be trained, lacks particular rules of inference. By the 21st century's turn, inductivism's heir was Bayesianism.[1]
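As a brief formal aside, not drawn from the cited sources: Bayesianism treats confirmation as the updating of degrees of belief by Bayes' theorem, where H is a hypothesis and E is evidence.

```latex
% Bayes' theorem: posterior credence in hypothesis H given evidence E.
\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]
% Confirmation becomes a matter of degree: E raises credence in H
% whenever P(E | H) > P(E), rather than verifying H outright.
```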

Scientific method

Until the 1960s, in a tradition traced to Francis Bacon in 1620, inductivism was presumed to be scientific method's proper form.[2] Even at the 21st century's turn, popular presentations of scientific discovery and progress suggested the Baconian model.[3] Until the 20th century, however, scientists generally knew their own philosophies well.[4] Einstein remarked, "Science without Epistemology is—in so far as it is thinkable at all—primitive and muddled".[4]

The past century was the first to produce scientists who were not philosopher-scientists.[4] Frequently unable to defend their works from intellectual attacks, scientists also generally cannot optimize methods and productivity.[4] Future scientific breakthroughs ought to be produced more by scientists who have mastered both their own specialties and the basics of philosophy of science, including method.[4] The major variants of scientific method are inductivism and hypotheticodeductivism.[5]

Inductivism

Inductivism infers from observations of similar effects to similar causes, and generalizes unrestrictedly—that is, by enumerative induction—to a universal law.[5] Extending inductivism, Comtean positivism explicitly aims to oppose metaphysics, shuns imaginative theorizing, emphasizes observation, then makes predictions, confirms them, and states laws.

HD model

Hypotheticodeductivism introduces some explanation or principle from any source, such as imagination or even dreams, infers logical consequences of it—that is, deductive inferences—and compares them with observation, perhaps experimental.[5] In simple or Whewellian hypotheticodeductivism, one might accept a theory as true or probably true if its predictions withstand testing and it meets other considerations, such as consilience.[6] Yet the falsificationist variant of the HD model forbids ever inferring the truth of a theory, whether as to axioms, laws, or principles.

Affirming

Inductivism, as well as its positivist extension and Whewellian hypotheticodeductivism, too, relies on the deductive fallacy of affirming the consequent—If A, then B; indeed B; therefore A[7]—illogical, since even if B is observed, B could instead be the consequence of X or Y or Z, or of XYZ combined, A being but one possibility among potentially infinite ones. Or B could be the consequence of AX or AY or AZ combined, or of AXYZ combined. Or the sequence A trailed by B could be the consequence of U—simply constant conjunction, but not causality.
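Schematically, the fallacy can be displayed as follows (a standard logic-textbook rendering, not drawn from the cited sources):

```latex
% Affirming the consequent (deductively invalid):
%   If A, then B;  B;  therefore A.
\[
A \rightarrow B, \quad B \;\not\vdash\; A
\]
% Invalid because B may follow from rival antecedents instead:
\[
(X \rightarrow B) \;\wedge\; (Y \rightarrow B) \;\wedge\; (Z \rightarrow B)
\]
% so observing B leaves A only one candidate among many.
```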

Uncertainty

No confirmations of an explanation's predictions verify the explanation as true, since any phenomenon can host multiple logically possible explanations—the problem of underdetermination—leaving the move from data to theory lacking any formal, that is, logical, rules of inference. Also, we can readily find confirming instances of a theory's predictions even if most of the theory's predictions are false. As observation is laden with theory, scientific method cannot ensure that one will perform experiments inviting disconfirmations, or even notice incompatible findings. Even if they are noticed, the experimenter's regress permits one to discard them, while ontological relativity permits one to reinterpret them.

Denying

Postulates are principles accepted without proof that, guiding axioms as rules of inference, lead to conclusions upon input of information. Any number of logically invalid and even empirically false explanations can be maintained by deductive inference from postulates. A natural deductive reasoning form, rather, is logically valid without postulates, true simply by the principle of noncontradiction. Falsificationism is hypotheticodeductivism restricted to the natural deductive form of denying the consequent—If A, then B; not B; thus not A—which is logically valid, while confirmed predictions and other considerations never justify belief in a theory as true or probably true, but simply corroborate the theory.[8]
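The contrast with the valid form can likewise be displayed schematically (again a standard rendering, not from the cited sources):

```latex
% Denying the consequent (modus tollens), deductively valid:
%   If A, then B;  not B;  therefore not A.
\[
A \rightarrow B, \quad \neg B \;\vdash\; \neg A
\]
% A failed prediction B thus refutes theory A outright, while a
% confirmed prediction merely corroborates A without proving it.
```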

Inductivist reign

Bacon

In 1620 in England, Francis Bacon's Novum Organum alleged that scholasticism's Aristotelian method of deductive inference via syllogistic logic upon traditional categories was impeding society's progress.[9] Admonishing the alleged classic induction for proceeding immediately from "sense and particulars up to the most general propositions", then deducing generalizations onto new particulars without empirically verifying them,[10][11] Bacon stated the "true and perfect Induction".[10] In Bacon's inductivist method, a scientist—at the time, a natural philosopher—ventures an axiom of modest scope, makes many observations, accepts the axiom if it is confirmed and never disconfirmed, then ventures another axiom only modestly broader, collects many more observations, and accepts that axiom, too, only if it is confirmed, never disconfirmed.[10]

In Novum Organum, Bacon used the term hypothesis rarely, and usually then in the pejorative senses prevalent in Bacon's day.[12] Yet Bacon's term axiom is now more similar to hypothesis than to law, which today is nearer to a synonym of axiom, a rule of inference. By the 20th century's close, historians and philosophers of science generally agreed that Bacon's actual counsel was far more balanced than traditionally thought, although some assessments went so far as to indicate that Bacon was a falsificationist, presumably as far from inductivism as one can get.[12] Although Bacon was not a strict inductivist and included aspects of hypotheticodeductivism,[12] those aspects of Bacon's model were long glossed over by others,[12] and the "Baconian model" was regarded as true inductivism—which mostly it was.[13]

During this repeating process of modest axiomatization confirmed by extensive and minute observations, axioms expand in scope and deepen in penetrance tightly in accordance with all the observations—opening a clear and true view of nature as it exists independently of human preconceptions—as the general axioms among observables render matter's unobservable structure and nature's causal mechanisms perceptible.[14] As Bacon provides no clear way to frame axioms, let alone develop principles or theoretical constructs universally true, researchers might observe and collect data endlessly.[10] For this vast venture, Bacon advised precise record keeping and collaboration among researchers—a vision resembling today's research institutes—while the true understanding of nature would permit technological innovation, heralding a New Atlantis.

Newton

Modern science arose against Aristotelian physics.[15] Both Aristotelian physics and Ptolemaic astronomy were geocentric, and the latter was a basis of astrology, itself a basis of medicine. Nicolaus Copernicus proposed heliocentrism, perhaps to better fit astronomy to Aristotelian physics' fifth element—the universal essence, or quintessence, or aether—and its intrinsic motion of perpetual, perfect circles. Yet Johannes Kepler modified Copernican orbits to ellipses; soon after, Galileo Galilei's telescopic observations disputed the Moon's composition of aether, and his experiments with earthly bodies attacked Aristotelian physics. Galilean principles were subsumed by René Descartes, whose Cartesian physics structured his cosmology, modeling heliocentrism and employing mechanical philosophy—whose first principle, stated by Descartes, was No action at a distance. Mechanical philosophy was so termed by the chemist Robert Boyle, who, seeking for his own discipline a mechanical basis via corpuscularism, sought chemistry's divorce from alchemy.

In 1666, Isaac Newton had fled Cambridge for the countryside to escape the plague.[16] Isolated, he applied rigorous experimentation and mathematics, including his development of calculus, and reduced both terrestrial motion and celestial motion—both physics and astronomy—to one theory stating Newton's laws of motion, several corollary principles, and the law of universal gravitation, set in a framework of postulated absolute space and absolute time. Newton's unification of celestial and terrestrial phenomena overthrew the vestiges of Aristotelian physics, and disconnected physics from chemistry, each of which then followed its own course.[16] Newton became the exemplar of the modern scientist, and the Newtonian research program became the modern model of knowledge.[16] Although absolute space, revealed by no experience, and a force acting at a distance discomforted Newton, he and physicists for some 200 years more would seldom suspect the fictional character of the Newtonian foundation, as they believed that physical concepts and laws are not "free inventions of the human mind", as Einstein in 1933 called them, but can be inferred logically from experience.[17] Supposedly, Newton maintained that, toward his gravitational theory, he had "framed" no hypotheses.

Hume

Around 1740, Hume aggressively sorted truths into two divergent categories—"relations of ideas" versus "matters of fact and real existence"—as later termed Hume's fork. "Relations of ideas", such as the abstract truths of logic and mathematics, known true without experience of particular instances, offer a priori knowledge. Yet the quests of empirical science concern "matters of fact and real existence", known true only through experience, and thus a posteriori knowledge. As no number of examined instances logically entails the conformity of unexamined instances, a universal law's unrestricted generalization bears no logical basis; one justifies it only by adding the principle of the uniformity of nature—itself unverified, thus a major induction invoked to justify a minor induction—an obstacle to empirical science later termed the problem of induction.[18]

For Hume, humans experience sequences of events, not cause and effect, via pieces of sensory data whereby similar experiences might exhibit constant conjunction—first an event like A, and then always an event like B—but there is no revelation of causality to reveal either necessity or impossibility.[19][20] Although Hume apparently enjoyed the scandal that trailed his explanations, Hume did not view them as fatal,[19] and found enumerative induction to be among the mind's unavoidable customs, required in order for one to live.[21] Rather, Hume sought to counter the Copernican displacement of humankind from the Universe's center, and to redirect intellectual attention to human nature as the central point of knowledge.[22]

Hume proceeded with inductivism not only toward enumerative induction but toward unobservable aspects of nature, too. Rather than demolishing Newton's theory, then, Hume placed his own philosophy on a par with it.[23] Though skeptical of common metaphysics or theology, Hume accepted "genuine Theism and Religion" and found that a rational person must believe in God to explain the structure of nature and the order of the universe.[24] Still, Hume had urged, "When we run over libraries, persuaded of these principles, what havoc must we make? If we take into our hand any volume—of divinity or school metaphysics, for instance—let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion".[25]

Kant

Awakened from "dogmatic slumber" by Hume's work, Immanuel Kant sought to explain how metaphysics is possible.[25] Kant's 1781 Critique of Pure Reason introduced the contrast of rationalism, whereby some knowledge results not by empiricism, but instead by "pure reason". Concluding it impossible to know reality in itself, however, Kant discarded the philosopher's task of unveiling appearance to view the noumena, and limited science to organizing the phenomena.[26] Reasoning that the mind contains categories organizing sense data into the experiences of substance, space, and time,[27] Kant thereby inferred the uniformity of nature as a priori knowledge.[28]

Kant sorted statements into two types: the analytic and the synthetic. The analytic are true by their terms' arrangement and meanings—thus are tautologies, merely logical truths, true by necessity—whereas the synthetic arrange meanings to refer to factual states, which are contingent. Yet Kant found a class of synthetic statements that, although contingent in reference, were through the mind true by necessity.[20] Thus discovering the synthetic a priori, Kant precariously saved both physics—at the time Newtonian—and metaphysics, but incidentally discarded scientific realism and developed transcendental idealism, which triggered German idealism, including G W F Hegel's absolute idealism.[26][29]

Positivism

Comte

In the French Revolution's aftermath, fearing Western society's ruin again, Auguste Comte was fed up with metaphysics.[30] As suggested in 1620 by Francis Bacon,[31] developed by Saint-Simon, and promulgated in the 1830s by his former student Comte, positivism was the first modern philosophy of science.[32] Human knowledge had evolved from religion to metaphysics to science, said Comte, which had flowed from mathematics to astronomy to physics to chemistry to biology to sociology—in that order—describing increasingly intricate domains, all of society's knowledge having become scientific, as questions of theology and of metaphysics were unanswerable.[33] Comte found enumerative induction reliable on the basis of available experience, and asserted science's use as improving human society, not attaining metaphysical truth.[31]

According to Comte, scientific method constrains itself to observations, but frames predictions, confirms them, and states laws—positive statements—irrefutable by theology or by metaphysics, and laid as foundation for subsequent knowledge.[31] Later, concluding science insufficient for society, Comte launched Religion of Humanity, whose churches, honoring eminent scientists, led worship of humankind.[31][32] Comte coined the term altruism,[32] and emphasized science's application for humankind's social welfare, which would be revealed by Comte's spearheaded science, sociology.[31] Comte's influence is prominent in Herbert Spencer of England and Émile Durkheim of France establishing modern empirical and functionalist sociology.[34] Influential in the latter 19th century, positivism was often linked to evolutionary theory,[31] yet was eclipsed in the 20th century by neopositivism: logical positivism and logical empiricism.[32]

Mill

J S Mill thought, unlike Comte, that scientific laws were susceptible to recall or revision.[31] And Mill withheld from Comte's Religion of Humanity.[31] Still, regarding experience as justifying enumerative induction by having shown the uniformity of nature,[28] Mill was fond of Comte's positivism.[31][34] Mill noted that within the empirical sciences, the natural sciences had well surpassed the Baconian model, too simplistic, whereas the human sciences, such as moral and political inquiry, had not attained even Baconian scrutiny of immediate experience and enumerative induction.[11] Similarly, economists of the 19th century tended to pose explanations a priori, and to reject disconfirmation by posing circuitous routes of reasoning to maintain their a priori laws.[35] In 1843, Mill's A System of Logic introduced Mill's methods,[36] explaining the five principles whereby causal laws can be identified to enhance the empirical sciences as the inductive sciences.[34] For Mill, all explanations have the same logical structure, while society can be explained by natural laws.[34]

Social

In the 17th century, England had taken the lead in science, which shifted to France in the 18th, then to Germany in the 19th—and to America in the 20th—transitions affecting each country's contemporary roles for science.[34] Before Germany took science's lead, France held it, immediately before the French Revolution.[34] Amid the social crisis of its aftermath, Comte inferred that society's natural condition is order, not change.[34] As in Saint-Simon's industrial utopianism, Comte's vision—later well expressed by modernity—posed science as the only true, objective knowledge, and thus as industrial society's secular spiritualism, too, whereby science is the political and ethical guide.[34]

Positivism arrived in Britain well after science's lead had departed.[34] British positivism, found in the Victorian ethics of utilitarianism, for instance J S Mill's, and later in the social evolutionism of Herbert Spencer, associated science with moral improvement, but rejected science as political leadership.[34] For Mill, all explanations held the same logical structure—thus, society could be explained by natural laws—yet Mill criticized "scientific politics".[34] From its outset, then, sociology was pulled between moral reform and administrative policy.[34]

Spencer helped popularize the word sociology in England, and compiled vast data aiming to build general theory through empirical analysis.[34] Spencer's 1850 book Social Statics shows Comtean as well as Victorian concern for social order.[34] Yet whereas Comte's social science was social physics, Spencer would take biology—via Darwinism, so called, which arrived in 1859—as the model of science.[34] Spencer's functionalist-evolutionary account identified social structures as functions that adapt, thus explaining social change.[34]

In France, Comte's sociological influence was prominent in Émile Durkheim, whose 1895 Rules of Sociological Method also posed natural science as sociology's model.[34] For Durkheim, social phenomena are social functions without psychologism—that is, without the consciousness of individuals—while Durkheim's sociology was antinaturalist, in that social facts differ from natural facts.[34] Still, social representations were real entities to be examined, without prior theory, by assessing raw data and discovering causal laws, according to Durkheim.[34] Durkheim's sociology was a realist and inductive science where theory would trail observations, while method proceeded from social facts to hypotheses to general laws—their priority being their causal accord—identified inductively.[34]

Logical

The First World War erupted in 1914 and closed in 1918; its 1919 treaty imposed a plan of reparations that the British economist John Maynard Keynes immediately and vehemently predicted would crumble German society via hyperinflation—a prediction fulfilled by 1923.[37] Upon the solar eclipse of 29 May 1919, Einstein's gravitational theory apparently overthrew Newton's—a revolution in science by astonishing prediction[38]—bitterly resisted by many scientists but completed nearing 1930.[39] And yet race science flourished, appearing quite scientific to many, not obviously pseudoscience,[40] while overtaking medicine and public health[41] with excesses of negative eugenics.[42]

In the 1920s, philosophers and scientists in the Berlin Circle and Vienna Circle were appalled by the flaring nationalism, racism, and bigotry, yet perhaps no less by countermovements toward metaphysics, intuitionism, and mysticism.[43][44] Inspired by developments in philosophy,[45] mathematics,[46] logic,[47] and physics,[48] they sought to give the world a universal, transparent, truly scientific language whereby falsity or truth could be verified either logically or empirically—no more confusion and madness.[44] Seeking a radical reform of philosophy to convert it into a scientific philosophy that, emulating the empirical sciences, would become a special science,[49][50] they called themselves the logical positivists.

The Vienna Circle, led by Moritz Schlick, included Otto Neurath, and was converted to logical positivism by Rudolf Carnap, introduced to Schlick by Hans Reichenbach, who led the Berlin Circle and with whom Carl Hempel, later of the Vienna Circle, had studied.[51][52] Rejecting Kant's synthetic a priori, they asserted Hume's fork[53] and staked it at the analytic/synthetic gap to dissolve confusions by freeing language from "pseudostatements", added the verifiability criterion—that only statements logically or empirically verifiable are cognitively meaningful—and presumed a semantic gulf between observational versus theoretical terms.[54] Withholding credence from science's claims about unobservable aspects of nature,[55] thus rejecting scientific realism,[56] they embraced instrumentalism, whereby scientific theory is simply useful to predict human observations,[56] while sometimes regarding talk of unobservables as either metaphorical[57] or meaningless.[58]

Pursuing both Bertrand Russell's logical atomism, deconstructing language into supposedly elementary parts, and Russell's logicism, reducing swaths of mathematics to symbolic logic, logical positivists envisioned both everyday language and mathematics—thus physics, too—sharing a logical syntax in symbolic logic. To gain cognitive meaningfulness, theoretical terms would be translated, via correspondence rules, into observational terms—revealing theories' empirical claims—and then empirical operations would verify them within the observational structure, related to the theoretical structure through the logical syntax, whereby a logical calculus could verify the theory's falsity or truth. With this program, termed verificationism, logical positivists battled the Marburg school's neoKantianism, Husserlian phenomenology, and, as the very epitome of their opposition, Heidegger's "existential hermeneutics", accused by Carnap of the most flagrant "pseudostatements".[44][51]

Opposition

In friendly spirit, the Vienna Circle's Otto Neurath nicknamed Karl Popper, a fellow philosopher in Vienna, the "Official Opposition".[59] Popper asserted that any effort to verify a scientific theory, or even to inductively confirm a scientific law, was fundamentally misguided.[59] Popper asserted that although exemplary science is not dogmatic, science inevitably relies on "prejudices". Popper accepted Hume's criticism—the problem of induction—as making verification logically impossible. Popper accepted hypotheticodeductivism, sometimes terming it deductivism, but restricted it to denying the consequent and thereby, refuting verificationism, reframed it as falsificationism. As to a law or theory, Popper found confirmation of probable truth untenable,[59] as any number of confirmations is finite while a universal law's predictive run is infinite, leaving empirical evidence to support a probability of truth approaching 0%. In fact, Popper found that a scientific theory is better if its truth is more improbable.[60] Popper asserted that logical positivism "is defeated by its typically inductivist prejudice".[61]
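The probabilistic point admits a simple reconstruction (a sketch under an assumption of independent instances, not Popper's own formulation): a universal law quantifies over infinitely many instances, so finite evidence cannot lift its probability above zero.

```latex
% Sketch: a universal law L covers instances i = 1, 2, 3, ...
% If each untested instance independently conforms with probability p < 1:
\[
P(L) \;\le\; \lim_{n \to \infty} p^{\,n} \;=\; 0
\]
% Hence any finite stock of confirmations leaves P(L) at zero.
```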

Problems

Having highlighted Hume's problem of induction, John Maynard Keynes posed logical probability as its answer—but then figured it did not quite succeed.[62] Bertrand Russell found Keynes's Treatise on Probability the best examination of induction and, if read with Jean Nicod's Le Probleme logique de l'induction as well as R B Braithwaite's review of that in the October 1925 issue of Mind, to provide "most of what is known about induction", although the "subject is technical and difficult, involving a good deal of mathematics".[63]

Rather than validate enumerative induction—the futile task of showing it to be a deductive inference—Herbert Feigl as well as Hans Reichenbach, apparently independently, sought to vindicate it by showing it simply useful, either a "good" or the "best" method for the goal at hand, making predictions.[64] Feigl posed it as a rule, thus neither a priori nor a posteriori but a fortiori.[64] Reichenbach's treatment, similar to Pascal's wager, posed it as entailing greater predictive success than the alternative of not using it.[64]

In 1936, Rudolf Carnap switched the goal from verification, clearly impossible, to confirmation,[52] while A J Ayer identified two types of verification—strong versus weak—the strong impossible, but the weak attained when a statement is probable.[65] Carnap sought to formalize inductive logic through probability as "degree of confirmation".[52] Though employing abundant logical and mathematical tools, Carnap never attained the goal: his formulations of inductive logic always held a universal law's degree of confirmation at zero.[52]

Kurt Gödel's incompleteness theorem of 1931 had made the logical positivists' logicist reduction doubtful, and Alfred Tarski's undefinability theorem of 1934 made it hopeless.[66] Some, including Carl Hempel, still argued that logicism is possible,[66] since, for instance, nonEuclidean geometry had shown that even the truths of geometry derive via axioms taken as postulates. As to formalism, rather—which converts talk into logical forms and axioms but does not reduce it to logic—neopositivists accepted hypotheticodeductivism for theory development, but held to symbolic logic as the language in which to justify, by verification or confirmation, its results.[67] This was stymied by Hempel's paradox of confirmation, whereby, to formalize confirmatory evidence of a hypothesized universal law, All ravens are black—logically equivalent to All nonblack things are not ravens—one could, at least in the symbolic logic, observe any nonblack thing, even a white shoe, and report a confirming instance of the law All ravens are black.[67]
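The equivalence driving the paradox can be stated in standard first-order notation (a conventional rendering, not from the cited sources):

```latex
% "All ravens are black" and its contrapositive are logically equivalent:
\[
\forall x \,\bigl(R(x) \rightarrow B(x)\bigr)
\;\equiv\;
\forall x \,\bigl(\neg B(x) \rightarrow \neg R(x)\bigr)
\]
% So a white shoe, a nonblack nonraven, instantiates the second form
% and thereby formally "confirms" the first: Hempel's paradox.
```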

Early criticism

During the 1830s and 1840s, the French Auguste Comte and the British J S Mill were the leading philosophers of science.[68] Debating in the 1840s, J S Mill claimed that science proceeds by inductivism, whereas William Whewell, also British, claimed it proceeds by hypotheticodeductivism.[5]

Whewell

William Whewell found the "inductive sciences" not so simple, but, amid the dominance of inductivism, described "superinduction".[69] Whewell proposed recognition of "the peculiar import of the term Induction", as "there is some Conception superinduced upon the facts", that is, "the Invention of a new Conception in every inductive inference". Rarely spotted by Whewell's predecessors, such mental inventions rapidly evade notice.[69] Whewell explained,

"Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception, men can no longer easily restore them back to the detached and incoherent condition in which they were before they were thus combined".[69]

These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termed consilience—that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes said "logic of induction", and yet stressed that it lacks rules and cannot be trained.[69] Whewell also pointed out that Bacon himself was not a strict inductivist, for Bacon had actually, said Whewell, "held the balance, with no partial or feeble hand, between phenomena and ideas".[12]

In Whewell's hypotheticodeductivism, one discovers from any source imaginable—perhaps even a dream—a model or principle of explanatory power, then deductively infers logical consequences of it, and tests those consequences against observation, which can include experimental outcomes.[6] If they are disconfirmed, then the theory is deductively inferred to be false; if they are confirmed, then perhaps, upon other explanatory considerations, the theory will be accepted as true, approximately true, or probably true.[6]
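The deductive step in disconfirmation is simply modus tollens, which can be sketched (in standard notation, not Whewell's own) as:

```latex
(T \rightarrow O), \quad \neg O \;\;\therefore\;\; \neg T
```

where $T$ is the theory (with its auxiliary assumptions) and $O$ a predicted observation. Confirmation, by contrast, would require inferring $T$ from $T \rightarrow O$ and $O$, the deductively invalid move of affirming the consequent; hence the hedged "perhaps" when consequences are confirmed.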

Peirce

As Kant had noted in 1787, theory of deductive inference had not progressed since antiquity.[70] In the 1870s, C S Peirce and Gottlob Frege, unbeknownst to one another, revolutionized deductive logic through vast efforts identifying it with mathematical proof.[70] Originator of pragmatism—or, since 1905, pragmaticism, distinguished from more recent appropriations of Peirce's original term—the American Peirce recognized induction, too, but persistently insisted on a third type of inference that Peirce variously termed abduction or retroduction or hypothesis or presumption.[70] Later philosophers gave Peirce's abduction, etc, the synonym inference to the best explanation (IBE).[71] Many philosophers of science espousing scientific realism have maintained that IBE is how scientists develop approximately true scientific theories about nature.[72]

Inductivist fall

After the defeat of National Socialism at the close of World War II in 1945, logical positivists lost their revolutionary zeal and led the emergence of philosophy of science as a distinct subdiscipline within the Anglosphere's philosophy academia, researching such questions and aspects of scientific theory and knowledge.[51] The movement shifted, thus, into a milder variant better termed logical empiricism or, in any case, neopositivism, led principally by Rudolf Carnap, Hans Reichenbach, and Carl Hempel.[51] Amid apparent contradictions in its central tenets—the verifiability principle, the analytic/synthetic distinction, the observation/theory gap—Hempel in 1965 abandoned ship for a far wider conception of "degrees of significance", signaling neopositivism's official demise.[73] Neopositivism became mostly maligned,[74][75] while credit for its fall generally has gone to W V O Quine and to T S Kuhn,[51] although its "murder" was first confessed to, quite prematurely, in the 1930s by K Popper.[76]

Fuzziness

Willard Van Orman Quine's 1951 paper "Two dogmas of empiricism"—explaining semantic holism, whereby any term's meaning is networked to one's beliefs about the entire world—attacked Hume's fork, whereby the analytic/synthetic division was supposedly unbridgeable, a principle apparently untenable.[73] Among verificationism's greatest internal critics, Carl Hempel had recently concluded the same as to the verifiability criterion, which would cast not only religious assertions and metaphysical statements, but even scientific laws of universal type, as meaningless.[77] In 1958, Norwood Hanson's book Patterns of Discovery subverted the supposed gap between observation and theory—the assumption that direct observation permits neutral comparison between theories—by explaining that even direct observations, the so-called scientific facts, are laden with theory, which guides the collection, sorting, prioritization, and interpretation of direct observations, as well as the very ability to apprehend a phenomenon in the first place.[78] Meanwhile, as to all knowledge generally, Quine's thesis eroded foundationalism, which retreated to modesty.[79]

Revolutions

Thomas Kuhn's landmark book of 1962, The Structure of Scientific Revolutions, was first published, ironically, in a volume of the International Encyclopedia of Unified Science—a project begun by logical positivists—and somehow, at last, unified the empirical sciences by freeing them from the physics model and opening them to assessment via history and sociology.[80] Lacking heavy use of mathematics and logic's formal language—an approach introduced in the 1920s by Rudolf Carnap—Kuhn's book, powerful and persuasive, was written in natural language open to laypersons.[80]

Structure finds science to be puzzlesolving toward a vision projected by the "ruling class" of a scientific specialty's community, whose "unwritten rulebook" dictates proper scientific problems and solutions, altogether normal science.[81] The scientists reinterpret ambiguous data, discard anomalous data, and try to stuff nature into the box of their shared paradigm—a theoretical matrix or fundamental view of nature.[81] At last, compatible data become scarce, anomalies accumulate, and "crisis" ensues.[81] Some young scientists, newly in training, defect to revolutionary science—which arises by explaining both the normal and the anomalous data simultaneously—whose success in solving the scientific problems makes it a new "exemplar", one that, however, contradicts normal science.[81]

Couched in incompatible languages, rival paradigms are incommensurable.[81] Trying to resolve conflict, scientists talk past each other, as even direct observations—like the Sun "rising"—get differing interpretations. Some working scientists convert by a perspectival shift that—to their astonishment—snaps the new paradigm, suddenly obvious, into sight. Others, never attaining such a gestalt switch, remain holdouts, committed for life to the old paradigm, but one by one die, while the new exemplar—the new, unwritten rulebook—settles in as normal science.[81] Thus, a revolution in science is fulfilled. The old theoretical matrix becomes so shrouded by the meanings of terms in the new theoretical matrix that even philosophers of science misinterpret the old one.[81]

Kuhn critically destabilized confidence in foundationalism, which was generally presumed—though erroneously—to be a key tenet of logical empiricism.[82][83] As logical empiricism was extremely influential in the social sciences,[84] Kuhn's ideas were rapidly adopted by scholars in disciplines well outside natural sciences.[80] Kuhn's thesis in turn was attacked even by opponents of positivism.[85] In Structure's 1970 postscript, Kuhn asserted that science at least has no algorithm—and on that even most of Kuhn's critics agreed.[85] Reinforcing Quine's assault, Kuhn ushered the Anglosphere's academia into postpositivism or postempiricism.[51][80]

Falsificationism

Karl Popper's 1959 book proposing falsificationism, originally published in German in 1934, reached the Anglosphere and was soon mistaken for a new type of verificationism,[76][86] which it in fact refuted.[76][86][87] Falsificationism's demarcation criterion, falsifiability, grants a theory the status scientific—simply, empirically testable—not the status meaningful, a status that Popper did not aim to arbitrate.[86] Popper found no scientific theory either verifiable or, as in Carnap's "liberalization of empiricism", confirmable,[86][88] and found unscientific, metaphysical, ethical, and aesthetic statements often rich in meaning while also underpinning or fueling science as the origin of scientific theories.[86] The only confirmations particularly relevant are those of risky predictions,[89] such as ones conventionally predicted to fail.

Postpositivism

In 1967, historian of philosophy John Passmore concluded, "Logical positivism is dead, or as dead as a philosophical movement ever becomes",[90] and it became philosophy of science's bogeyman.[75] Kuhn's thesis, meanwhile, was attacked for portraying science as irrational, mere cultural relativism even akin to religious experience. Postpositivism's poster image became Popper's view of human knowledge as hypothetical, continually growing, always tentative, open to criticism and revision.[76]

A myth?

In 1945, Bertrand Russell had proposed enumerative induction as an "independent logical principle",[91] one "incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible".[92] And yet in 1963, Karl Popper declared, "Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure".[93] Popper's 1972 book Objective Knowledge opens, "I think I have solved a major philosophical problem: the problem of induction".[93]

Within Popper's schema—Problem1 → Tentative Solution → Critical Test → Error Elimination → Problem2—enumerative induction is "a kind of optical illusion" shrouded by steps of conjecture and refutation during a problem shift.[93] The tentative solution is improvised, an imaginative leap unguided by inductive rules, and the resulting universal law is deductive, an entailed consequence of all included explanatory considerations.[93] Controversy continued over whether there is any way to justify—or, as by Popper, simply dissolve—enumerative induction.[94]

Some have countered that, although inductive inference is often obscured by language, as especially common in news reporting announcing scientists' supposed experimental proof, and although such talk of enumerative induction ought to be tempered by proper clarification, inductive inference nonetheless is used liberally in science, which indeed requires it.[95] In fact, there are strong arguments on both sides.[94]

Enumerative induction obviously occurs as a conclusion, but its independence is unclear, as some interpret it as deriving, as a deductive consequence, from an underlying explanation of the observations.[96] In a 1965 paper, now classic, Gilbert Harman explained enumerative induction as simply a masked effect of IBE,[71] which philosophers of science espousing scientific realism have usually maintained is how scientists develop approximately true theories about the putative mind-independent world.[72] Thus, the view that Popper was obviously wrong[95] is structured by conflicting semantics.[97]

By now, enumerative induction has been shown to exist, but it is found rarely, as in programs of machine learning in artificial intelligence (AI).[98] Likewise, machines can be programmed to operate on probabilistic inference of near certainty.[99] Yet sheer enumerative induction is overwhelmingly absent from science conducted by humans.[98] Although much talked of, abduction or IBE proceeds by humans' imagination and creativity without rules of inference, and IBE's discussants have provided nothing resembling such rules.[96][98]
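As a toy illustration only, not drawn from the cited literature, sheer enumerative induction can be mechanized as a rule that generalizes from observed instances to a universal law; a minimal sketch:

```python
def enumerative_induction(observations):
    """Naive enumerative induction: if every observed instance of a kind
    exhibits one and the same property, conjecture the universal law
    'All K are P'.

    `observations` is a list of (kind, property) pairs. The rule ignores
    sample size and where the vocabulary of kinds and properties came
    from, which is exactly its logical weakness."""
    seen = {}
    for kind, prop in observations:
        seen.setdefault(kind, set()).add(prop)
    # A conjectured "law" survives only if the kind showed a single
    # property throughout the observations.
    return {kind: props.pop() for kind, props in seen.items() if len(props) == 1}

# Observing only black ravens yields "All ravens are black";
# one discordant swan blocks any law about swans.
data = [("raven", "black"), ("raven", "black"), ("swan", "white"), ("swan", "black")]
print(enumerative_induction(data))  # {'raven': 'black'}
```

The sketch also hints at why such inference is absent from human science: nothing in the rule invents the conceptual vocabulary that, on Whewell's account, every genuine induction "superinduces" upon the facts.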

Bogeyman

Popperian falsificationism, too, became widely criticized and eventually unpopular,[88][100][101] yet Popper has been the only philosopher of science often praised by scientists.[88] Likened to economists of the 19th century who took circuitous, protracted measures to refuse falsification of their preconceived principles,[35] the verificationists—that is, the logical positivists—became identified as pillars of scientism[102] holding strict inductivism[103] and foundationalism[82][83] to ground all empirical sciences on a foundation of direct sensory experience.[50] It became fashionable among philosophers to rehash neopositivism's alleged failures before launching argument for their own views,[50] often built atop misrepresentations and outright falsehoods about neopositivism.[50] Not seeking to overhaul the empirical sciences, neopositivists sought to understand them and to overhaul philosophy to be scientific, finding a place among the special sciences.[50]

Logical empiricists indeed posed unity of science to network all special sciences and reduce their laws—upon stating boundary conditions and supplying bridge laws within the deductivenomological model—to the fundamental science, that is, fundamental physics.[104] And Carnap sought to formalize inductive logic in order to confirm universal laws through probability as "degree of confirmation".[52] Yet the Vienna Circle pioneered nonfoundationalism, a legacy especially of Neurath, whose coherentism—the main alternative to foundationalism—likened science to a boat that scientists must rebuild at sea without ever touching shore.[83][105] And neopositivists sought rules of inductive logic not to regulate scientific discovery or theorizing, but to verify or confirm laws and theories once they are stated.[106] Practicing what Popper preached—conjectures and refutations—logical positivism ran its course and catapulted Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.[76]

Anarchy

In the early 1950s, studying philosophy of quantum mechanics under Popper at the London School of Economics, Paul Feyerabend found falsificationism not a breakthrough but rather obvious, the controversy over it suggesting, rather, philosophy's poverty.[107] And yet, having there witnessed attacks on inductivism as "the idea that theories can be derived from, or established on the basis of, facts", Feyerabend was impressed by a talk that Popper gave at the British Society for the Philosophy of Science. Popper showed that higher-level laws often conflict with, and cannot be reduced to, supposedly more fundamental laws.[107] The prime example was Kepler's laws of planetary motion, long famed to be—but not actually—reduced by Newton to the law of universal gravitation.[107][108] Having found falsificationism trivial, Feyerabend found the utter sham of inductivism pivotal.[107] Investigating, Feyerabend eventually found that among the diverse sciences the only unifying approach is Anything goes—often rhetoric, circular argumentation, even subterfuge—altogether methodological lawlessness, scientific anarchy.[109] At the persistence of claims that faith in induction is a necessary condition of reason, Feyerabend sardonically bid Farewell to Reason.[110]

Programmes

Imre Lakatos found Popper's falsificationism neither actually practiced by scientists nor even realistically practicable, but found Kuhn's paradigms of science more monopolistic than actual. Lakatos found, rather, multiple, vying research programmes coexisting. Each has a hard core of theories shielded from falsification, while a protective belt of malleable theories sustains revisions in order to extend the hard core into new empirical territories via theoretical progress, whereupon empirical progress corroborates the theoretical claims; this is how a research programme becomes progressive. Lakatos found inductivism rather farcical, never practiced in the history of science. Lakatos alleged that Newton had fallaciously posed his own research programme as inductivist to legitimize it publicly.[111]

Traditions

Lakatos's putative methodology of scientific research programmes was criticized by sociologists of science, and by some philosophers of science too, as too idealized and as omitting scientific communities' interplay with the wider society's social configurations and dynamics. Philosopher of science Larry Laudan identified the stable units as not research programmes but rather research traditions.

References

Notes

  1. Nola & Sankey, Popper, Kuhn and Feyerabend (Kluwer, 2000), p xi.
  2. Gauch, Scientific Method in Practice (Cambridge U P, 2003), p 81.
  3. Ron Curtis, "Narrative form and normative force: Baconian story-telling in popular science", Social Studies of Science, 1994 Aug;24(3):419–61.
  4. 1 2 3 4 5 Gauch, Scientific Method in Practice (Cambridge U P, 2003), pp 71–72.
  5. 1 2 3 4 Achinstein, Science Rules (JHU P, 2004), pp 127, 130.
  6. 1 2 3 Achinstein, Science Rules (JHU P, 2004), pp 127, 130–32.
  7. A concrete example makes the illogicality more apparent: If it is raining, then this patch of ground will be wet; this patch of ground indeed is wet; therefore it is raining.
  8. Achinstein, Science Rules (JHU P, 2004), pp 127, 130, 132–33.
  9. Sgarbi, Aristotelian Tradition and the Rise of British Empiricism (Springer, 2013), pp 167–68.
  10. 1 2 3 4 Simpson, "Francis Bacon", §k "Induction", in IEP.
  11. 1 2 Mill, A System of Logic (J W Parker, 1843), p 378: "It was, above all, by pointing out the insufficiency of this rude and loose conception of Induction, that Bacon merited the title so generally awarded to him, of the Founder of the Inductive Philosophy. The value of his own contributions to a more philosophical theory of the subject has certainly been exaggerated. Although (along with some fundamental errors) his writings contain, more or less fully developed, several of the most important principles of the Inductive Method, physical investigation has now far outgrown the Baconian model of Induction. Moral and political inquiry, indeed, are as yet far behind that conception. The current and approved modes of reasoning on these subjects are still of the same vicious description against which Bacon protested: the method almost exclusively employed by those professing to treat such matters inductively, is the very inductio per enumerationem simplicem which he condemns; and the experience, which we hear so confidently appealed to by all sects, parties, and interests, is still, in his own emphatic words, mera palpatio.
  12. 1 2 3 4 5 6 McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 48.
  13. McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 54.
  14. McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 52: "Bacon rejects atomism because he believes that the corollary doctrines of the vacuum and the unchangeableness of the atoms are false (II, 8). But he asserts the existence of real imperceptible particles and other occult constituents of bodies (such as 'spirit'), upon which the observed properties of things depend (II, 7). But how are these to be known? He asks us not to be 'alarmed at the subtlety of the investigation', because 'the nearer it approaches to simple natures, the easier and plainer will everything become, the business being transferred from the complicated to the simple...as in the case of the letters of the alphabet and the notes of music' (II, 8). And then, somewhat tantalizingly, he adds: 'Inquiries into nature have the best result when they begin with physics and end with mathematics'. Bacon believes that the investigator can 'reduce the non-sensible to the sensible, that is, make manifest things not directly perceptible by means of others which are' (II, 40)".
  15. Bolotin, Approach to Aristotle's Physics (SUNY P, 1998), p 1.
  16. 1 2 3 Stahl et al, Webs of Reality (Rutgers U P), ch 2 "Newtonian revolution".
  17. Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), p 436.
  18. Chhanda Chakraborti, Logic: Informal, Symbolic and Inductive (New Delhi: Prentice-Hall of India, 2007), p 381.
  19. 1 2 Flew, Dictionary (St Martin's, 1984), "Hume", p 156.
  20. 1 2 McWherter, The Problem of Critical Ontology (Palgrave, 2013), p 38: "Since Hume reduces objects of experience to spatiotemporally individuated instances of sensation with no necessary connection to each other (atomistic events), the closest they can come to a causal relation is a regularly repeated succession (constant conjunction), while for Kant the task of transcendental synthesis is to bestow unity and necessary connections upon the atomistic and contingently related contributions of sensibility".
  21. Gattei, Karl Popper's Philosophy of Science (Routledge, 2009), pp 28–29.
  22. Flew, Dictionary (St Martin's, 1984), "Hume", p 154: "Like Kant, Hume sees himself as conducting an anti-Copernican counter-revolution. Through his investigations of the heavens, Copernicus knocked the Earth, and by implication man, from the centre of the Universe. Hume's study of our human nature was to put that at the centre of every map of knowledge".
  23. Schliesser, "Hume's Newtonianism and anti-Newtonianism", § intro, in SEP.
  24. Redman, Rise of Political Economy as a Science (MIT P, 1997), p 183.
  25. 1 2 Flew, Dictionary (St Martin's, 1984), "Hume's fork", p 156.
  26. 1 2 Will Durant, The Story of Philosophy (New York: Pocket Books, 2006), p 457
  27. Fetzer, "Carl Hempel", §2.1 "The analytic/synthetic distinction", in SEP: "Empiricism historically stands in opposition to Rationalism, which is represented most prominently by Immanuel Kant, who argued that the mind, in processing experiences, imposes certain properties on whatever we experience, including what he called Forms of Intuition and Categories of Understanding. The Forms of Intuition impose Euclidean spatial relations and Newtonian temporal relations; the Categories of Understanding require objects to be interpreted as substances, and causes as inherently deterministic. Several developments in the history of science, such as the emergence of the theory of relativity and of quantum mechanics, undermine Kant's position by introducing the role of frames of reference and of probabilistic causation. Newer versions are associated with Noam Chomsky and with Jerry Fodor, who have championed the ideas of an innate syntax and innate semantics, respectively (Chomsky 1957; Fodor 1975; Chomsky 1986)".
  28. 1 2 Wesley C Salmon, "The uniformity of Nature", Philosophy and Phenomenological Research, 1953 Sep;14(1):39–48, p 39.
  29. Avineri, "Hegel and nationalism", Rev Politics, 1962;24:461–84, p 461.
  30. Delanty, Social Science (U Minnesota P, 1997), pp 26, 29.
  31. 1 2 3 4 5 6 7 8 9 Antony Flew, A Dictionary of Philosophy, 2nd edn (New York: St Martin's Press, 1984), "positivism", p 283.
  32. 1 2 3 4 Michel Bourdeau, "Auguste Comte", in Edward N Zalta, ed, The Stanford Encyclopedia of Philosophy, Winter 2014 edn.
  33. Will Durant, The Story of Philosophy (New York: Pocket Books, 2006), p 458
  34. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Delanty, Social Science (U Minnesota P, 1997), pp 26–27.
  35. 1 2 Blaug, Methodology of Economics, 2nd edn (Cambridge U P, 1992), ch 3 "The verificationists, a largely nineteenth-century story", p 51.
  36. Flew, Dictionary (St Martin's, 1984), "Mill's methods", p 232.
  37. George Cooper, The Origin of Financial Crises: Central Banks, Credit Bubbles and the Efficient Market Fallacy (Hampshire GB: Harriman House, 2008), p 64: "Once again, John Maynard Keynes appears in the story. Following World War I, Keynes became a part of the team responsible for imposing the peace settlement on the defeated Germany. Recognising that the proposed reparations demanded of Germany would bankrupt the country, Keynes resigned his position, and wrote The Economic Consequences of the Peace, explaining the problem. Keynes was ignored, the treaty was imposed, and as predicted, Germany was bankrupted. As part of the reparations process, the German government was forced to pay away a large part of the gold reserves that back its currency. They payments, coupled with the government resorting to printing still more currency, produced a spiralling hyperinflation. The resultant economic collapse is today recognised as being a significant element in the subsequent rise of extremism. In a nutshell—World War II was in part born from poor economic and monetary policy as a result of the treaty which ended WWI, and which Keynes opposed".
  38. Lizzie Buchen, "May 29, 1919: A major eclipse, relatively speaking", Wired, 29 May 2009.
    Donald F Moyer, "Revolution in science: The 1919 eclipse test of general relativity", in Studies in the Natural Sciences: On the Path of Einstein (New York: Springer, 1979), Arnold Perlmutter & Linda F Scott, eds, p 55.
    Fulvio Melia, The Black Hole at the Center of Our Galaxy (Princeton: Princeton University Press, 2003), pp 83–87.
  39. Crelinsten, Einstein's Jury (Princeton U P, 2006), p 28.
  40. Grundmann & Stehr, Power of Scientific Knowledge (Cambridge U P, 2012), pp 77–80.
  41. MS Pernick, "Eugenics and public health in American history", American Journal of Public Health, 1997 Nov;87(11):1767–72.
    Andrew Scull, "Book review: The problem of mental deficiency: eugenics, democracy, and social policy in Britain c 1870–1959", Medical History, 1999 Oct;43(4):527–28.
  42. Positive eugenics seeks to stimulate population growth of desired groups, whereas negative eugenics seeks direct curtailment of undesired groups.
  43. Delanty, Social Science (U Minnesota P, 1997), pp 29–30.
  44. 1 2 3 Godfrey-Smith, Theory and Reality: (U Chicago P, 2003), pp 24–25.
  45. Crucial influences were Wittgenstein's philosophy of language in Tractatus Logico-Philosophicus, Russell's logical atomism, and Mach's phenomenalism as well as Machian positivism.
  46. NonEuclidean geometries—that is, geometry on curved surfaces or in "curved space"—were the first major advances in geometry since Euclid in ancient Greece.
  47. In the 1870s, through vast work, Peirce as well as Frege independently resolved deductive inference, which had not been developed since antiquity, as equivalent to mathematical proof. Later, Frege and Russell launched the program logicism to reconstruct mathematics wholly from logic—a reduction of mathematics to logic as the foundation of mathematics—and thereby render irrelevant such idealist or Platonic realist suppositions of independent mathematical truths, abstract objects real and yet nonspatial and nontemporal. Frege abandoned the program, yet Russell continued it with Whitehead before they, too, abandoned it.
  48. In particular, Einstein's general theory of relativity was their paradigmatic model of science, although questions provoked by emergence of quantum mechanics also drew some focus.
  49. According to an envisioned unity of science, within the empirical sciences—but not the formal sciences, which are abstract—there is fundamental science as fundamental physics, whereas all other sciences—including chemistry, biology, astronomy, geology, psychology, economics, sociology, and so on—are the special sciences, in principle derivable from as well as reducible to fundamental science.
  50. 1 2 3 4 5 Friedman, Reconsidering Logical Positivism (Cambridge U P, 1999), pp 2–5.
  51. 1 2 3 4 5 6 Friedman, Reconsidering Logical Positivism (Cambridge U P, 1999), p xii.
  52. 1 2 3 4 5 Murzi, "Rudolf Carnap", IEP.
  53. Concerning reality, the necessary is a state true in all possible worlds—mere logical validity—whereas the contingent hinges on the way the particular world is.

    Concerning knowledge, the a priori is knowable before or without, whereas the a posteriori is knowable only after or through, relevant experience.

    Concerning statements, the analytic is true via terms' arrangement and meanings, thus a tautology—true by logical necessity but uninformative about the world—whereas the synthetic adds reference to a state of facts, a contingency.

    In 1739, Hume cast a fork aggressively dividing "relations of ideas" from "matters of fact and real existence", such that all truths are of one type or the other. Truths by relations among ideas (abstract) all align on one side (analytic, necessary, a priori). Truths by states of actualities (concrete) always align on the other side (synthetic, contingent, a posteriori). At any treatises containing neither, Hume orders, "Commit it then to the flames, for it can contain nothing but sophistry and illusion".

    Flew, Dictionary (St Martin's, 1984), p 156
    Mitchell, Roots (Wadsworth, 2011), pp 249–50.
  54. Fetzer, "Carl Hempel", §2 "The critique of logical positivism", in SEP: "However surprising it may initially seem, contemporary developments in the philosophy of science can only be properly appreciated in relation to the historical background of logical positivism. Hempel himself attained a certain degree of prominence as a critic of this movement. Language, Truth and Logic (1936; 2nd edition, 1946), authored by A J Ayer, offers a lucid exposition of the movement, which was—with certain variations—based upon the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of meaningfulness".
  55. Challenges to scientific realism are captured succinctly by Bolotin, Approach to Aristotle's Physics (SUNY P, 1998), p 33–34, commenting about modern science, "But it has not succeeded, of course, in encompassing all phenomena, at least not yet. For it laws are mathematical idealizations, idealizations, moreover, with no immediate basis in experience and with no evident connection to the ultimate causes of the natural world. For instance, Newton's first law of motion (the law of inertia) requires us to imagine a body that is always at rest or else moving aimlessly in a straight line at a constant speed, even though we never see such a body, and even though according to his own theory of universal gravitation, it is impossible that there can be one. This fundamental law, then, which begins with a claim about what would happen in a situation that never exists, carries no conviction except insofar as it helps to predict observable events. Thus, despite the amazing success of Newton's laws in predicting the observed positions of the planets and other bodies, Einstein and Infeld are correct to say, in The Evolution of Physics, that 'we can well imagine another system, based on different assumptions, might work just as well'. Einstein and Infeld go on to assert that 'physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world'. To illustrate what they mean by this assertion, they compare the modern scientist to a man trying to understand the mechanism of a closed watch. If he is ingenious, they acknowledge, this man 'may form some picture of a mechanism which would be responsible for all the things he observes'. But they add that he 'may never quite be sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison'. 
In other words, modern science cannot claim, and it will never be able to claim, that is has the definite understanding of any natural phenomenon".
  56. 1 2 Chakravartty, "Scientific realism", §1.2 "The three dimensions of realist commitment", in SEP: "Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at 'face value'. According to realism, claims about scientific entities, processes, properties, and relations, whether they be observable or unobservable, should be construed literally as having truth values, whether true or false. This semantic commitment contrasts primarily with those of so-called instrumentalist epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables".
  57. Okasha, Philosophy of Science (Oxford U P, 2002), p 62: "Strictly we should distinguish two sorts of anti-realism. According to the first sort, talk of unobservable entities is not to be understood literally at all. So when a scientist pus forward a theory about electrons, for example, we should not take him to be asserting the existence of entities called 'electrons'. Rather, his talk of electrons is metaphorical. This form of anti-realism was popular in the first half of the 20th century, but few people advocate it today. It was motivated largely by a doctrine in the philosophy of language, according to which it is not possible to make meaningful assertions about things that cannot in principle be observed, a doctrine that few contemporary philosophers accept. The second sort of anti-realism accepts that talk of unobservable entities should be taken at face value: if a theory says that electrons are negatively charged, it is true if electrons do exist and are negatively charged, but false otherwise. But we will never know which, says the anti-realist. So the correct attitude towards the claims that scientists make about unobservable reality is one of total agnosticism. They are either true or false, but we are incapable of finding out which. Most modern anti-realism is of this second sort".
  58. Chakravartty, "Scientific realism", §4 "Antirealism: Foils for scientific realism", §§4.1 "Empiricism", in SEP: "Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, 'electron' might mean 'white streak in a cloud chamber'), or with demonstrable laboratory procedures (a view called 'operationalism'). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions 'external' to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950)".
  59. Hacohen, Karl Popper: The Formative Years (Cambridge U P, 2000), p 279.
  60. Mary Hesse, "Bayesian methods and the initial probabilities of theories", pp 50–105, in Maxwell & Anderson, eds (U Minnesota P, 1975), p 100: "There are two major contending concepts for the task of explicating the simplicity of hypotheses, which may be described respectively as the concepts of content and economy. First, the theory is usually required to have high power or content; to be at once general and specific, and to make precise and detailed claims about the state of the world; that is, in Popper's terminology, to be highly falsifiable. This, as Popper maintains against all probabilistic theories of induction, has the consequence that good theories should be in general improbable, since the more claims a theory makes on the world, other things being equal, the less likely it is to be true. On the other hand, as would be insisted by inductivists, a good theory is one that is more likely than its rivals to be true, and in particular it is frequently assumed that simple theories are preferable because they require fewer premises and fewer concepts, and hence would appear to make fewer claims than more complex rivals about the state of the world, and hence be more probable".
  61. Karl Popper, The Two Fundamental Problems of the Theory of Knowledge (Abingdon & New York: Routledge, 2009), p 20.
  62. Andrews, Keynes and the British Humanist Tradition (Routledge, 2010), pp 63–65.
  63. Russell, Basic Writings (Routledge, 2009), p 159.
  64. Grover Maxwell, "Induction and empiricism: A Bayesian-frequentist alternative", pp 106–65, in Maxwell & Anderson, eds (U Minnesota P, 1975), pp 111–17.
  65. Wilkinson & Campbell, Philosophy of Religion (Continuum, 2009), p 16.
    Ayer, Language, Truth and Logic, 2nd edn (Gollancz/Dover, 1952), pp 9–10.
  66. Hintikka, "Logicism", in Philosophy of Mathematics (North Holland, 2009), pp 283–84.
  67. Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), pp 24–27.
  68. Torretti, Philosophy of Physics (Cambridge U P, 1999), p 216.
  69. Torretti, Philosophy of Physics (Cambridge U P, 1999), pp 219–21.
  70. Torretti, Philosophy of Physics (Cambridge U P, 1999), pp 226, 228–29.
  71. Poston, "Foundationalism", §b "Theories of proper inference", §§iii "Liberal inductivism", in IEP: "Strict inductivism is motivated by the thought that we have some kind of inferential knowledge of the world that cannot be accommodated by deductive inference from epistemically basic beliefs. A fairly recent debate has arisen over the merits of strict inductivism. Some philosophers have argued that there are other forms of nondeductive inference that do not fit the model of enumerative induction. C S Peirce describes a form of inference called 'abduction' or 'inference to the best explanation'. This form of inference appeals to explanatory considerations to justify belief. One infers, for example, that two students copied answers from a third because this is the best explanation of the available data—they each make the same mistakes and the two sat in view of the third. Alternatively, in a more theoretical context, one infers that there are very small unobservable particles because this is the best explanation of Brownian motion. Let us call 'liberal inductivism' any view that accepts the legitimacy of a form of inference to the best explanation that is distinct from enumerative induction. For a defense of liberal inductivism, see Gilbert Harman's classic (1965) paper. Harman defends a strong version of liberal inductivism according to which enumerative induction is just a disguised form of inference to the best explanation".
  72. Psillos, Phil Q, 1996;46(182):31–47, p 31.
  73. Fetzer, "Carl Hempel", §3 "Scientific reasoning", in SEP: "The need to dismantle the verifiability criterion of meaningfulness together with the demise of the observational/theoretical distinction meant that logical positivism no longer represented a rationally defensible position. At least two of its defining tenets had been shown to be without merit. Since most philosophers believed that Quine had shown the analytic/synthetic distinction was also untenable, moreover, many concluded that the enterprise had been a total failure. Among the important benefits of Hempel's critique, however, was the production of more general and flexible criteria of cognitive significance in Hempel (1965b), included in a famous collection of his studies, Aspects of Scientific Explanation (1965d). There he proposed that cognitive significance could not be adequately captured by means of principles of verification or falsification, whose defects were parallel, but instead required a far more subtle and nuanced approach.

    "Hempel suggested multiple criteria for assessing the cognitive significance of different theoretical systems, where significance is not categorical but rather a matter of degree: 'Significant systems range from those whose entire extralogical vocabulary consists of observation terms, through theories whose formulation relies heavily on theoretical constructs, on to systems with hardly any bearing on potential empirical findings' (Hempel 1965b: 117).

    "The criteria Hempel offered for evaluating the 'degrees of significance' of theoretical systems (as conjunctions of hypotheses, definitions, and auxiliary claims) were (a) the clarity and precision with which they are formulated, including explicit connections to observational language; (b) the systematic—explanatory and predictive—power of such a system, in relation to observable phenomena; (c) the formal simplicity of the systems with which a certain degree of systematic power is attained; and (d) the extent to which those systems have been confirmed by experimental evidence (Hempel 1965b). The elegance of Hempel's study laid to rest any lingering aspirations for simple criteria of 'cognitive significance' and signaled the demise of logical positivism as a philosophical movement.

    "Precisely what remained, however, was in doubt. Presumably, anyone who rejected one or more of the three principles defining positivism—the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of significance—was not a logical positivist. The precise outlines of its philosophical successor, which would be known as 'logical empiricism', were not entirely evident. Perhaps this study came the closest to defining its intellectual core. Those who accepted Hempel's four criteria and viewed cognitive significance as a matter of degree were members, at least in spirit. But some new problems were beginning to surface with respect to Hempel's covering-law explication of explanation, and old problems remained from his studies of induction, the most remarkable of which was known as 'the paradox of confirmation'".
  74. Misak, Verificationism (Routledge, 1995), p viii.
  75. Friedman, Reconsidering Logical Positivism (Cambridge U P, 1999), p 1.
  76. Hacohen, Karl Popper: The Formative Years (Cambridge U P, 2000), pp 212–13.
  77. Fetzer, "Carl Hempel", §2.3 "The verifiability criterion of cognitive significance", in SEP: "Hempel (1950, 1951), meanwhile, demonstrated that the verifiability criterion could not be sustained. Since it restricts empirical knowledge to observation sentences and their deductive consequences, scientific theories are reduced to logical constructions from observables. In a series of studies about cognitive significance and empirical testability, he demonstrated that the verifiability criterion implies that existential generalizations are meaningful, but that universal generalizations are not, even though they include general laws, the principal objects of scientific discovery. Hypotheses about relative frequencies in finite sequences are meaningful, but hypotheses concerning limits in infinite sequences are not. The verifiability criterion thus imposed a standard that was too strong to accommodate the characteristic claims of science and was not justifiable.

    "Indeed, on the assumption that a sentence S is meaningful if and only if its negation is meaningful, Hempel demonstrated that the criterion produced consequences that were counterintuitive if not logically inconsistent. The sentence, 'At least one stork is red-legged', for example, is meaningful because it can be verified by observing one red-legged stork; yet its negation, 'It is not the case that even one stork is red-legged', cannot be shown to be true by observing any finite number of red-legged storks and is therefore not meaningful. Assertions about God or The Absolute were meaningless by this criterion, since they are not observation statements or deducible from them. They concern entities that are non-observable. That was a desirable result. But by the same standard, claims that were made by scientific laws and theories were also meaningless.

    "Indeed, scientific theories affirming the existence of gravitational attractions and of electromagnetic fields were thus rendered comparable to beliefs about transcendent entities such as an omnipotent, omniscient, and omni-benevolent God, for example, because no finite sets of observation sentences are sufficient to deduce the existence of entities of those kinds. These considerations suggested that the logical relationship between scientific theories and empirical evidence cannot be exhausted by means of observation sentences and their deductive consequences alone, but needs to include observation sentences and their inductive consequences as well (Hempel 1958). More attention would now be devoted to the notions of testability and of confirmation and disconfirmation as forms of partial verification and partial falsification, where Hempel would recommend an alternative to the standard conception of scientific theories to overcome otherwise intractable problems with the observational/theoretical distinction".
  78. Caldwell, Beyond Positivism (Routledge, 1994), pp 47–48.
  79. Poston, "Foundationalism", § intro, in IEP: "The Neurath–Schlick debate transformed into a discussion over the nature and role of observation sentences within a theory. Quine (1951) extended this debate with his metaphor of the web of belief in which observation sentences are able to confirm or disconfirm a hypothesis only in connection with a larger theory. Sellars (1963) criticizes foundationalism as endorsing a flawed model of the cognitive significance of experience. Following the work of Quine and Sellars, a number of people arose to defend foundationalism (see section below on modest foundationalism). This touched off a burst of activity on foundationalism in the late 1970s to early 1980s. One of the significant developments from this period is the formulation and defense of reformed epistemology, a foundationalist view that took, as the foundations, beliefs such as there is a God (see Plantinga (1983)). While the debate over foundationalism has abated in recent decades, new work has picked up on neglected topics about the architecture of knowledge and justification".
  80. Novick, That Noble Dream (Cambridge U P, 1988), pp 526–27.
  81. Lipton, "Truth about science", Philos Trans R Soc Lond B Biol Sci, 2005;360(1458):1259–69.
  82. Friedman, Reconsidering Logical Positivism (Cambridge U P, 1999), p 2.
  83. Uebel, "Vienna Circle", §3.3 "Reductionism and foundationalism: Two criticisms partly rebutted", in SEP: "But for a brief lapse around 1929/30, then, the post-Aufbau Carnap fully represents the position of Vienna Circle anti-foundationalism. In this he joined Neurath whose long-standing anti-foundationalism is evident from his famous simile likening scientists to sailors who have to repair their boat without ever being able to pull into dry dock (1932b). Their positions contrasted at least prima facie with that of Schlick (1934) who explicitly defended the idea of foundations in the Circle's protocol-sentence debate. Even Schlick conceded, however, that all scientific statements were fallible ones, so his position on foundationalism was by no means the traditional one. The point of his 'foundations' remained less than wholly clear and different interpretations of it have been put forward. ... While all in the Circle thus recognized as futile the attempt to restore certainty to scientific knowledge claims, not all members embraced positions that rejected foundationalism tout court. Clearly, however, attributing foundationalist ambitions to the Circle as a whole constitutes a total misunderstanding of its internal dynamics and historical development, if it does not bespeak wilful ignorance. At most, a foundationalist faction around Schlick can be distinguished from the so-called left wing whose members pioneered anti-foundationalism with regard to both the empirical and formal sciences".
  84. Novick, That Noble Dream (Cambridge U P, 1988), p 546.
  85. Okasha, Philosophy of Science (Oxford U P, 2002), pp 91–93, esp pp 91–92: "In rebutting the charge that he had portrayed paradigm shifts as non-rational, Kuhn made the famous claim that there is 'no algorithm' for theory choice in science. What does this mean? An algorithm is a set of rules that allows us to compute the answer to a particular question. For example, an algorithm for multiplication is a set of rules that when applied to any two numbers tells us their product. (When you learn arithmetic in primary school, you in effect learn algorithms for addition, subtraction, multiplication, and division.) So an algorithm for theory choice is a set of rules that when applied to two competing theories would tell us which we should choose. Much positivist philosophy of science was in effect committed to the existence of such an algorithm. The positivists often wrote as if, given a set of data and two competing theories, the 'principles of scientific method' could be used to determine which theory was superior. This idea was implicit in their belief that although discovery was a matter of psychology, justification was a matter of logic. Kuhn's insistence that there is no algorithm for theory choice in science is almost certainly correct. Lots of philosophers and scientists have made plausible suggestions about what to look for in theories—simplicity, broadness of scope, close fit to the data, and so on. But these suggestions fall far short of providing a true algorithm, as Kuhn well knew".
  86. Karl Popper, ch 4, subch "Science: Conjectures and refutations", in Andrew Bailey, ed, First Philosophy: Fundamental Problems and Readings in Philosophy, 2nd edn (Peterborough Ontario: Broadview Press, 2011), pp 338–42.
  87. Miran Epstein, ch 2 "Introduction to philosophy of science", in Seale, ed, Researching Society and Culture (Sage, 2012), pp 18–19.
  88. Godfrey-Smith, Theory and Reality (U Chicago P, 2003), pp 57–59.
  89. Massimo Pigliucci, ch 1 "The demarcation problem", in Pigliucci & Boudry, eds, Philosophy of Pseudoscience (U Chicago P, 2013), pp 11–12: "Popper's analysis led him to a set of seven conclusions that summarize his take on demarcation (Popper 1957, sec 1):

    1) Theory confirmation is too easy.

    2) The only exception to statement 1 is when confirmation results from risky predictions made by a theory.

    3) Better theories make more 'prohibitions' (i.e., predict things that should not be observed).

    4) Irrefutability of a theory is a vice, not a virtue.

    5) Testability is the same as falsifiability, and it comes in degrees.

    6) Confirming evidence counts only when it is the result of a serious attempt at falsification (this is, it should be noted, somewhat redundant to statement 2 above).

    7) A falsified theory can be rescued by employing ad hoc hypotheses, but this comes at the cost of a reduced scientific status for the theory in question".
  90. Oswald Hanfling, ch 5 "Logical positivism", in Shanker, ed, Philosophy of Science, Logic and Mathematics (Routledge, 1996), pp 193–94.
  91. Landini, Russell (Routledge, 2011), p 230.
  92. Russell, A History of Western Philosophy (Unwin/Schuster, 1945), pp 673–74: "Hume's skepticism rests entirely upon his rejection of the principle of induction. The principle of induction, as applied to causation, says that, if A has been found very often accompanied or followed by B, then it is probable that on the next occasion on which A is observed, it will be accompanied or followed by B. If the principle is to be adequate, a sufficient number of instances must make the probability not far short of certainty. If this principle, or any other from which it can be deduced, is true, then the causal inferences which Hume rejects are valid, not indeed as giving certainty, but as giving a sufficient probability for practical purposes. If this principle is not true, every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's skepticism is inescapable for an empiricist. The principle itself cannot, of course, without circularity, be inferred from observed uniformities, since it is required to justify any such inference. It must therefore be, or be deduced from, an independent principle not based on experience. To this extent, Hume has proved that pure empiricism is not a sufficient basis for science. But if this one principle is admitted, everything else can proceed in accordance with the theory that all our knowledge is based on experience. It must be granted that this is a serious departure from pure empiricism, and that those who are not empiricists may ask why, if one departure is allowed, others are forbidden. These, however, are not questions directly raised by Hume's arguments. What these arguments prove—and I do not think the proof can be controverted—is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible".
  93. Gillies, in Rethinking Popper (Springer, 2009), pp 103–05.
  94. Mattessich, Instrumental Reasoning and Systems Methodology (Reidel, 1978), pp 141–42.
  95. Okasha, Philosophy of Science (Oxford U P, 2002), p 23, virtually admonishes Popper: "Most philosophers think it's obvious that science relies heavily on inductive reasoning, indeed so obvious that it hardly needs arguing for. But, remarkably, this was denied by philosopher Karl Popper, whom we met in the last chapter. Popper claimed that scientists only need to use deductive inferences. This would be nice if it were true, for deductive inferences are much safer than inductive ones, as we have seen.

    "Popper's basic argument is this. Although it is not possible to prove that a scientific theory is true from a limited data sample, it is possible to prove that a theory is false. ... So if a scientist is only interested in demonstrating that a given theory is false, she may be able to accomplish her goal without the use of inductive inferences.

    "The weakness of Popper's argument is obvious. For scientists are not only interested in showing that certain theories are false. When a scientist collects experimental data, her aim might be to show that a particular theory—her arch-rival's theory, perhaps—is false. But much more likely, she is trying to convince people that her own theory is true. And in order to do that, she will have to resort to inductive reasoning of some sort. So Popper's attempt to show that science can get by without induction does not succeed".

    And yet immediately beforehand, pp 22–23, Okasha explained that when reporting scientists' work, news media ought to report it correctly as attainment of scientific evidence, not proof: "The central role of induction in science is sometimes obscured by the way we talk. For example, you might read a newspaper report that says that scientists have found 'experimental proof' that genetically modified maize is safe for humans. What this means is that the scientists have tested the maize on a large number of humans, and none of them have come to any harm [that the investigators recognized, measured, and reported]. But strictly speaking, this doesn't prove that maize is safe, in the same sense in which mathematicians can prove Pythagoras' theorem, say. For the inference from 'the maize didn't harm any of the people on whom it was tested' to 'the maize will not harm anyone' is inductive, not deductive. The newspaper report should really have said that scientists have found extremely good evidence that the maize is safe for humans. The word proof should strictly be used only when we are dealing with deductive inferences. In this strict sense of the word, scientific hypotheses can rarely, if ever, be proved true by the data".

    Likewise, Popper maintained that, properly, scientists do not try to mislead people into believing that any theory, law, principle, and so on, is proved either naturally real (ontic truth) or universally successful (epistemic truth).
  96. Okasha, Philosophy of Science (Oxford U P, 2002), p 22, summarizes that geneticists "examined a large number of DS sufferers and found that each had an additional chromosome. They then reasoned inductively to the conclusion that all DS sufferers, including the ones they hadn't examined, have an additional chromosome. It is easy to see that this inference is inductive. The fact that the DS sufferers in the sample studied had 47 chromosomes doesn't prove that all DS sufferers do. It is possible, though unlikely, that the sample was an unrepresentative one.

    "This example is by no means isolated. In effect, scientists use inductive reasoning whenever they move from limited data to a more general conclusion, which they do all the time. Consider, for example, Newton's principle of universal gravitation, encountered in the last chapter, which says that every body in the universe exerts a gravitational attraction on every other body. Now obviously, Newton did not arrive at this principle by examining every single body in the whole universe—he couldn't possibly have. Rather, he saw that the principle held true for the planets and the Sun, and for objects of various sorts moving near the Earth's surface. From this data, he inferred that the principle held true for all bodies. Again, this inference was obviously an inductive one: the fact that Newton's principle holds true for some bodies doesn't guaranteed that it holds true for all bodies".

    Some pages later, however, Okasha finds enumerative induction insufficient to explain phenomena, a task for which scientists employ IBE, guided by no clear rules, although parsimony, that is, simplicity, is a common heuristic despite no particular assurance that nature is "simple" [pp 29–32]. Okasha then notes the unresolved dispute among philosophers over whether enumerative induction is a consequence of IBE, a view that Okasha, omitting Popper from mention, introduces by noting, "The philosopher Gilbert Harman has argued that IBE is more fundamental" [p 32]. Yet other philosophers have asserted the converse—that IBE derives from enumerative induction, which is more fundamental—and, although inference could in principle work both ways, the dispute remains unresolved [p 32].
  97. Greenland, "Induction versus Popper", Int J Epidemiol, 1998;27(4):543–8.
  98. Gillies, in Rethinking Popper (Springer, 2009), p 111: "I argued earlier that there are some exceptions to Popper's claim that rules of inductive inference do not exist. However, these exceptions are relatively rare. They occur, for example, in the machine learning programs of AI. For the vast bulk of human science both past and present, rules of inductive inference do not exist. For such science, Popper's model of conjectures which are freely invented and then tested out seems to me more accurate than any model based on inductive inferences. Admittedly, there is talk nowadays in the context of science carried out by humans of 'inference to the best explanation' or 'abductive inference', but such so-called inferences are not at all inferences based on precisely formulated rules like the deductive rules of inference. Those who talk of 'inference to the best explanation' or 'abductive inference', for example, never formulate any precise rules according to which these so-called inferences take place. In reality, the 'inferences' which they describe in their examples involve conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules".
  99. Gauch, Scientific Method in Practice (Cambridge, 2003), p 159.
  100. Gauntlett, Creative Explorations (Routledge, 2007), pp 44–46.
  101. Although unfalsifiability was Popper's criterion of the merely unscientific—which becomes pseudoscientific only when supposedly empirical evidence of its truth is added—it has commonly been misrepresented that unfalsifiability itself was Popper's criterion of the pseudoscientific. As an example of such misstatement, see Massimo Pigliucci, ch 1 "The demarcation problem", in Pigliucci & Boudry, eds, Philosophy of Pseudoscience (U Chicago P, 2013), pp 9–10.
  102. Stahl et al, Webs of Reality (Rutgers U P, 2002), p 180.
  103. See Gauch, Scientific Method in Practice (Cambridge U P, 2003), p 81, as an example.
  104. Bem & de Jong, Theoretical Issues in Psychology (SAGE, 2006), pp 45–47.
  105. Poston, "Foundationalism", § intro, in IEP: "The debate over foundationalism was reinvigorated in the early part of the twentieth century by the debate over the nature of the scientific method. Otto Neurath (1959; original 1932) argued for a view of scientific knowledge illuminated by the raft metaphor according to which there is no privileged set of statements that serve as the ultimate foundation; rather knowledge arises out of a coherence among the set of statements we accept. In opposition to this raft metaphor, Moritz Schlick (1959; original 1932) argued for a view of scientific knowledge akin to the pyramid image in which knowledge rests on a special class of statements whose verification doesn't depend on other beliefs".
  106. Torretti, Philosophy of Physics (Cambridge U P, 1999), p 221: "Twentieth-century positivists would maintain, of course, that the rules of inductive logic are not meant to preside over the process of discovery, but to control the validity of its findings".
  107. Oberheim, Feyerabend's Philosophy (Walter de Gruyter, 2006), pp 80–82.
  108. According to Oberheim, p 80, Feyerabend later alleged that Popper borrowed this insight from, but failed to credit, Pierre Duhem.
  109. Broad, "Paul Feyerabend", Science, 1979;206:534.
  110. Against Method (1975/1988/1993)
    Science in a Free Society (1978)
    Farewell to Reason (1987).
  111. Larvor, Lakatos (Routledge, 1998), p 49.
  112. First edition was Alfred Jules Ayer, Language, Truth and Logic (London: Victor Gollancz Ltd, 1936 / New York: Oxford University Press, 1936); Archive.org makes available a 1971 print, A J Ayer, Language, Truth and Logic (London etc: Penguin Books, 1971).
This article is issued from Wikipedia, version of Friday, January 29, 2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.