Empiricism

In philosophy, empiricism is a theory of knowledge that asserts that knowledge arises from evidence gathered via sense experience. Empiricism is one of several competing views that predominate in the study of human knowledge, known as epistemology. Empiricism emphasizes the role of experience and evidence, especially sensory perception, in the formation of ideas, over the notion of innate ideas or tradition.[1]

In a related sense, empiricism in the philosophy of science emphasizes those aspects of scientific knowledge that are closely related to evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world, rather than resting solely on a priori reasoning, intuition, or revelation. Hence, science is considered to be methodologically empirical in nature.

Etymology

The term "empiricism" has a dual etymology. It comes from the Greek word ἐμπειρία, which translates to the Latin experientia, from which we derive the word experience. It also derives from a more specific classical Greek and Roman usage of empiric, referring to a physician whose skill derives from practical experience as opposed to instruction in theory.[2]

Philosophical usage

John Locke, founder of British empiricism

The term "empirical" was originally used to refer to certain ancient Greek practitioners of medicine (Empiric school) who rejected adherence to the dogmatic doctrines of the day (Dogmatic school), preferring instead to rely on the observation of phenomena as perceived in experience.[2] The notion of tabula rasa ("clean slate" or "blank tablet") dates back to Aristotle, and was developed into an elaborate theory by Avicenna[3] and demonstrated as a thought experiment by Ibn Tufail.[4] The doctrine of empiricism was later explicitly formulated by John Locke in the 17th century. He argued that the mind is a tabula rasa (Locke used the words "white paper") on which experiences leave their marks. Such empiricism denies that humans have innate ideas or that anything is knowable without reference to experience.

According to the empiricist view, knowledge can be properly inferred or deduced only if it is ultimately derived from one's sense-based experience.[5] As a historical matter, philosophical empiricism is commonly contrasted with the philosophical school of thought known as "rationalism", which, in very broad terms, asserts that much knowledge is attributable to reason independently of the senses. However, this contrast is today considered to be an extreme oversimplification of the issues involved, because the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method" of their day. Furthermore, Locke, for his part, held that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone.

Some important philosophers commonly associated with empiricism include Aristotle, Alhazen, Avicenna, Ibn Tufail, Robert Grosseteste, William of Ockham, Francis Bacon, Thomas Hobbes, John Locke, George Berkeley, David Hume, John Stuart Mill, Gilles Deleuze and Félix Guattari.

Scientific usage

A central concept in science and the scientific method is that all evidence must be empirical, or empirically based, that is, dependent on evidence that is observable by the senses. It is differentiated from the philosophic usage of empiricism by the use of the adjective "empirical" or the adverb "empirically". Empirical is used in conjunction with both the natural and social sciences, and refers to the use of working hypotheses that are testable using observation or experiment. In this sense of the word, scientific statements are subject to and derived from our experiences or observations.

In a second sense "empirical" in science and statistics may be synonymous with "experimental". In this sense, an empirical result is an experimental observation. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results in order to engage in reasoned model building and theoretical inquiry.

History

Early empiricism

Aristotle writes of the unscribed tablet, or tabula rasa, in his treatise Περὶ Ψυχῆς (De Anima or On the Soul).

What the mind thinks must be in it in the same sense as letters are on a tablet (grammateion) which bears no actual writing (grammenon); this is just what happens in the case of the mind. (Aristotle, On the Soul, 3.4.430a1).

Besides some arguments by the Stoics and Peripatetics, the Aristotelian notion of the mind as a blank slate went largely unnoticed for more than 1,000 years.

A drawing of Ibn Sina (Avicenna) from 1271

During the 11th century, the theory of tabula rasa was developed more clearly by the Persian Islamic philosopher and physician Ibn Sina (known as "Avicenna" in the Western world). He argued that the "human intellect at birth is rather like a tabula rasa, a pure potentiality that is actualized through education," and that knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts", which is developed through a "syllogistic method of reasoning; observations lead to propositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the material intellect (al-‘aql al-hayulani), that potentiality that can acquire knowledge, to the active intellect (al-‘aql al-fa‘il), the state of the human intellect in conjunction with the perfect source of knowledge."[3]

During the 12th century, the Andalusian Arab philosopher and novelist Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) demonstrated the theory of tabula rasa as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqdhan, in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.[4] A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society.[6]

During the 13th century, St. Thomas Aquinas brought the Aristotelian and Avicennian notions to the forefront of Christian thought. These notions sharply contrasted with the previously held Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body here on Earth (see Plato's Phaedo and Apology, as well as others). St. Bonaventure (also 13th century), one of Aquinas' fiercest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind.

The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei, father of Galileo and inventor of monody, made use of the empirical method in successfully solving musical problems: firstly, problems of tuning, such as the relationship of pitch to string tension and mass in stringed instruments, and to the volume of air in wind instruments; and secondly, problems of composition, through his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used in place of 'experiment' was 'esperienza'. It is known that he was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed., Music and Science in the Age of Galileo Galilei), who may be considered one of the most influential empiricists in history. Through his tuning research, Vincenzo found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (it was the square of the numbers concerned that yielded those musical intervals, not the actual numbers, as had been believed), and through this and other discoveries that demonstrated the fallibility of traditional authorities, a radically empirical attitude developed and was passed on to Galileo, which regarded 'experience and demonstration' as the sine qua non of valid rational enquiry.
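The quantitative point behind the Pythagoras story can be stated in modern terms (an illustrative sketch in today's notation, not Vincenzo's own formulation): by what is now called Mersenne's law, the fundamental frequency f of a stretched string of sounding length L, tension T and linear density μ is

\[ f \;=\; \frac{1}{2L}\sqrt{\frac{T}{\mu}}, \qquad\text{so}\qquad \frac{f_2}{f_1} \;=\; \sqrt{\frac{T_2}{T_1}} \quad \text{at fixed } L \text{ and } \mu. \]

Frequency therefore varies with the square root of tension, so producing an octave (a 2:1 frequency ratio) by changing tension alone requires a 4:1 tension ratio, and a perfect fifth (3:2) requires 9:4; the interval ratios attach to the squares of the numbers, not the numbers themselves, which is why the hammer-and-weight version of the myth cannot be literally correct.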

In the early 17th century, the Polish alchemist and philosopher Michał Sędziwój, who died four years after John Locke was born, asserted in one of his treatises that "experience is the sole teacher of truth".[7]

British empiricism

Earlier concepts of the existence of "innate ideas" were the subject of debate between the Continental rationalists and the British empiricists in the 17th century through the late 18th century. John Locke, George Berkeley, and David Hume were the primary exponents of empiricism.

Responding to the continental "rationalism" most prominently defended by René Descartes (a philosophical approach that should not be confused with rationalism generally), John Locke (1632–1704), writing in the late 17th century in his An Essay Concerning Human Understanding (1689), proposed a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously credited with holding the view that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from Descartes's quest for certainty.

A generation later, the Irish Anglican bishop, George Berkeley (1685–1753), determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) a different, very extreme form of empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it). In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God.[8] Berkeley's approach to empiricism would later come to be called subjective idealism.[9][10]

The Scottish philosopher David Hume (1711–1776) added to the empiricist viewpoint an extreme skepticism that he brought to bear against the accumulated arguments and counterarguments of Descartes, Locke and Berkeley, among others. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience. In particular, he divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations.[11]

David Hume's empiricism led to numerous philosophical schools

Via his skeptical arguments, he maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method: that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument.[11] Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past.[11]

Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt.

Phenomenalism

Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit.[12] According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences.[13] Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist, hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation".[14] Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D.W. Hamlyn:

[Mill] claimed that mathematical truths were merely very highly confirmed generalizations from experience; mathematical inference, generally conceived as deductive [and a priori] in nature, Mill set down as founded on induction. Thus, in Mill's philosophy there was no real place for knowledge based on relations of ideas. In his view logical and mathematical necessity is psychological; we are merely unable to conceive any other possibilities than those that logical and mathematical propositions assert. This is perhaps the most extreme version of empiricism known, but it has not found many defenders.[10]

Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience.[15] The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms.[10][16] Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all".[16] Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction.[10][16][17]

The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data.[18] If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. Remember that the translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man).[19][20]

Logical empiricism

Logical empiricism (also known as logical positivism or neopositivism) was an early 20th century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A.J. Ayer, Rudolf Carnap and Hans Reichenbach. The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (d. 1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect language that would be free of the ambiguities and deformations of natural language which, in their view, gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the analytic (a priori) and the synthetic (a posteriori).[21] On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called verification principle. Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.[22]

In the extreme empiricism of the neopositivists (at least before the 1930s), any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such."[23] The central theses of logical positivism (verificationism, the analytic-synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W.V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had largely run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists.

Integration of empiricism and rationalism

In the late 19th and early 20th century several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions that took place while Charles Sanders Peirce and William James were both at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking.

Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view. Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".[24]

In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (from the Latin cos, cotis, a whetstone), saying that they "put the edge on the maxim of pragmatism". First among these he listed the Peripatetic-Thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique, in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness" (what the Scholastics called its haecceity) that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception.[25][26]

Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism, though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe requires no extraneous trans-empirical connective support",[27] by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James's "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". (His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.)

John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured by scientific instruments, and the results of such measurements generate ideas that serve as instruments for future experimentation.[28] Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.

See also

  • Empirical formula
  • Empirical knowledge
  • Empirical method
  • Empirical relationship
  • Empirical research
  • Empirical validation
  • History of scientific method

Footnotes

  1. Baird, Forrest E.; Walter Kaufmann (2008). From Plato to Derrida. Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 0-13-158591-6. 
  2. Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
  3. Sajjad H. Rizvi (2006), Avicenna/Ibn Sina (CA. 980-1037), Internet Encyclopedia of Philosophy
  4. G. A. Russell (1994), The 'Arabick' Interest of the Natural Philosophers in Seventeenth-Century England, pp. 224-62, Brill Publishers, ISBN 9004094598
  5. Markie, P. (2004), "Rationalism vs. Empiricism" in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
  6. Dr. Abu Shadi Al-Roubi (1982), "Ibn Al-Nafis as a philosopher", Symposium on Ibn al-Nafis, Second International Conference on Islamic Medicine: Islamic Medical Organization, Kuwait (cf. Ibn al-Nafis As a Philosopher, Encyclopedia of Islamic World)
  7. (Polish) Portalwiedzy.onet.pl, Praktyk i mistyk, Andrzej Datko, Wiedza i życie 2008-04-28
  8. Thornton, Stephen (1987) "Berkeley's Theory of Reality" in The Journal of the Limerick Philosophical Society, UL.ie
  9. Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
  10. Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.
  11. Hume, D. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. (Orig. 1748).
  12. Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN.
  13. Marconi, D. (2004), "Fenomenismo", in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy.
  14. Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Raymond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
  15. Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy.
  16. Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131.
  17. Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, pp. 188–189, 191ff.
  18. Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory", Sorites, no. 9, pp. 16–31.
  19. Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
  20. Chisholm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–517.
  21. Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD.
  22. Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari.
  23. Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD.
  24. Ward, Teddy (n.d.), "Empiricism", Eprint.
  25. Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, MA.
  26. Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, MA.
  27. James, William (1911), The Meaning of Truth.
  28. Dewey, John (1906), Studies in Logical Theory.
