Philosophy of science is the study of the assumptions, foundations, and implications of science. The field is defined by an interest in one of a set of "traditional" problems or in central or foundational concerns in science. In addition to these central problems for science as a whole, many philosophers of science consider these problems as they apply to particular sciences (e.g. philosophy of biology or philosophy of physics). Some philosophers of science also use contemporary results in science to draw philosophical morals. Although most practitioners are philosophers, several prominent scientists have contributed to the field and still do.
Issues of ethics, such as bioethics and scientific misconduct, are not generally considered part of philosophy of science but may be studied in ethics or science studies.
Karl Popper contended that the central question in the philosophy of science was distinguishing science from non-science.[1] Early attempts by the logical positivists grounded science in observation while non-science (e.g. metaphysics) was non-observational and hence nonsense.[2] Popper claimed that the central feature of science was that science aims at falsifiable claims (i.e. claims that can be proven false, at least in principle).[3] No single unified account of the difference between science and non-science has been widely accepted by philosophers, and some regard the problem as unsolvable or uninteresting.[4]
This problem has taken center stage in the debate regarding evolution and intelligent design. Many opponents of intelligent design claim that it does not meet the criteria of science and should thus not be treated on an equal footing with evolution.[5] Those who defend intelligent design either defend the view as meeting the criteria of science or challenge the coherence of this distinction.[6]
Two central questions about science are (1) what are the aims of science and (2) how ought one to interpret the results of science? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, a scientific antirealist or instrumentalist argues that science does not aim (or at least does not succeed) at truth and that we should not regard scientific theories as true.[7] Some antirealists claim that scientific theories aim at being instrumentally useful and should only be regarded as useful, but not true, descriptions of the world.[8] More radical antirealists, like Thomas Kuhn and Paul Feyerabend, have argued that scientific theories do not even succeed at this goal, and that later, more accurate scientific theories are not "typically approximately true" as Popper contended.[9][10]
Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of our current theories.[11][12][13][14][15] Antirealists point to the history of science,[16][17] epistemic morals,[8] the success of false modeling assumptions,[18] or criticisms of objectivity broadly termed postmodern as evidence against scientific realism.[19] Some antirealists attempt to explain the success of our theories without reference to truth,[8][20] while others deny that our current scientific theories are successful at all.[9][10]
In addition to providing predictions about future events, we often take scientific theories to offer explanations for those that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what gives a scientific theory explanatory power. One early and influential theory of scientific explanation was put forward by Carl G. Hempel and Paul Oppenheim in 1948. Their Deductive-Nomological (D-N) model of explanation says that a scientific explanation succeeds by subsuming a phenomenon under a general law.[21] Although ignored for a decade, this view was later subjected to substantial criticism, resulting in several widely accepted counterexamples to the theory.[22]
In addition to their D-N model, Hempel and Oppenheim offered other, statistical models of explanation to account for the statistical sciences.[21] These theories have received criticism as well.[22] Salmon attempted to address some of the problems with Hempel and Oppenheim's models by developing his statistical-relevance model.[23][24] Others have suggested that explanation is primarily a matter of unifying disparate phenomena, or of providing the causal or mechanical histories leading up to a phenomenon (or phenomena of that type).[24]
Analysis is the activity of breaking an observation or theory down into simpler concepts in order to understand it. Analysis is as essential to science as it is to all rational enterprises. It would be impossible, for instance, to describe mathematically the motion of a projectile without separating out the force of gravity, angle of projection and initial velocity. Only after this analysis is it possible to formulate a suitable theory of motion.
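As a concrete illustration, the projectile example can be carried through numerically. The following minimal sketch (in Python, with an arbitrarily chosen launch speed and angle, and ignoring air resistance) performs exactly the analysis described: it separates the initial velocity into components set by the angle of projection, applies gravity to the vertical component alone, and recombines the parts into a trajectory:

```python
import math

def projectile_trajectory(v0, angle_deg, g=9.81, steps=5):
    """Print positions of an ideal projectile (no air resistance),
    obtained by analysing the motion into independent horizontal
    and vertical components."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)          # horizontal component of initial velocity
    vy = v0 * math.sin(theta)          # vertical component of initial velocity
    t_flight = 2 * vy / g              # time to return to launch height
    for i in range(steps + 1):
        t = t_flight * i / steps
        x = vx * t                     # uniform horizontal motion
        y = vy * t - 0.5 * g * t * t   # uniformly accelerated vertical motion
        print(f"t={t:5.2f} s  x={x:7.2f} m  y={y:6.2f} m")

projectile_trajectory(v0=20.0, angle_deg=45.0)
```

Only after the motion has been broken into these simpler parts can each part be described by an elementary law and the whole reassembled.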
Reductionism in science can have several different senses. One type of reductionism is the belief that all fields of study are ultimately amenable to scientific explanation. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. The historical event will have been reduced to a physical event. This might be seen as implying that the historical event was 'nothing but' the physical event, denying the existence of emergent phenomena.
Daniel Dennett invented the term greedy reductionism to describe the assumption that such reductionism is always possible. He claims that it is just 'bad science', seeking to find explanations which are appealing or eloquent, rather than those that are of use in predicting natural phenomena.
Arguments made against greedy reductionism through reference to emergent phenomena rely upon the fact that self-referential systems can be said to contain more information than can be described through individual analysis of their component parts. Examples include systems that contain strange loops, fractal organization and strange attractors in phase space. Analysis of such systems is necessarily information-destructive because the observer must select a sample of the system that can be at best partially representative. Information theory can be used to calculate the magnitude of information loss and is one of the techniques applied by Chaos theory.
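To make the information-theoretic point concrete, the following minimal sketch (in Python, with invented microstate data) uses Shannon entropy to quantify how much information is destroyed when a system is described only through a coarse, partial view of its components:

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# A toy "system": observed microstates (hypothetical data).
microstates = ["ab", "ab", "ac", "ba", "bc", "ca", "cb", "cc"]

full = Counter(microstates)                  # full description of the system
coarse = Counter(s[0] for s in microstates)  # analysis keeps only one component

h_full = shannon_entropy(full.values())
h_coarse = shannon_entropy(coarse.values())
print(f"entropy of full description:   {h_full:.3f} bits")
print(f"entropy after coarse-graining: {h_coarse:.3f} bits")
print(f"information lost:              {h_full - h_coarse:.3f} bits")
```

The coarse description can never carry more information than the full one, which is the sense in which such analysis is necessarily information-destructive.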
The most powerful statements in science are those with the widest applicability. Newton's Third Law — "for every action there is an equal and opposite reaction" — is a powerful statement because it applies to every action, anywhere, and at any time.
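Stated in modern notation, the law asserts that for any two interacting bodies A and B,

\[
\vec{F}_{A \to B} = -\,\vec{F}_{B \to A}
\]

that is, the force that A exerts on B is equal in magnitude and opposite in direction to the force that B exerts on A, for any bodies, anywhere, at any time.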
But it is not possible for scientists to have tested every incidence of an action, and found a reaction. How is it, then, that they can assert that the Third Law is in some sense true? They have, of course, tested many, many actions, and in each one have been able to find the corresponding reaction. But can we be sure that the next time we test the Third Law, it will be found to hold true?
One solution to this problem is to rely on the notion of induction. Inductive reasoning maintains that if a situation holds in all observed cases, then the situation holds in all cases. So, after completing a series of experiments that support the Third Law, one is justified in maintaining that the Law holds in all cases.
Explaining why induction commonly works has been somewhat problematic. One cannot use deduction, the usual process of moving logically from premise to conclusion, because there is simply no syllogism that will allow such a move. No matter how many times 17th-century biologists observed white swans, and in how many different locations, there is no deductive path that can lead them to the conclusion that all swans are white. This is just as well, since, as it turned out, that conclusion would have been wrong. Similarly, it is at least possible that an observation will be made tomorrow that shows an occasion in which an action is not accompanied by a reaction; the same is true of any scientific law.
One answer has been to conceive of a different form of rational argument, one that does not rely on deduction. Deduction allows one to formulate a specific truth from a general truth: all crows are black; this is a crow; therefore this is black. Induction somehow allows one to formulate a general truth from some series of specific observations: this is a crow and it is black; that is a crow and it is black; therefore all crows are black.
The problem of induction is one of considerable debate and importance in the philosophy of science: is induction indeed justified, and if so, how?
Induction attempts to justify scientific statements by reference to other specific scientific statements. It must avoid the problem of the criterion, in which any justification must in turn be justified, resulting in an infinite regress. The regress argument has been used to justify one way out of the infinite regress, foundationalism. Foundationalism claims that there are some basic statements that do not require justification. Both induction and falsification are forms of foundationalism in that they rely on basic statements that derive directly from immediate sensory experience.
The way in which basic statements are derived from observation complicates the problem. Observation is a cognitive act; that is, it relies on our existing understanding, our set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. At first sight, the observation does not appear to be 'basic'.
Coherentism offers an alternative by claiming that statements can be justified by their being part of a coherent system. In the case of science, the system is usually taken to be the complete set of beliefs of an individual scientist or, more broadly, of the community of scientists. W. V. Quine argued for a coherentist approach to science, as did E. O. Wilson, though Wilson used the term consilience (notably in his book of that name). An observation of a transit of Venus is justified by its being coherent with our beliefs about optics, telescope mounts, and celestial mechanics. Where this observation is at odds with one of these auxiliary beliefs, an adjustment in the system will be required to remove the contradiction.
“William Ockham (c. 1295–1349) … is remembered as an influential nominalist, but his popular fame as a great logician rests chiefly on the maxim known as Ockham's razor: Entia non sunt multiplicanda praeter necessitatem. No doubt this represents correctly the general tendency of his philosophy, but it has not so far been found in any of his writings. His nearest pronouncement seems to be Numquam ponenda est pluralitas sine necessitate, which occurs in his theological work on the Sentences of Peter Lombard (Super Quattuor Libros Sententiarum (ed. Lugd., 1495), i, dist. 27, qu. 2, K). In his Summa Totius Logicae, i. 12, Ockham cites the principle of economy, Frustra fit per plura quod potest fieri per pauciora.” (Kneale and Kneale, 1962, p. 243)
The practice of scientific inquiry typically involves a number of heuristic principles that serve as rules of thumb for guiding the work. Prominent among these are the principles of conceptual economy or theoretical parsimony customarily placed under the rubric of Ockham's razor, named after the 14th-century Franciscan friar William of Ockham, who is credited with giving the maxim many pithy expressions, not all of which have yet been found among his extant works.[25]
The motto is most commonly cited in the form "entities should not be multiplied beyond necessity", generally taken to suggest that the simplest explanation tends to be the correct one. As interpreted in contemporary scientific practice, it advises opting for the simplest theory among a set of competing theories that have comparable explanatory power, discarding assumptions that do not improve the explanation. The "other things being equal" clause is a critical qualification, which rather severely limits the utility of Ockham's razor in real practice, as theorists rarely if ever find themselves presented with competing theories of exactly equal explanatory adequacy.
Among the many difficulties that arise in trying to apply Ockham's razor is the problem of formalizing and quantifying the "measure of simplicity" implied by the task of deciding which of several theories is the simplest. Although various measures of simplicity have been brought forward as candidates from time to time, it is generally recognized that there is no theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Moreover, it is extremely difficult to identify the hypotheses or theories that have "comparable explanatory power", though it may be readily possible to rule out some of the extremes.

Ockham's razor also does not say that the simplest account is to be preferred regardless of its capacity to explain outliers, exceptions, or other phenomena in question. The principle of falsifiability requires that any exception that can be reliably reproduced should invalidate the simplest theory, and that the next-simplest account which can actually incorporate the exception as part of the theory should then be preferred to the first. As Albert Einstein put it, "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience".
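One arena in which a quantitative "measure of simplicity" has actually been pursued is statistical model selection, where criteria such as the Akaike information criterion (AIC) penalize a model's parameter count against its goodness of fit. The sketch below (in Python, with synthetic data; AIC is offered only as one candidate measure, not as the formalization of Ockham's razor) compares polynomial fits of increasing degree:

```python
import math
import numpy as np

def aic_for_polynomial(x, y, degree):
    """AIC = 2k - 2 ln(L) for a least-squares polynomial fit,
    using the Gaussian maximum likelihood of the residuals."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = len(x)
    sigma2 = np.mean(residuals ** 2)
    log_likelihood = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    k = degree + 1                      # number of fitted parameters
    return 2 * k - 2 * log_likelihood

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)  # truly linear data plus noise

for d in (1, 2, 3, 5):
    print(f"degree {d}: AIC = {aic_for_polynomial(x, y, d):8.2f}")
# Higher-degree polynomials fit the sample slightly better, but the parameter
# penalty typically leaves the simple linear model with the lowest AIC.
```

Notice, however, that the AIC score itself presupposes a particular statistical framework, which illustrates the point above: the measure of simplicity is not theory-independent.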
It is vitally important for science that information about the surrounding world and the objects of study be as accurate and as reliable as possible. To that end, the measurements which are the source of this information must be as objective as possible. Before the invention of measuring tools (weights, meter sticks, clocks, etc.), the only sources of information available to humans were their senses (vision, hearing, taste, touch, sense of heat, sense of gravity, etc.). Because human senses differ from person to person (due to wide variations in personal chemistry, deficiencies, inherited flaws, etc.), there were no objective measurements before the invention of these tools, and consequently no rigorous science.
With the advent of the exchange of goods, trade, and agriculture, there arose a need for such measurements, and science (arithmetic, geometry, mechanics, etc.) based on standardized units of measurement (stadia, pounds, seconds, etc.) was born. To abstract further from unreliable human senses and make measurements more objective, science uses measuring devices (spectrometers, voltmeters, interferometers, thermocouples, counters, etc.) and, more recently, computers. In most cases, the less human involvement in the measuring process, the more accurate and reliable the scientific data. Currently most measurements are made by a variety of mechanical and electronic sensors linked directly to computers, which further reduces the chance of human error or contamination of the information. This has made it possible to achieve astonishing accuracy in modern measurements. For example, the current relative accuracy of measurements of mass is about 10⁻¹⁰, of angles about 10⁻⁹, and of time and length intervals in many cases on the order of 10⁻¹³ to 10⁻¹⁵. This makes it possible to measure, say, the distance to the Moon with sub-centimeter accuracy (see the Lunar laser ranging experiment), to measure the slight movement of tectonic plates with the GPS system to sub-millimeter accuracy, or even to detect variations as small as 10⁻¹⁸ m in the distance between two mirrors separated by several kilometers, three orders of magnitude less than the size of a single atomic nucleus (see LIGO).
A scientific method depends on objective observation in defining the subject under investigation, gaining information about its behavior and in performing experiments. However, most observations are theory-laden – that is, they depend in part on an underlying theory that is used to frame the observations.
Observation involves perception as well as a cognitive process. That is, one does not make an observation passively, but is actively involved in distinguishing the thing being observed from surrounding sensory data. Therefore, observations depend on some underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. More importantly, most scientific observation must be done within a theoretical context in order to be useful. For example, when one observes a measured increase in temperature, that observation is based on assumptions about the nature of temperature and measurement, as well as assumptions about how the thermometer that is used to measure the temperature functions. Such assumptions are necessary in order to obtain scientifically useful observations (such as, "the temperature increased by two degrees"), but they make the observations dependent on these assumptions.
Empirical observation is used to determine the acceptability of some hypothesis within a theory. When someone claims to have made an observation, it is reasonable to ask them to justify their claim. Such a justification must make reference to the theory – operational definitions and hypotheses – in which the observation is embedded. That is, the observation is framed in terms of the theory that also contains the hypothesis it is meant to verify or falsify (though of course the observation should not be based on an assumption of the truth or falsity of the hypothesis being tested). This means that the observation cannot serve as an entirely neutral arbiter between competing hypotheses, but can only arbitrate between the hypotheses within the context of the underlying theory.
Thomas Kuhn denied that it is ever possible to isolate the hypothesis being tested from the influence of the theory in which the observations are grounded. He argued that observations always rely on a specific paradigm, and that it is not possible to evaluate competing paradigms independently. By "paradigm" he meant, essentially, a logically consistent "portrait" of the world, one that involves no logical contradictions and that is consistent with observations that are made from the point of view of this paradigm. More than one such logically consistent construct can paint a usable likeness of the world, but there is no common ground from which to pit two against each other, theory against theory. Neither is a standard by which the other can be judged. Instead, the question is which "portrait" is judged by some set of people to promise the most in terms of scientific “puzzle solving”.
For Kuhn, the choice of paradigm was sustained by, but not ultimately determined by, logical processes. The individual's choice between paradigms involves setting two or more “portraits” against the world and deciding which likeness is most promising. In the case of a general acceptance of one paradigm or another, Kuhn believed that it represented the consensus of the community of scientists. Acceptance or rejection of some paradigm is, he argued, a social process as much as a logical process. Kuhn's position, however, is not one of relativism.[26] According to Kuhn, a paradigm shift will occur when a significant number of observational anomalies in the old paradigm have made the new paradigm more useful. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. A new paradigm is chosen because it does a better job of solving scientific problems than the old one.
That observation is embedded in theory does not mean that observations are irrelevant to science. Scientific understanding derives from observation, but the acceptance of scientific statements is dependent on the related theoretical background or paradigm as well as on observation. Coherentism, skepticism, and foundationalism are alternatives for dealing with the difficulty of grounding scientific theories in something more than observations.
According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, any theory can be made compatible with any empirical observation by the addition of suitable ad hoc hypotheses. This is analogous to the way in which an infinite number of curves can be drawn through any finite set of data points on a graph.
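The curve analogy behind the Duhem–Quine thesis can be made concrete. In the minimal sketch below (in Python, with invented data points), an entire family of rival curves fits the same five observations exactly, because each rival differs from the first by a term that vanishes at every data point, the numerical analogue of an ad hoc auxiliary hypothesis:

```python
import numpy as np

# Hypothetical data: five observations that any rival theory must accommodate.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.2, 2.9, 4.1, 5.0])

# A degree-4 polynomial passes exactly through all five points.
base = np.polyfit(x, y, 4)

def rival(t, c):
    """The base curve plus c times a factor that is zero at every data point."""
    vanishing = np.prod(t - x)          # (t - x0)(t - x1)...(t - x4)
    return np.polyval(base, t) + c * vanishing

for c in (0.0, 0.5, -2.0):
    print(f"c = {c:5.2f}: fits all five points exactly, "
          f"yet predicts y(2.5) = {rival(2.5, c):.3f}")
# Every choice of c agrees with the data perfectly but disagrees everywhere else.
```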
This thesis was accepted by Karl Popper, leading him to reject naïve falsification in favor of a 'survival of the fittest' among scientific theories, where the fittest theory is the most falsifiable one. In Popper's view, any hypothesis that does not make testable predictions is simply not science. Such a hypothesis may be useful or valuable, but it cannot be said to be science. Confirmation holism, developed by W.V. Quine, states that empirical data are not sufficient to make a judgment between theories. In this view, a theory can always be made to fit the available empirical data. However, the fact that empirical evidence does not serve to determine between alternative theories does not necessarily imply that all theories are of equal value, as scientists often use guiding principles such as Ockham's razor.
One result of this view is that specialists in the philosophy of science stress the requirement that observations made for the purposes of science be restricted to intersubjective objects. That is, science is restricted to those areas where there is general agreement on the nature of the observations involved. It is comparatively easy to agree on observations of physical phenomena, harder to agree on observations of social or mental phenomena, and difficult in the extreme to reach agreement on matters of theology or ethics (which thus remain outside the normal purview of science).
In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied with investigating philosophical or foundational problems in particular sciences. The late 20th and early 21st centuries have seen a rise in the number of practitioners of the philosophy of particular sciences.
Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism, the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time).
Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science then began paying increasing attention to developments in biology, from the rise of neo-Darwinism in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. Other key ideas, such as the reduction of all life processes to biochemical reactions and the incorporation of psychology into a broader neuroscience, are also addressed.
Philosophy of mathematics is the branch of philosophy that studies the philosophical assumptions, foundations, and implications of mathematics.
Recurrent themes include the ontological status of mathematical entities, the sources of mathematical knowledge, and the relationship between mathematics and logic.
Philosophy of chemistry considers the methodology and underlying assumptions of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams.
The philosophy of science has centered on physics for the last several centuries, and during the last century in particular, it has become increasingly concerned with the ultimate constituents of existence, or what one might call reductionism. Thus, for example, considerable attention has been devoted to the philosophical implications of special relativity, general relativity, and quantum mechanics. In recent years, however, more attention has been given to both the philosophy of biology and chemistry, which both deal with more intermediate states of existence.
In the philosophy of chemistry, for example, we might ask: given quantum reality at the microscopic level, and given the enormous distances between electrons and the atomic nucleus, how is it that we are unable to put our hands through walls, as the near-emptiness of atoms might seem to suggest we could? Chemistry provides the answer, and so we then ask what it is that distinguishes chemistry from physics.
In the philosophy of biology, which is closely related to chemistry, we inquire about what distinguishes a living thing from a non-living thing at the most elementary level. Can a living thing be understood in purely mechanistic terms, or is there, as vitalism asserts, always something beyond mere quantum states?
Issues in philosophy of chemistry may not be as deeply conceptually perplexing as the quantum mechanical measurement problem in the philosophy of physics, and may not be as conceptually complex as optimality arguments in evolutionary biology. However, interest in the philosophy of chemistry stems in part from the ability of chemistry to connect the "hard sciences", such as physics, with the "soft sciences", such as biology, which gives it a rather distinctive role as the central science.
Philosophy of economics is the branch of philosophy which studies philosophical issues relating to economics. It can also be defined as the branch of economics which studies its own foundations and morality.
Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation: for example, whether mentalism, behaviorism, or some compromise is the most appropriate methodology, and whether self-reports are a reliable data-gathering method.
Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science or philosophy of mind: for example, what a cognitive module is, whether humans are rational creatures, and what innateness amounts to.
Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, evolutionary psychology, and artificial intelligence, questioning what they can and cannot explain in psychology.
Philosophy of psychology is a relatively young field, due to the fact that psychology only became a discipline of its own in the late 1800s. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism.
Neurophilosophy has also become a field of its own through the work of Paul and Patricia Churchland.
A very broad issue affecting the neutrality of science concerns the areas science chooses to explore, that is, which parts of the world and of humanity are studied by science. Since the areas open to scientific investigation are theoretically infinite, the question arises of what science should attempt to investigate or find out.
Philip Kitcher, in his Science, Truth, and Democracy,[27] argues that scientific studies that attempt to show one segment of the population to be less intelligent, less successful, or more emotionally backward than others have a political feedback effect which further excludes such groups from access to science. Such studies thus undermine the broad consensus required for good science by excluding certain people, and so prove themselves, in the end, to be unscientific.
See also The Mismeasure of Man and Nazi eugenics.
Paul Feyerabend argued that no description of scientific method could possibly be broad enough to encompass all the approaches and methods used by scientists. He objected to any prescriptive scientific method on the grounds that such a method would stifle and cramp scientific progress, claiming that "the only principle that does not inhibit progress is: anything goes."[28] His position has met with many opponents; Alan Sokal and Jean Bricmont, for example, authored the essay "Feyerabend: Anything Goes", criticizing what they saw as his belief that science is of little use to society.
In his book The Structure of Scientific Revolutions, Kuhn argues that the processes of observation and evaluation take place within a paradigm. 'A paradigm is what the members of a community of scientists share, and, conversely, a scientific community consists of men who share a paradigm'.[29] On this account, science can be done only as part of a community and is inherently a communal activity.
For Kuhn, the fundamental difference between science and other disciplines is in the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. It is apparent that social factors play an important and direct role in scientific method, but that they do not serve to differentiate science from other disciplines. Furthermore, although on this account science is socially constructed, it does not follow that reality is a social construct. (See Science studies and the links there.) Kuhn’s ideas are equally applicable to both realist and anti-realist ontologies.
There are, however, those who maintain that scientific reality is indeed a social construct, to quote Quine:
Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer … For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.[30]
See also cultural studies.
A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists including Michel Callon, Elihu Gerson, Bruno Latour, John Law, Susan Leigh Star, Anselm Strauss, Lucy Suchman, and others. Some of this work has previously been loosely gathered under actor-network theory. Here the approach to the philosophy of science is to study how scientific communities actually operate.
More recently Gibbons and colleagues (1994) have introduced the notion of mode 2 knowledge production.
Researchers in Information science have also made contributions, e.g., the Scientific Community Metaphor.
In the Continental philosophical tradition, science is viewed from a world-historical perspective. One of the first philosophers to support this view was Georg Wilhelm Friedrich Hegel. Philosophers such as Ernst Mach, Pierre Duhem, and Gaston Bachelard also wrote their works with this world-historical approach to science. Nietzsche advanced the thesis in his Genealogy of Morals that the motive for the search for truth in the sciences is a kind of ascetic ideal.
All of these approaches involve a historical and sociological turn to science, with a special emphasis on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as done in the analytic tradition. Two other approaches to science include Edmund Husserl's phenomenology and Martin Heidegger's hermeneutics.
The largest effect on the Continental tradition with respect to science came from Martin Heidegger's assault on the theoretical attitude in general, which of course includes the scientific attitude. For this reason one could suggest that the philosophy of science in the Continental tradition has not developed much further, owing to an inability to overcome Heidegger's criticism.
Nevertheless, there have been a number of important works, especially those of Alexandre Koyré, a precursor of Kuhn. Another important development was Foucault's analysis of historical and scientific thought in The Order of Things and his study of power and corruption within the "science" of madness.
Several post-Heideggerian authors contributing to the Continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., "Truth and Justification", 1998), Carl Friedrich von Weizsäcker ("The Unity of Nature", 1980), and Wolfgang Stegmüller ("Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie", 1973–1986).
In ancient China, science and technology were subordinate to values. The formless and personal Tao of Confucianism was holistically mingled with forms and usefulness; its philosophy of science and its values, such as moral and pragmatic considerations, thus formed a single entity. Together with its holistic view of health and its pacifism, skepticism of the totalitarian tendencies of Taoism also formed part of its characteristics. Other philosophies in China, such as Mohism, placed more emphasis on technical experimentation.