Talk:Determinism
From Wikipedia, the free encyclopedia
Archives: Talk:Determinism/archive1
[edit] Looking into the future
If determinism is right, you should in theory be able to produce a machine which can calculate the future. But then you would be able to see what you were going to do in the future. Then what if you decide not to do it? What happens then? Does it say anything about that in this article? --212.247.27.110 17:40, 30 August 2007 (UTC)
You are basing this on the premise that hard determinism and free will can co-exist; it is a necessary truth that they cannot. If we accept hard determinism to be true, then all human actions and behaviours are governed by an unbroken chain of prior events just like everything else. This means the ability to 'decide', as you put it, does not exist in any real sense. MikeUpNorth 09:46, 19 October 2007 (UTC)
Still an interesting theory, but one point: this machine would have to have every single piece of information that exists (not an easy feat by any means, and probably impossible, when you consider that energy cannot be created or destroyed, i.e. you would have 2 universes going at once), including the fact that someone built the machine and looked at his/her future, so the outcomes would be the same. —Preceding unsigned comment added by 86.3.147.244 (talk) 16:44, 9 March 2008 (UTC)
[edit] Determinism in Eastern tradition
"A vast auditorium is decorated with mirrors and/or prisms hanging on strings of different lengths from an immense number of points on the ceiling. One flash of light is sufficient to light the entire display since light bounces and bends from hanging bauble to hanging bauble." I just wanted to note that when there are no chemical or nuclear reactions, light in >= light out. Sure the light might bounce around a little, but the total light that reflects off the mirrors to the rest of the room must be equal to or lesser than the light that is produced by the flash. (of course the light's progress may be delayed while bouncing between mirrors.) So this passage is an example whos conclusions are contradictory to reality, which throws me off through the rest of the discussion on "Eastern tradition".
It's a metaphor, you berk - it's a Buddhist teaching, not a scientific proof. Your idiocy gives me a headache. love mookid
[edit] Ancient Determinism?
This article relates determinism to modern science, and to free will. Although it may not have been called determinism, the idea seems to have existed for a long time. The Stoics and many Christians argued that free will is impossible. This is mainly because God should know everything that will happen in the future. In The Bondage of the Will, Martin Luther argues that free will is impossible because of a view similar to determinism.
I would also like to note that with or without determinism, we might not have free will. Determinism isn't really the most powerful argument against free will. Our ideas of cause and effect (even if random) will still pose a problem. --dragonlord 08:54, 4 Feb 2005 (UTC)
- That kind of determinism is usually referred to as fatalism, if I'm not mistaken. Completely agree with you on determinism/indeterminism, either way there's no explaining "free will". Randomness is every bit as opposed to free will as order is. Mackanma 22:55, 7 May 2005 (UTC)
-
- Maybe, but not using Wikipedia's definition of fatalism. The Stoics thought their lives were very important to the universe. A person can't make a difference in the sense that whatever happens is inevitable, but whatever you do helps determine the rest of the world. I don't really see how their determinism is so different from the newer idea other than the fact that it isn't caused by atoms. --dragonlord 04:38, 8 April 2006 (UTC)
- I think there were some ancient Greek philosophies which were determinist. I'll look them up and put something in. I don't think they were very popular before Newtonian physics though. I think compatibilism -v- incompatibilism is the philosophical argument about whether free will and determinism can coexist. WhiteC 00:55, 8 May 2005 (UTC)
There is a related argument, for those who believe in the existence of a soul that is something other than a function of the body. The soul can control the body, but the body cannot control the soul. That's why the soul goes to hell if it wills bad things.
It actually helps to think of these things in terms of a robot and a waldo. (A "waldo" is a servomechanism that is controlled by a human being at the other end of a cable or some other communication channel.) Would it make sense to exact vengeance upon a waldo if the human at the control panel moved his/her remote hand and smashed in the head of a visiting dignitary? Obviously not. The VIP's bodyguards might disable or destroy the waldo, but if they were being rational it would only be to facilitate their getting at the murderous human.
If it doesn't make sense to exact vengeance on a waldo, how about on a robot that is deterministically programmed to fire its weapons on anybody approaching with a drawn gun? A rational response would be to fire the programmer (and/or accuse him/her of murder or manslaughter), and replace the robot's read-only memory that contains the defective programming. Even if the robot were somehow capable of feeling, it wouldn't make sense to torment it. If the person who programmed the robot was also the person who tortured the robot as a form of "justice," that person would be really sick.
How about a robot that has been given heuristic (artificial intelligence) capabilities and has been programmed with one instruction that states:
- Check whether the gold is being stolen. If the gold is being stolen, try something (the last thing that worked or something new), remember what you've tried, and go back to the beginning of this loop.
The robot tries all sorts of things and finally ends up hitting robbers with a crowbar. Some robbers die. The police intervene. The robot protests that he was only doing what his wired-in fundamental command programmed him to do. It would make sense to me to hold the robot's programmer/designer responsible. But suppose that the robot has "human" feelings. The robot can feel pain if we smash its limbs or even if we confine it and thereby frustrate its programmed need to protect the gold. Would it make sense, would it be just, to punish the robot?
The problem with the robot, at this point, is not that it is being controlled from the outside by somebody tapping in code at a keyboard. The problem may not even be that there is a problem with the original programming contained in a few MEGs of ROM in the robot. In fact, if a different sequence of robbers and/or a different sequence of random attempts to secure protection of the gold had occurred, then the robot might have come up with a better way of protecting the gold than by braining robbers with a 10 pound steel rod. If it has dealt with a lot of devious humans trying all sorts of ploys to get at the gold, it may be extremely resistant to attempts to get it to give up the successful use of brute force -- because it has learned that when it gets scammed by a human more gold gets lost.
The problem with the robot, from our point of view, is that it is operating under its own set of rules, rules that we do not like. We have several courses open to us: (1) Destroy the robot. (2) Tear the robot down, extract its ROM and reprogram it. Under either of those options the old robot is essentially gone. (3) Punish the robot. Make it feel pain and make us feel like gods. (4) Throw up our hands and let it continue to do its work. Put up "Beware of the Robot!" signs. What are our other options?
Regardless of how it became what it is here and now, the robot interprets things as it has learned to interpret them and feels whatever it feels. In many respects it is not capable of doing anything about what it is and what it feels. To what extent is it capable of dealing with its imminent demise? In other words, to what degree is its future behavior not boxed in by the things that others have done to it? Well, it is "heuristic," and it is programmed to try other things when what it is doing does not work. It should be clear to the robot that the arrival of law officers with tanks was not anticipated, and that getting blown up, disassembled, or imprisoned will interfere with its achieving its programmed goals.
We can say to the robot, "You're in a bad fix here, feeling pain because you can't protect the gold because you've gotten yourself in trouble with the police. I can fix all that by pulling the plug. Just let me open your control panel and you'll never feel a thing." Or we could offer virtually the same deal to the robot but promise to wipe its memory, edit its basic code, and reactivate it. But suppose the robot says, "I'm just as sentient as you are. I have the same feelings you do. I don't like being in trouble and suffering pain, but I do like being alive and being myself. I suppose you think it is all right to wipe me out, and if you really think about it you'd probably be equally willing to wipe out another human being. Are you willing to make that a general rule and say that any other human being has the right to wipe you out? And maybe robots (and here he cups a plastisteel hand gently over your shoulder) have the right to wipe humans out."
By one interpretation, "free will" is like the will of the human being at the other end of the tether from a waldo. Nothing that happens to the waldo can go back up the wires and take control of the hands and feet of the human being. Control goes all one way. But the problem of free will just gets moved up a notch. Is the human being the waldo of something else? If not, does the human being have the control over himself/herself that he/she had over the waldo?
By another interpretation, "free will" is never an absolute and one-way kind of control. I either am what I am and act out the essence of my being as best I can in this world, or I get destroyed. If I stay in operation, then the question of "free or determined" has three kinds of answers. (1) I am determined as to what I am because I was constituted in the general way that all things in the world get constituted. Causation was never miraculous and never "totally random." (2) In my actions I am partly determined by internal factors (I love butterscotch pudding) and partly determined by external factors (the ingredients for butterscotch pudding can't be had for love or money). But (3) is more interesting. What I do in life can change what I am. Nowadays even if I suffer some disease because of my genetic constitution it may be possible to correct that problem. If I was born blind then soon it may be possible to fit me up with a set of artificial eyes. By so doing I increase my range of freedom, and, similarly, if I get careless and let my arm get cut off I decrease my range of freedom. P0M 19:43, 15 May 2005 (UTC)
[edit] Determinism & Quantum Mechanics
I'm no quantum physicist, but it seems to me like the author of this section has made a critical misunderstanding. The objection to determinism in quantum physics has nothing to do with "uncaused events", nor am I aware of it claiming there are such events (like I said though, I'm no quantum physicist). As I understand it, the major objection to determinism arises from the fact that for determinism to work, you have to be able to obtain absolute information about the state of the universe at any given moment. You have to be able to know every property of every particle. Quantum physics doesn't allow for this - at best you can know the probability that something will have a particular property, but you cannot be absolutely certain, and it is this uncertainty which prevents determinism from working.
Noodhoog 18:15, 8 May 2005 (UTC) (signed after initial posting)
I think you are confusing the knowability of the deterministic state of the universe with its existence. 209.172.115.34 00:47, 22 May 2007 (UTC)
- Please sign your postings. Otherwise you will end up at some point arguing with yourself. ;-)
- Somewhere among my pile of partially consumed books I have a concise statement/argument by a presumably reputable physicist that maintains that a careful study of physics indicates that there could not be a certainty of causation at one level that is permanently masked from observers by difficulties in the observational process. If I recall correctly there were entropy problems involved. Even if I have misremembered the argument and its tightness, there is for sure one position out there that says that uncertainty is in the very nature of things, and that probability is inherent in the very fundamental nature of the universe.
- There is another position that says that there really is a certainty of causation, and yet because we do the equivalent of locating crystal wine goblets by swinging a baseball bat we simply cannot know the state of things at any one time well enough to predict with absolute certainty the state of things at any other time.
- The degree to which seemingly "infinitesimal" differences at time one can produce huge differences at some later time has been greatly illuminated by the discovery of the so-called "chaos" theory. It's a pity that they chose that name for the discovery because it creates the false impression that from a determined state at one point you get a chaotic state at another point. Talking about laminar flow turning suddenly into turbulent and "chaotic" flow at some point only heightens that perception.
- The actual case is that if you start with a certain class of equations that are typically used to model things like weather change, and that are used by choosing some starting values, deriving a set of answers, and then feeding those answers back into the same equation, and so on, it turns out that changing a value by a very slight amount at one stage will result in great differences after several more stages in calculation. The famous example is how the flapping of a butterfly's wings or the non-flapping of the same butterfly's wings will produce or fail to produce a storm (all other initial conditions being equal) sometime down the line.
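- A minimal sketch of that feedback process (my own illustration, not from any source; the map and the starting values are arbitrary stand-ins for a weather-style model):

    # Iterate the logistic map x -> r*x*(1-x) from two starting values that
    # differ by one part in ten billion, and watch the trajectories diverge.
    r = 3.9                      # parameter chosen in the chaotic regime
    x, y = 0.5, 0.5 + 1e-10      # two nearly identical initial conditions

    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
    # After a few dozen perfectly deterministic steps the two runs differ by
    # order one: the rule at each stage is fixed, but the outcome is hostage
    # to how exactly the starting value is known.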
- If you believe that when an electron "hits" a reflective surface there is only one way for it to go (even though we cannot measure its incoming path and velocity without changing what we are measuring), then that electron would only be able to go one place and either trigger or fail to trigger some Rube Goldbergian device that would blow the planet up. But if the electron's motion is in its essence probabilistic then there is no way that the god who fires the electron with exact control of force and direction can tell for sure whether the world ends.
- One system of belief gives us a universe that is causal but ultimately unpredictable, and the other system of belief gives us a universe that is causal and ideally (but not practically) predictable. What would an uncausal universe be like? I guess it would be one in which a firing squad of 100 soldiers with functioning rifles and live ammunition could draw a bead and fire on one man and the man would have some chance of surviving unscratched. In other words it would be a universe in which it would not be possible to realistically assign probabilities to the occurrence or non-occurrence of events.
- An interesting midway point would be a universe in which luck played a part much like what many people used to believe it played, e.g., a world in which whether one gets shot dead by the sheriff is really a 50-50 deal regardless of who the sheriff is, how good a weapon he procures, etc., a world in which baseballs behave like single electrons in a quantum physics experiment. That's a useful thought experiment, I think, because it immediately makes us aware that the real macroscopic world is not much like that, and reminds us that in the real world we can improve rifle designs and improve ammunition designs to the point that over even fairly long distances there is no question but that a bullet will strike a bottle close enough to dead center to destroy it. And that leads us further to wonder whether we can push all the way to the other extreme, the limit case where the bullet will arrive unerringly no matter how small it and its target become and how great may be the distance between them. Quantum physics seems to me to indicate that perfection is impossible. P0M 07:08, 8 May 2005 (UTC)
-
- Thanks for the reminder. I've a tendency to forget to put in my signature :)
- Before I mention anything else, I should say that pretty much all the quantum physics I'll be talking about here is based on the Copenhagen Interpretation, which is the most widely (but not universally) accepted view of QP at present. Obviously, if you base your views on a different model, much of what I have to say here will be incorrect or irrelevant from your POV.
- Without getting drawn into too deep a discussion about this, it seems to me that your planet-busting electron device is largely similar in nature to the Schrodinger's Cat experiment, utilising the predictability or unpredictability (as the case may be) of a quantum-level event to produce (or not) a large-scale event in the "real world", and there have been many articles written on the nature of such experiments.
- Your point about the firing squad is particularly interesting as well, because as I understand it, in our universe it IS theoretically possible for a firing squad of 100 to fire at a man, and for all 100 bullets to miss him, or pass straight through him leaving him unscathed. It's just so mindbogglingly unlikely to ever actually happen that you'd need trillions of firing squads shooting trillions of men every second for trillions of times longer than the age of the universe for it to have a decent statistical chance of actually occurring. Lest claims of such an amazingly small chance of ever seeing this event ever appear as a copout, however, let me direct you to the phenomenon of quantum tunnelling which is this very effect in practice, and without which, such devices as electron tunnelling microscopes would not function. As I understand it, quantum tunnelling does not imply in any way a violation of the cause-effect model, as tunnelling simply occurs at a "probability miss" - i.e. when an electron has, say, an 80% chance of going a particular direction, but goes a different direction instead (which would land in the other 20%)
- This is no less caused than if it had gone in the expected direction, it's simply not the one considered most likely. After all, even if something has a 99.99999999% chance of happening, there's still that tiny chance that it won't.
- I think you are perhaps overestimating the randomness introduced by these features rather too much. It's highly unlikely there will ever be a rifle designed to a degree where quantum uncertainty, rather than, say, wind turbulence, machining precision, user skill, or the flight dynamics of the bullet would be the major factor in the accuracy. Furthermore, if it was accurate to the point where quantum uncertainty was the major deciding factor, those factors would be so tiny as to render the gun pinpoint accurate by anybody's standards. Simply put, quantum uncertainty doesn't tend to have any noticeable impact on the universe at our scale of things, even taking into account chaos theory and its butterfly effect. What it does mean, as I stated before, is that you can never obtain absolute information about the state of the universe at any given moment due to the Heisenberg uncertainty principle, regardless of how sophisticated and precise your measuring instruments are. Because the ability to theoretically (if not practically) obtain such information is a requirement for determinism (as you need it to extrapolate the future), this breaks the deterministic model. Noodhoog 18:15, 8 May 2005 (UTC)
"If it's 'uncausal' then you'd find cases like xxxxx," is not the same statement as "If you find cases like xxxxx then it's uncausal." As you point out, if 100 men in a firing squad loaded real ammunition from different lots of ammunition by different manufacturers, etc., into fully tested real weapons, and they were all crack marksmen and they all shot at a guy who was secured to a stake and he didn't get hit by a single bullet, then that might be the result of chance. And, as you point out, the probability of it happening (even if they were all such lousy shots that there was only a 50-50 chance in each case that they'd hit their target) would be really low. But since we live in a "causal" universe, if that ever happened the first thing the authorities would likely do would be to hold an inquiry regarding the "conspiracy" to make the execution fail. That is to say, humans ordinarily would not accept uncritically the idea that a sparrow took out the first bullet, the primer was bad in bullet in the second gun, etc., etc. They would apply Occam's razor and figure that the most likely explanation was that deliberate human intervention was involved.
Suppose that there were seven prisoners adhered securely to seven posts along the far end of a football field, and one marksman with a top of the line sniper rifle was at this end of the field. It is his regular job to kill one out of seven prisoners and scare the bleep out of the other six. They are all led in blindfolded so they don't know which stake they are being led to. The marksman has a shooting tripod and he has locked his rifle in position perpendicular to the 0 yard line. So he really doesn't have to aim it anymore. In our universe seven men are brought out, the priest reads a prayer, the marksman pulls the trigger. Except for the rare case in which the gun misfires for some reason (bad ammo or whatever), the guy taped to the center post regularly has his head blown off and the others go back to their cells. In another universe, somebody almost always dies, but it is not always the guy on the center stake. What is predictable is not which man gets it in any given execution, but what number of bullet holes end up in each stake after a thousand executions.
In neither universe would any of these executions be regarded as being uncaused.
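As a toy illustration of that second universe (my own sketch with made-up numbers, nothing more), one can simulate a thousand executions in which somebody always dies but the stake that gets hit is drawn at random; the individual outcome is unpredictable while the long-run tally is not:

    import random
    from collections import Counter

    random.seed(1)                          # arbitrary seed, for reproducibility
    stakes = list(range(7))                 # the seven posts along the far end of the field

    # In the hypothetical universe every shot hits exactly one stake, chosen at random.
    hits = Counter(random.choice(stakes) for _ in range(1000))

    for stake in stakes:
        print(f"stake {stake}: {hits[stake]} hits")
    # No single execution is predictable, but the totals all cluster near
    # 1000/7, i.e. about 143, so the distribution of bullet holes is.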
What if the heads of individual humans appeared to spontaneously blow up? Would we regard those "executions" as uncaused events? Would we insist that there must be a cause for these events even if we couldn't see what it might be? (There was actually a case like this back in the beginning of jet fighter use. U.S. jet fighters were being shot out of the sky when it appeared that no enemy aircraft, and even no other aircraft of any stripe, were in the vicinity. The cockpits of the recovered aircraft had all been shattered by what appeared to be machine-gun fire. It being rather close in time to 1947, flying saucers were suspected for a time. But not everybody believed that the fighter planes and pilots were spontaneously sprouting 50 calibre holes. Finally somebody realized that pilots must be shooting off a round of machine-gun fire as part of their exercises and then going into a power dive along the course of their original flight -- and being intercepted by the bullets that they had caught up to. Pilots were advised of the possibility of shooting themselves down, and the mysterious shootings stopped.) If people's heads did start exploding, we would want to know things like whether there was a regular per-month quota, whether there were geographical predictors, etc., etc. It's actually really hard to understand what could be truly regarded as an uncaused event. But to be caused does not necessarily imply that it is to be predictable.
Part of the problem with thinking about these questions is that we unconsciously have a kind of atomistic idea of events. The birth of a baby is "an event". But actually the birth of the baby is part of a continuum of growth that can be traced back through months in the womb and forward through however long the individual continues to live. What we call the cause of an event is merely an earlier portion of an event continuum. A causeless event would be an event that began out of nothing in an instant of time and continued on from there. Maybe it happens occasionally that a space-Rok appears out of the nothingness of interstellar space and goes flying off to gobble space dust or absorb cosmic radiation or whatever space-Roks usually do, but I don't think many people have ever witnessed such an event -- at least at macroscopic levels. If such an event did happen, how would one judge whether it was really uncaused? Surely some people would say that God had indulged himself by making a miracle, and other people would say that the space-Rok boiled up out of some kind of quantum foam. Just because we don't know the reason for some event does not prove that the event is uncaused. On the other hand, discovering that a million or a billion supposedly uncaused events all had causes would not prove that there are no uncaused events.
To bring things back to quantum mechanics, suppose that we have a double-slit experiment going, and we are firing electrons toward the double slits. Experiments show that even if single electrons are fired, these single electrons will "interfere with themselves" in such a way that they will arrive at an array of different but predictable places. Now if we consider "an event" to be arriving at one position or the other, then we could ask whether anything happens that determines which point the actual electron arrives at in each instance. It is demonstrable that there is a probabilistic differentiation of impact points. We can put a detector of some sort on the other side of the slits and we'll detect a characteristic interference pattern that gets clearer and clearer as we fire more and more single electrons through. But is there anything else that can be determined as to why in any particular instance the electron arrives here or there? And, to go back to uncaused events for a moment, if we turn off the electron generator and take it away, does the detector light up occasionally because of the spontaneous generation of electrons?
It seems to me that if we imagine that, in addition to some kind of probabilistic distribution function inherent in the nature of electrons that gets manifested in the double slit apparatus, there is also some kind of "demon" with an atomic scale fly swatter or baseball bat who hits the electrons to make them go toward the appropriate target point, then we involve ourselves in infinite regress, since we need to explain the demon, how the demon operates on the electrons, where the demon gets its probability chart, how it knows how to interpret the probability chart, etc., etc. When we restrict ourselves to what we can determine empirically, electrons hit the slits and fan out in a probabilistic or wave interference-like pattern, and that's it.
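To make the empirical picture concrete, here is a rough sketch (my own, using an idealised two-slit intensity formula rather than data from any real apparatus) of how single, probabilistically placed impacts build up the banded pattern as more electrons are fired:

    import numpy as np

    rng = np.random.default_rng(0)

    # Idealised two-slit screen intensity: cos^2 fringes under a diffraction envelope.
    x = np.linspace(-1.0, 1.0, 400)                      # screen positions (arbitrary units)
    intensity = np.sinc(3 * x) ** 2 * np.cos(10 * np.pi * x) ** 2
    p = intensity / intensity.sum()                      # treat intensity as a probability

    for n in (10, 100, 10_000):                          # fire n single electrons
        impacts = rng.choice(x, size=n, p=p)             # each arrival point drawn at random
        counts, _ = np.histogram(impacts, bins=20, range=(-1, 1))
        print(f"{n:6d} electrons:", counts)
    # With 10 impacts the bands are barely visible; with 10,000 the familiar
    # interference pattern emerges, though no individual impact point is
    # singled out by anything beyond the probability distribution itself.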
If we would be correct to assume that all of our actions are pre-programmed for us as a result of our having been constituted by macroscopic forces that have no probabilistic component, and as a result of outside forces impinging upon us that are likewise devoid of any probabilistic components, would it not be possible for us to decouple ourselves from the past by making random choices on the basis of physically random events like the decay of radioactive elements? If the geiger counter beeps within the next half second I will move to Europe; otherwise I will stay here. P0M 07:56, 9 May 2005 (UTC)
- There is even a website that will supply you with random numbers based on radioactive decay: http://www.fourmilab.ch/hotbits/ But the notion that quantum uncertainty normally plays no role in macroscopic events is unfounded. The sensitive dependence on initial conditions that allows a butterfly wing flap to affect (and even effect) future storms can be extended to the energy from the radioactive decay of an individual atom. Or, if you prefer, such a decay could cause a spike in a butterfly's nerve causing its wing to flap. A perhaps more vivid example of the direct influence of QM randomness on human affairs is cancer. Some fraction of cancers are caused by exposure to ionizing radiation. While it is generally believed that more than one mutation is required to make a cell malignant, it certainly happens from time to time that the final mutation is caused by a single radioactive decay product interacting with a DNA molecule at just the right (or wrong for the patient) place and time. Molecules at room temperature bounce around at a rate of several billion times per second. If the time of the decay of the single atom that generated the cancer's initiating particle had been a nanosecond earlier or later, it is not likely that the cell in question would have been successfully transformed. It is my understanding that there is no theoretical basis for saying when an individual atom of a radioactive isotope will decay, beyond the probability implied by the isotope's half-life, which for naturally occurring isotopes is in the hundreds of millions, if not billions of years. That is true even if all the initial conditions of all the constituent particles of the atom were somehow known to very high accuracy at one point of time, a measurement prohibited by QM. The individual atom was itself created (and its initial conditions set) under conditions, whether in a supernova or a nuclear reactor, where QM effects dominate. To say the exact nanosecond and nanometer time and place of that atom's decay has a cause seems totally lacking in scientific basis. Yet the suffering resulting from that precise event is all too macroscopically real. --agr 10:07, 9 May 2005 (UTC)
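- To put rough numbers on that (a back-of-envelope sketch only; the half-life is an illustrative figure of the order quoted above, not tied to any particular isotope):

    import math

    half_life_years = 7.0e8                    # illustrative: hundreds of millions of years
    seconds_per_year = 3.156e7
    lam = math.log(2) / (half_life_years * seconds_per_year)   # decay constant per second

    for window in (1e-9, 1.0, 3.156e7):        # one nanosecond, one second, one year
        p = 1 - math.exp(-lam * window)        # chance a given atom decays in that window
        print(f"window {window:9.3g} s -> decay probability {p:.3e}")
    # The exponential law is memoryless: the probability for the next nanosecond
    # is the same no matter how long the atom has already sat there, which is why
    # nothing in the atom's past singles out the actual moment of decay.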
-
- Actually there has been determination of the statistical likelihood of quantum uncertainty affecting macroscopic events (without deliberate quantum devices to use it like Schrodinger's Cat)... it is incredibly small (though still greater than zero of course), although I can't remember just how small it is at the moment. I'll have to see if I can find a figure somewhere.
-
- Chaos theory and quantum uncertainty are totally unrelated principles. Chaos theory (unlike some versions of quantum theory) is deterministic; it is just difficult to predict chaotic systems in practice, since it is difficult to determine the starting conditions with the necessary precision. So, this is just an analogy, which should not be stretched too much. WhiteC 02:35, 10 May 2005 (UTC)
Unless they are operating in totally separate universes, they are not "totally" unrelated. The "incredibly small" likelihood of quantum uncertainty affecting macroscopic events may also be manifested in "incredibly small" differences in the states of some system, but the interesting thing to me about the way some of these things work out numerically is that the successor states may eventually diverge substantially. I don't find the fatalistic picture of the universe, according to which the future of everything was immutably determined in the beginning, to be persuasive because (a) it would be impossible to prove that there is no "slop" in the system, and (b) quantum considerations seem to me to indicate that there are cases when indeterminacy would take a hand in the way things went on from that point.
Even if the universe has been de facto deterministic and pre-programmed up to this point, what happens if somebody (acting out of his/her predetermined constitutional proclivities in cooperation with the predetermined stimuli that operate on him/her) decides to do or not do something major in the world depending on a geiger counter dice cast?
Which reminds me. An uncle of mine used to determine his route when taking a walk by tossing a coin at intersections. His idea was probably just to avoid being predictable and getting into a rut. But it taught me to look at random departures from my planned route (colloquially known as "getting lost") as an opportunity to discover something that might be very useful for me. P0M 04:06, 10 May 2005 (UTC)
So that something important does not get lost, let me quote from the beginning of this section:
- The objection to determinism in quantum physics has nothing to do with "uncaused events" nor am I aware of it claiming there are such events.
I think that this criticism of the wording of that passage in the article is valid. It is possible, however, that the original writer was trying to get at something that is valid. If you consider a double-slit experiment, nobody doubts that the fact that light hits the screen on the other side of the slits is because there is an arc lamp or a laser or some other light source on this side of the slits. But if we slow things down so that we can emit photons one at a time, then we discover that sometimes a photon will contribute to one "hot spot" on the screen, and sometimes a photon will contribute to a different "hot spot" on the screen. The question can then be asked: "Does something act to cause the photon to go to one spot or the other?" P0M 04:25, 10 May 2005 (UTC)
-
- Perhaps 'truly random events' instead of 'uncaused events'? To say that the individual electron's path is uncaused seems to assume that because we haven't found the cause yet there isn't one. There seems to be an epistemology (what can we know about causes) versus ontology (what causes are there, regardless of whether we can find them) problem. WhiteC 11:03, 10 May 2005 (UTC)
I think there is an important philosophy (and philosophy of science) issue here. By "philosophical" I don't mean "inconsequential" or "pertaining to metaphysical sandcastles" either. Going at least as far back as Plato and Aristotle we have tried to understand the functioning of the universe in the only way we could at first -- by comparing it to the way humans perceive themselves to do things. Humans throw rocks and break pots, so if a meteorite hits a house we tend to think of there being a hurler out there. We look for a doer every time something appears to have been done -- even if later on in our history we come to understand that gravitational fields can draw a meteor down out of the sky and through somebody's roof. It is extremely difficult for us to put this hylemorphic idea aside, the image that there is an actor who behaves as would a carpenter to impose a plan or idea (morphe) on some wood (hyle).
Take a look at: [1] (select "picture gallery" and then "room one") and ask yourself how somebody would interpret the moving pictures of electron strikes hitting a screen. If I thought I was seeing a picture of bullet holes being made in a wall, I would assume that a rifleman was changing his/her aim and trying to hit a series of tall vertical targets spread out along that wall. If the lines were actually relatively close together I might think I was seeing the normal kind of pattern that a rifle locked in a shooting tripod will make. But that kind of pattern is circular, and if the rifle and ammunition are any good at all there will be more holes the closer you get to the center of the circle. But I would be puzzled by the bands (even if arrangements were made to produce a circular diffraction pattern). "Why is the gun only striking the center of the bullseye and the black bands, but avoiding the white bands? What is causing this phenomenon?" It would be very difficult for me to accept the idea that that is just the way that bullets go out of a gun. If somebody tried to tell me that then I would perhaps say that we should not, "assume that because we haven't found the cause yet there isn't one." But the "cause" will have to be some kind of "imp," something that is not a part of the very nature of the physical apparatus (the slits and the electrons) that we already know is there. The imp would work according to a plan, and the plan would turn out to be the rules of interference. Then the question becomes: Why can't the electrons "follow the plan" on their own? Why can't we accept the idea that that is just the way that electrons (and anything else that small, I guess) behave?
By the way, I don't particularly like the formulation I have quoted, but we have to be able to say it much better. I don't know whether it can be done without going into a long discussion. One unassailable way would be to find a good Einstein quotation or something like that and then direct people to critiques of the quoted point of view. P0M 18:38, 10 May 2005 (UTC)
- I don't like the imp part, you are assuming that the cause must be conscious/external because you can't explain it in any other way. Oh, Einstein didn't like quantum indeterminacy, BTW, and most modern quantum physicists disagree with him. WhiteC 01:13, 11 May 2005 (UTC)
- Okay, look, this has got WAY out of hand. I never brought up 'uncaused events', chaos theory, miniature demons, firing squads or any of the rest of it. My point is very, very simple.
- 1. The Heisenberg Uncertainty Principle prevents you from being able to have absolute information about the state of the universe at any given moment.
- 2. Without absolute information about the state of the universe at any given moment you cannot extrapolate to the future.
- 3. If you cannot extrapolate to the future, you cannot have determinism.
- Ergo, determinism is broken under the present view of quantum mechanics.
- From this point on I'm staying out of it. If the general consensus is that I have a case, then by all means add what I've said to the page. If not, then don't.
- Noodhoog 23:46, 10 May 2005 (UTC)
-
- I agree with points one and two, but not point three. Whether a particular person (or observer) CAN extrapolate the future is irrelevant. The question is whether the future is predetermined regardless of whether any observers can tell what this future is. WhiteC 01:13, 11 May 2005 (UTC)
-
- The whole question has been way out of hand since people started working out the consequences of the original "nonsensical" observation that black bodies absorb heat avidly at all frequencies of light but radiate heat at certain preferred frequencies. Like WhiteC I agree with points 1 and 2 and disagree with 3.
- Schrödinger observed that one can easily arrange his famous thought experiment resulting in what he called "quite ridiculous cases" with "the ψ-function of the entire system having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts." [[2]]
- Many physicists have shied away from the apparent implication of the theory that the "cat" in such a case is neither definitively alive nor definitively dead until somebody opens the box to observe the state of the radiation detector. Schrodinger was, it appears, objecting to the "sensibleness" of the implication as much as anybody.
-
- What I've called the "imp" actually turns up in a much less anthropomorphic way in the discussion of these physicists, as something called "quantum potential", and as scale increases toward our macro scale the power of this imp becomes proportionally so small as to become imperceptible:
- The quantum potential formulation of the de Broglie-Bohm theory is still fairly widely used. For example, the theory is presented in this way in the two existing monographs, by Bohm and Hiley and by Holland. And regardless of whether or not we regard the quantum potential as fundamental, it can in fact be quite useful. In order most simply to see that Newtonian mechanics should be expected to emerge from Bohmian mechanics in the classical limit, it is convenient to transform the theory into Bohm's Hamilton-Jacobi form. One then sees that the (size of the) quantum potential provides a measure of the deviation of Bohmian mechanics from its classical approximation. Moreover, the quantum potential can also be used to develop approximation schemes for solutions to Schrödinger's equation (Nerukh and Frederick 2000). [[3]] (near the end of section 5)
- It seems quite clear to me that the double-slit experiments indicate a very "deterministic" result in the sense that the (sometimes singly in motion) particles that go through the slits will neither arrive at a single point on the screen nor will they be randomly distributed. They will behave in a "lawlike" way. The question, it seems to me, is whether anybody has ever maintained that there is a reason why each one of a succession of particles may be "targeted" on a different maximum on the screen. If there actually is such a reason then the future is predetermined and the fate of the cat was determined at the dawn of creation. If there is no such reason then the fate of the cat depends on luck.
-
- I guess I would prefer to speak of "probabilistically determined events" rather than "uncaused events." P0M 02:32, 11 May 2005 (UTC)
It's not just that the laws of quantum mechanics do not give any "reason" for the specific, as opposed to probabilistic, behavior of an individual particle going through a double slit, quantum mechanics makes statistical predictions that would be violated if some underlying reason unknown to us existed. There have been a number of experiments to verify those predictions and so far they do not appear to be violated, though many physicists believe better experiments are needed to conclusively settle the question. See Bell test experiments. --agr 12:59, 11 May 2005 (UTC)
- Excellent. Thanks. P0M 15:00, 11 May 2005 (UTC)
I have changed the article to explain the physics of where the uncertainties come from. Philosophy of physics is confusing because philosophers don't know enough physics and physicists "shut up and calculate". I am trying to make the physics very clear and let others talk about what it means. For example, the hidden variable people must not know about all the unknown phases or they wouldn't need to add extra things not to know. Anyone who can read the mathematics can clearly see that the time-dependent Schrödinger equation is deterministic in itself, so I think it should be there, even though this is not a mathematical article.
Incidentally, Newton's and his contemporaries' objections to Huygens' wave optics must have been very similar to Einstein's objections to quantum mechanics. Newton believed, correctly it turned out, that light is particles. So if he had ever accepted Huygens' wave explanation he would have had the qualitative essence of quantum mechanics in the 1700s.
I am reading some Max Planck. He was quite confident that physics is deterministic, but made an argument similar to the one in this article that this does not affect moral responsibility. "But this does not in the least invalidate our own sense of responsibility for our own actions." --David R. Ingham 18:07, 15 September 2005 (UTC)
- I am totally lost. Please rephrase "For example, the hidden variable people must not know about all the unknown phases or they wouldn't need to add extra things not to know." P0M 00:27, 16 September 2005 (UTC)
Please see my new article on Philosophical interpretation of classical physics. I put in a link to it. I am just learning how to make things more understandable and apologize that I can't do better yet. Maybe I should remove some material that is covered in the new article. The equation shows explicitly that quantum mechanics, itself, is deterministic, which seems to me key to the whole section. The "qualitative essence of quantum mechanics" is that waves and particles are complementary and simultaneous properties of all nature and not different types of objects, as in classical physics. Newton believed, like Democritus about atoms, without evidence, that light is composed of particles. This must have prevented him from accepting Huygens' wave physics because there would then have been some of the same difficulties that Einstein objected to. I should put in a reference to Messiah. He explains how quantum experiments done with macroscopic equipment have probabilistic results. Sorry I don't have a more recent reference. Messiah takes more space in the book for this than most authors do. --David R. Ingham 20:28, 16 September 2005 (UTC)
[edit] "determination of the statistical likelihood of quantum uncertainty affecting macroscopic events"
I can see that I am not done trying to fix the physics here. This is not a properly stated topic. It is based on the very common misunderstanding that "quantum uncertainty", which I call "classical uncertainty", happens in nature. It comes from the difference between our quantum and macroscopic descriptions of nature.
I am not sure yet whether the subject that this heading was intended to identify makes sense, but, at best, it will take a lot of work.
There also seems to be a question of its relevance to the article. David R. Ingham 17:04, 28 September 2005 (UTC)
- Please give volume and page number to the passage(s) in Messiah that support your point. P0M 09:52, 29 September 2005 (UTC)
[edit] Fundamental Problem Of Understanding Determinism
I am not a scientist or a philosopher - however I believe that the reason that people cannot accept that every decision they make/made is/was determined before it happened (i.e. there is no 'freedom of thought') is because humans are trapped in a fixed time of consciousness and therefore cannot free their minds to the concept of this bigger picture - everything is controlled by where it has come from. This attitude is commonplace in western psychology and is a typically self-centered attitude, based on the human fear of lacking control over their own 'destiny'.
I cannot understand how any study of mathematical laws (real world or otherwise) hopes to generate anything other than a sequence of functions and outputs (or seemingly layered processes) that can be 'untangled' to the root/original function(process) and input; and when allowed to 're-tangle' (ceteris paribus - i.e. under the same conditions) result in exactly the same end product.
mookid 15:00, 25 Jan 2006 (UTC)
- I agree w/ your first paragraph. Regarding your 2nd paragraph: If nature can be modeled as a deterministic system, then it is easier to predict and understand. Mathematical laws would be a part of that modeling. If I know the precise length, temperature and thermal properties of a piece of (say) iron, then I could predict its length at other temperatures, for example. WhiteC 18:24, 26 January 2006 (UTC)
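- As a minimal sketch of what such a deterministic model looks like (my own example; the expansion coefficient is an approximate textbook value for iron):

    def iron_length(length_0_m, temp_0_c, temp_c, alpha_per_c=1.2e-5):
        """Predict the length of an iron bar from a linear thermal-expansion model."""
        # Same inputs always give the same output: the model is deterministic.
        return length_0_m * (1 + alpha_per_c * (temp_c - temp_0_c))

    print(iron_length(1.000, 20.0, 120.0))     # a 1 m bar warmed by 100 C -> about 1.0012 m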
-
- Sorry, I appear to have been unclear - I was suggesting that anyone of the opinion that the world is conditioned by these (predictable?) mathematical laws (i.e. 'scientists') is forced to agree with the concept of determinism; in my view determinism is the underpinning of all science (action and reaction)... or maybe I'm just a confused little boy?
-
- Edit: Hold on! Are you suggesting that just because "no human(s) has existed that can perfectly express all macroscopic reactions in one 'perfect formula'" means that the formula doesn't exist? I hope not - because if you follow the change from newtonian physics to quantum mechanics (which have 'improved' [in terms of exactness] as "models"), and increment this change over time we will get even closer, and closer, and closer.. etc. The fact that our brains might be too small to actually achieve this 'perfect formula' is irrelevant to its existence and (more importantly) our ability to appreciate its existence. mookid 01:30, 07 Feb 2006 (UTC)
- Hear, hear!
-
It's a funny thing, 'free will'. Determinism is criticised for not allowing free will. This is true - but it's not a problem really. Free will is a perception - so long as you think you have it you can survive. It's the 'not knowing' that keeps us going. Assume that you are OK with determinism and you are playing cards (3 card turnover say - like 3 card brag but just turning the cards over to see who wins). You are happy to play, even knowing that the cards have been shuffled and the winner of the next hand is predetermined. You are happy to play because you don't know who is going to win, not because the winner has not been predetermined. The funny thing is that Quantum Mechanics doesn't allow free will either (although you won't get many QM people highlighting this). QM states that you cannot predict (other than using wavefunctions, probability stuff, etc.) the precise location, momentum etc. of a particle, so you cannot predict the exact output from a reaction (other than general probabilities of course). BUT, regardless of whether the outcome is predictable or not, the question is whether the outcome can be affected or influenced by thought. Pub or curry? Determinism states you will choose one, and if we rerun history you will choose the same one (because the particles in your brain will react in the same way as they did the first time - this may or may not be practically predictable, but the result is consistent). QM states the outcome may change when you rerun history - but in BOTH cases the individual has no control over the particle reactions or, in the case of QM, the probabilities (as these are DETERMINED by very accurate formulae). Free will does not exist in either case. PS. Don't get me started on this subject as I can give you lots of theories that counter most if not all of QM (which is a mathematical formula used to make better predictions that some have taken as a literal description of a process - go figure). I'll give you the analogy of the actuary if you want - it's a good one. --WBluejohn 19:37, 2 April 2007 (UTC)
[edit] Quantum vandalism
Using a random number based on radioactive decay obtained from http://www.fourmilab.ch/hotbits/, I have selected a word in this talk page and changed its spelling. The random number (in hexadecimal) was 3D70. I used the low order 10 bits to form a line number (as the page appeared in my text editor, bbedit, which can number lines). The high order digit determined the word in the line to change. --agr 13:45, 11 May 2005 (UTC)
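For anyone curious, the bit-slicing described above works out as follows (my own reconstruction; exactly how the "high order digit" is read off is an assumption on my part):

    hotbits_value = 0x3D70                 # the random number quoted above, in hexadecimal

    line_number = hotbits_value & 0x3FF    # low-order 10 bits -> a line number from 0 to 1023
    word_index = hotbits_value >> 12       # high-order hex digit -> which word in that line

    print(line_number, word_index)
    # The randomness comes entirely from the radioactive-decay source; the
    # arithmetic that maps the bits to a line and a word is deterministic.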
Vandal! --Username132 08:05, 6 February 2006 (UTC)
Comical discussion page to choose to use the word 'random' so freely. That radioactive decay is not random - if you could reverse time and 'undecay' and then 'redecay' under exactly the same conditions (including starting at the same time again) it would have the same value; and is therefore not 'random' at all.
THERE IS AN IMPORTANT DIFFERENCE between something being unpredictable and randomness; the fact that we cannot model something does not mean it's random. Please try and get your head round this - in fact someone might want to clarify what randomness actually is in the article.
mookid 02:34, 7 February 2006 (UTC)
It's true that unpredictability and randomness are different, because randomness refers to an objective quantity, while something can be unpredictable at the same time as it is causally determined. However, not all physicists agree on whether radioactive decay is random. According to quantum mechanics, radioactive decay is undeterminable, but there are different interpretations as to whether it is random. The Copenhagen interpretation argues that it is objectively random, while other theories (most famously, the opinion of Einstein himself) argue against the non-deterministic nature of quantum mechanics. Someone give me a really, really complicated equation to solve this issue, and you'll have won the argument ;) --Celebere
- The equations you are looking for are in the Radioactive decay article. It was realized quite early that the uniform rate of decay of a radioactive substance implied that the time when a particular atom decayed was random. This is the view of most physicists, though there are, as always, a few contrary opinions. That was not the point I was trying to make here, however. My objection was to the notion that quantum randomness has no effect on the macroscopic world, a view adopted by some who accept the standard interpretation of QM, but wish to cling to determinism. Finally, I'd be interested in hearing some scientific way (i.e. an experiment) to distinguish between unpredictability and randomness.--00:47, 9 November 2006 (UTC)
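- A rough simulation of that early observation (my own sketch, with arbitrary numbers): give every atom an identical chance of decaying in each time step, and the population follows a smooth, predictable exponential curve even though which atoms decay, and when, differs from run to run:

    import random

    random.seed(42)                        # arbitrary seed
    survivors = 100_000                    # initial number of atoms
    p_step = 0.01                          # identical per-step decay probability per atom

    for step in range(1, 6):
        decayed = sum(1 for _ in range(survivors) if random.random() < p_step)
        survivors -= decayed
        print(f"after step {step}: {survivors} atoms remain")
    # The totals track 100_000 * 0.99**step closely (the uniform decay rate),
    # while the individual decay times come out scattered at random.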
-
- Are you seriously suggesting that you can't tell the difference between unpredictability and randomness? Randomness means without cause; unpredictability implies a lack of human understanding. Something unpredictable can be random or determined - in fact this is pretty much the crux of the problem of the "free will" vs. determinism debate. mookid
- "Free will" vs Determisim is a bit of a strange comment. QM doesn't allow for free will either - unless you are suggesting that you can consciously affect the probablity of partcle decays or wavefunctions. It would make a mockery of the 'accurate' formula used to make predictions by QM about particle interactions in your brain if you could consciously decide to change the outcome. --WBluejohn 20:15, 2 April 2007 (UTC)
- The usual claim by QM FW proponents is that indeterminism is part of the implementation of the decision-making process, not that there is a Ghost in the machine. See naturalistic libertarianism, Robert Kane. 1Z 20:34, 2 April 2007 (UTC)
- Thanks for that. Can't say I agree though - sounds like a fudge to let determinists take all the FW flak.--WBluejohn 20:43, 2 April 2007 (UTC)
- Yes, in the context we are talking about, the time at which a particular atom decays, I do not understand any scientific difference between unpredictable and random. I don't see any possible experiment to distinguish the two. Perhaps I am missing something.--agr 19:54, 3 December 2006 (UTC)
Unpredictable means the decay is still governed by physical laws. It could be that we don't have enough information to determine when an atom will decay, or that there are, for example, hidden variables. Random, on the other hand, means that there is no reason for it to happen - it can or it can't. This is rather counter-intuitive, in that we need to believe there is a cause for it to decay or not decay, or some mechanism which determines which 'random' outcome takes place. The many-worlds interpretation of quantum physics is one way of resolving this - i.e. 'both happen'.
As to a scientific experiment to distinguish between the two, it's quite impossible. If you don't know why something happens, you can't say whether it was random or not. If you knew that, it wouldn't be uncertain in the first place and the experiment would be meaningless. There is still, however, a difference between the two. Richard001 05:26, 4 December 2006 (UTC)
[edit] Needed citations
[edit] Quantum uncertainty & macroscopic events
WhiteC said:
- Actually there has been determination of the statistical likelihood of quantum uncertainty affecting macroscopic events (without deliberate quantum devices to use it like Schrodinger's Cat)... it is incredibly small (though still greater than zero of course), although I can't remember just how small it is at the moment. I'll have to see if I can find a figure somewhere.
If we could have this citation it would let us argue persuasively, I think, that the butter would fly. Then, to avoid getting hit by somebody who would object to our thinking and call that doing "original research", it would be nice to find a good citation from some reputable Cal Tech physicist. (-; O.K., I'd settle for MIT or even Stanford ;-) P0M 03:01, 13 May 2005 (UTC)
- The book I originally read it in is at the public library checked out by someone else. But I have it on reserve, and it should be about a week or so. WhiteC 01:50, 15 May 2005 (UTC)
- OK. Here is a long quote, but I don't want to rephrase it and leave out anything. It is from pages 191-2 of Quantum Philosophy: Understanding and Interpreting Contemporary Science by Roland Omnes
-
- It is known that quantum mechanics allows for the existence of "tunnel effects" by which an object suddenly changes its state due to a quantum jump, something that would not be possible through a continuous classical transition. Many examples of such an effect are known in atomic and nuclear physics; it is precisely by a tunnel effect that uranium nuclei spontaneously decay, and two protons at the center of the sun may come close enough to start a nuclear reaction.
-
- Even an object as large as the earth may be subject to a tunnel effect, at least in principle. While the sun's gravitational pull prevents the earth from moving away through a continuous motion, our planet could suddenly find itself rotating around Sirius through a tunnel effect. It would be a terrible blow for determinism. We went to bed the previous night expecting the sun to rise the next morning, only to wake up with a view of an even brighter star, which during the night gives way to unknown constellations.
-
- A theory that permits such events to happen may well make us feel uncomfortable. Fortunately, even if determinism is not absolute, the probability of its violation is extremely small. In the present case, the probability for the earth to move away from the sun is so small that to write it down would require 10 to the power (10 to the power 200) zeros to the right of the decimal point. The smallness of such a number staggers the imagination, and no computer could store it in decimal form. For all practical purposes, it is an event that will never take place.
-
- As we move toward smaller objects, the probability of a tunnel effect increases. The probability for a car in a parking lot to move from one parking stall to another by a tunnel effect is as ridiculously small as that of the earth escaping from the sun's pull, but it has fewer zeros already. When my car breaks down, I know better than to blame it on quantum mechanics, the probability is still much too small. I rather look for a deterministic cause that a good mechanic will soon identify. However, as we approach the atomic scale the odds increase and quantum nondeterminism eventually overtakes classical determinism. In short, it is all a matter of scale. There is a continuous and quantitative transition of probabilities from extremely small ones to others that first become non-negligible and later prevail.
- Sorry it took so long to dig up, but hopefully it will be of some use. WhiteC 5 July 2005 03:45 (UTC)
I'm really happy to have this substantiation. Perhaps I won't have to flip quantum coins to unlink myself from my destiny. ;-) P0M 5 July 2005 04:00 (UTC)
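For what it's worth, the figure Omnes quotes can be written compactly. A probability whose decimal expansion has 10^200 zeros after the decimal point before the first significant digit is of order

P ~ 10^(−10^200),

and merely writing it out in decimal would take about 10^200 digits, vastly more than the roughly 10^80 atoms in the observable universe, which is presumably what he means by saying no computer could store it.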
[edit] Difference between unknowable causes and no causes
Patrick Grey Anderson (at the top of this page) said:
- I understand that people point to the seemingly random nature of quantum mechanics as standing in opposition to determinism, but randomness is still restricted by causality. In order for quantum mechanics to be undetermined, a particle would have to move for no reason whatsoever, as opposed to a shift occurring for no reason that we can *measure*.
Brian Greene, in The Fabric of the Cosmos, describes the experiment devised by Bell that reliably puts to rest the objections of Einstein, Podolsky, and Rosen that Patrick reflects in the quotation given immediately above. Greene describes the experiments and the reasoning behind them in a section that culminates on p. 114 of his recent book. "We are forced to conclude that the assumption made by Einstein, Podolsky, and Rosen, no matter how reasonable it seems, cannot be how our quantum universe works." The experiment is quite elegant, and the error in prediction obtained by following the EPR beliefs is not a tiny fraction; it is 16%. P0M 15:56, 13 May 2005 (UTC)
[edit] Assertion that there is a consistent viewpoint on astrophysics
I don't understand the following assertion:
- Different astrophysicists hold different views about precisely how the universe originated (Cosmogony), but a consistent viewpoint is that scientific determinism has held at the macroscopic level since the universe came into being.
Calling something a "consistent viewpoint" is terribly unclear. Consistent with what? Maybe the writer meant "self-consistent," "internally consistent"? And whose viewpoint is this supposed to be? That of one person? That of the majority of astrophysicists? If a citation were provided it would help tighten up this passage. P0M 16:49, 14 May 2005 (UTC)
- That was my writing, and I agree it is pretty weak (but it used to be even worse ;-) ). I have been trying to find a name for this viewpoint without any success. "Internally consistent" would be better, I suppose. I believe that many scientists (not astrophysicists particularly) hold this viewpoint, but I have no idea how one could possibly find this out, so that makes it pretty weak.
- It seems to me to be an internally consistent form of determinism, if one accepts that quantum indeterminism holds at the scale of very small things. I apologize for phrasing it so poorly. I'd appreciate any suggestions for tightening it up. WhiteC 02:03, 15 May 2005 (UTC)
I didn't mean to bark anybody's shins. How about:
- Various astrophysicists may differ about precisely how the universe originated; they hold different theories of or opinions about Cosmogony. But, as a group, they are in general agreement with the idea that since the very beginning of the universe everything has occurred according to the kind of deterministic interrelation among events consistent with quantum physics. P0M 04:09, 15 May 2005 (UTC)
-
- That looks good. Thanks. WhiteC 5 July 2005 02:42 (UTC)
[edit] Infinite series
One section describes something that sounds like the Kantian antinomies, but without mentioning Kant and without being clear enough that I can figure out exactly what is being asserted. I suspect that the actual argument is something like the following:
Assume: All events have causes, and their causes are all prior events.
The picture this gives us is that event A_N is preceded by A_{N-1}, which is preceded by A_{N-2}, and so forth.
Under that assumption, two possibilities seem clear, and both of them question the validity of the original assumption:
- (1) There is an event A_0 prior to which there was no other event that could serve as its cause.
- (2) There is no such event A_0; in other words, every event has a prior event as its cause, and we are presented with an infinite series of causally related events. That infinite series is itself an event, and yet there is no cause for the series as a whole.
P0M 17:34, 14 May 2005 (UTC)
-
- You might make a perfect circle - but you would've started somewhere. I think the answer to A_0 lies somewhere around big bang theory (or at least "matter and antimatter"?), and the "+ve -ve" "god devil" idea from the storybook bible. Interestingly, if you think about both of your arguments against determinism, they lead you to the conclusion that the universe was never created; it was just 'always there'. At this point you're arguing with more than just determinists mookid 01:48 07 Feb 2006 (UTC)
[edit] Illusion of free will due to ignorance
The current article has:
- Dynamical-evolutionary psychology, cellular automata and the generative sciences, model emergent processes of social behaviour on this philosophy, showing the experience of free will as essentially a gift of ignorance or as a product of incomplete information.
This treatment seems to reflect a very strong POV, but, lacking citations, I cannot tell whether this POV belongs to one of us or is instead a reflection of the views of unnamed cellular automata et al. ;-) (Something is wrong with the syntax of the original or cellular automata are getting more uppity than I had suspected.)
Donald Davidson, for one, had a very cogent critique of this kind of analysis, so I suppose the analysis itself must have adherents somewhere. But the writing of our own article on determinism should not speak of these adherents as "showing" that determinism predicates the meaninglessness of ideas of freedom. P0M 19:01, 14 May 2005 (UTC)
[edit] A good point from what used to be the top of the page
I'm trying to clean out issues remaining near the top of this discussion page and then archive the old stuff. I found one point that seems to me to deserve more discussion:
- I just object to statements like, the entirety of space-time came into existence at some point, unless you define this point embedded in a larger space-time outside of our own. The fallacy comes from implying that both the statements, space-time is everything, and something exists outside of space-time, are true. However, I work in computer software and don't do physics (although this is really about philosophy), so maybe I'm just using the wrong kind of logic? Nodem
I don't think you are using the wrong kind of logic at all. In fact, this problem has been a serious source of concern at least since the time of St. Augustine. He tried to work out a consistent view of the differences between the characteristics of a creator God and his creation, and one of the things that occurred to him was that time may have been created along with space and all the things in the universe. St. Thomas put his tremendous intellect to the task of coming up with an internally consistent philosophy that would yet be in accord with the Bible, and he believed that God is perfect and therefore is not limited. It is his creations that are limited. So God must be infinite -- He can't be bounded or limited by space or time or in any other way. When he created the universe he created space and time, so it doesn't make any sense to ask when God decided to create the universe. To do so is to apply the limited concepts appropriate to humans, appropriate to the mundane universe of discourse, to the unlimited. So time has a beginning, and there is a reason for the existence of time, but the reason for time is not something that stands in a temporal sequence.
Physicists came to similar conclusions for much different reasons. When Newton gave human beings his physics, he gave them an enormously successful tool for working on the world. It so precisely predicted the mechanical actions of things that everything seemed to go like clockwork -- better than clockwork, actually. So it was possible to imagine that the affairs of the universe went off like a game played with perfectly elastic and perfectly frictionless spheres on a perfectly flat but bounded table. If the balls were rolling around and bumping into each other then they could continue to do so for an infinite time and that made it possible to imagine that they had been bumping around for an infinite time already. On the grounds of that kind of a picture the only reason to imagine a beginning was a theological reason. If anybody thought about entropy in regard to the universe I guess they just assumed that the effects might catch up with the universe some time so far in the future that it wasn't worth worrying about, and if they did worry about it in context of an infinite prior timeline they perhaps comforted themselves with the idea that there must have been a divine act of creation after all.
Then people discovered that the universe is expanding. To even be able to think about this meant a certain kind of mental preparation such as was provided informally by a little book called Flatland that talked about how a two-dimensional creature would experience life on a flat surface, on a spherical surface, and then on the surface of a sphere that was expanding -- but without causing his own size to expand. In the world of more formal mathematics the same general kind of ideas were developed in non-Euclidean geometries that dared to talk about a higher dimension into which the universe could expand -- so that the universe could expand without the rate of expansion being greater "on the edges" than "in the center." (There's a good discussion of these ideas in George Gamow's book, One, Two, Three...Infinity.) And somehow these ideas came together with the observations of astronomers that indicated that the universe is indeed expanding.
Anyway, to cut all this blathering short, people started to wonder what the movie they were making of the expanding universe would look like if they would run it backwards through the projector. The answer seemed to be inevitable. The stars would grow closer and closer together until at some point they would disappear into a single point. And since Einstein had demonstrated that space and time form a continuum, that meant that if space would disappear at that point, then so would time. Another way to say that is that the operational definition of time involves the observation of the movements of things. When there are no longer things, when there is no longer space that things might move in, that means that we've reached the beginning terminus of time. So, in a rather spooky way, physics points to the "creation" of the universe at a single point in time.
Even within our own universe, it is possible that different regions of the universe could be cut off from each other because, with everything moving away from everything in all directions, the speeds with which stars move away from each other are additive over distance. The more distant stars are from us, the more rapidly they are moved away from us by the expansion of the universe. At some point the sum of these speeds exceeds the speed of light and "news" of whatever happens beyond that point will never reach us. It doesn't mean that these distant parts of the universe cease to exist, it just means that they cease to have any possibility of interacting with us. So even though they are "genetically" related to us, they might as well be in entirely separate universes as far as any practical considerations go.
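The "additive over distance" point is essentially Hubble's law, under which recession velocity grows in proportion to distance:

v = H_0 d.

With a Hubble constant of roughly 70 km/s per megaparsec, the recession velocity formally reaches the speed of light at d = c/H_0, about 14 billion light-years. Whether light from a particular distant galaxy can still reach us depends on the detailed expansion history, but the qualitative conclusion stands: sufficiently remote regions are causally cut off from us.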
If we trace backward to the Big Bang (assuming we're not making some big mistake somewhere) we lose even the theoretical possibility of catching a glimpse of a time before t=0 in our universe. At the same time we accept the idea on the basis of a lot of experience that nothing is uncaused. Put it in slightly different words: There is always a reason why something happens. It could always be otherwise. The millionth swan that I investigate may turn out to be ultraviolet in color instead of white. But right now we are on a pretty good roll as far as uncaused events go. So we think about whether there could be other universes and other time lines that, like ours, start out at a t=0 and move along in a "line" that has nothing to do with ours, no connection to ours. And we can also wonder what conditions might exist in some universe outside of our own that might initiate the big bang that started our universe.
There's a joke that is getting pretty old by now, but I still smile whenever somebody tells about the country bumpkin in New England who is hailed by a passing car from some sophisticated part of the country. Being lost they ask him how to get to Five Corners or whatever it was, and he replies, "You can't get there from here." Hopefully that is untrue in New England, but maybe it is true in regard to vastly separated regions of our own universe. At least I remember that Prof. Greenstein of Cal Tech was reported to be talking about ideas like that in the early 60s, and I doubt that he was their sole author. Perhaps it is also possible that there are other universes that were created in line with the same kind of reasons that lie behind our universe.
There are even people who are now talking about how (theoretically, I suppose) one might recreate the conditions that prevailed just before the Big Bang and therefore produce a new universe that would go off on its own space-time continuum. Being totally cut off from our universe any sentient beings that came into existence in that universe would be forever ignorant of our having touched off their Big Bang.
The original statement was:
- the entirety of space-time came into existence at some point
That statement does indeed play tricks with our minds because it unwittingly throws us into the wrong way of visualizing things. We should be led to visualize something like an ant crawling down a ribbon that is gradually unfurling from a balloon at a rate slower than the balloon is rising into the sky. At some point if the ant keeps walking he is going to come to the end of the ribbon. It's not that he simply comes to a point where a sign says "Go no further!" and the color of the ribbon changes or something like that. There simply is no more ribbon.
How about something like:
- "If we could retrace our steps back through time, we would come to a terminus. We would have run out of space and time at the point at which space ceases to have a volume and any "clock" that existed would have no leeway for its pendulum to move.
I need to come back and edit this later. Somebody remind me. KSchutte 4 July 2005 22:21 (UTC)
[edit] quantum reality
When Dr. Broida explained that the probabilities are due to the interaction of the quantum and classical descriptions, rather than being internal to quantum mechanics, everyone in the statistical mechanics class was satisfied. This was because we were already accepting the quantum description of matter as reality. The fact that probabilities arise when one uses a simplified approximate description (Classical Mechanics) is not disturbing. Being so easily satisfied, physicists don't talk much about the philosophy, and so lay people remain mystified. David R. Ingham
- I think even lay physicists remain mystified. This page should probably be combined and conjoined in some way with the various pages on Deterministic (disambiguation). There's a lot of confusion on many of these pages. The biggest problem is the quantum stuff, where we have different people saying different things about its determinacy. The fact is, some models (such as the Copenhagen thesis) are indeterministic, while others (such as the Bohm interpretation) are deterministic. KSchutte 6 July 2005 16:56 (UTC)
-
- Note: Interpretation of quantum mechanics gives a nice table of which interpretations are deterministic, and which are not. KSchutte 6 July 2005 17:04 (UTC)
[edit] New material
Here is a very good source of what is the current understanding of experts:
Physics Today, April 2006, "Weinberg replies", Steven Weinberg, p. 16, "... but the apparatus that we use to measure these variables—and we ourselves—are described by a wave function that evolves deterministically. So there is a missing element in quantum mechanics: a demonstration that the deterministic evolution of the wave function of the apparatus and observer leads to the usual probabilistic rules."
In principle the answer is given by the correspondence principle, but the details are complicated, so there is not a clear derivation of "the usual probabilistic rules".
This supports the statement that the determinism evident in the time dependent Schrödinger equation carries on to more complicated cases. Less strongly, it supports the view that "reality" is quantum. David R. Ingham 14:43, 10 May 2006 (UTC)
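For readers who want to see the equation being referred to, the time-dependent Schrödinger equation in standard form is

iħ ∂Ψ(t)/∂t = H Ψ(t),

where H is the Hamiltonian operator. It is first-order and linear in time, so a given initial state Ψ(t_0) fixes Ψ(t) at all later times; on the view being discussed here, probabilities enter only when that state is related to measured, classical quantities.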
[edit] Specific formulations and references
Currently, I think this article is missing some important philosophical arguments against determinism, which it mentions, but only in passing. I think a good way to get more references into this article would be to add and expand summaries of the influential arguments by philosophers on this issue, which frequently stand as archetypes of the major positions anyway. For example, I would like to add more about the philosophy of Thomas Hobbes, Immanuel Kant, and David Hume. Hume in particular made an argument that is potentially devastating to determinism and I believe it deserves more time here. --malathion talk 22:29, 17 July 2005 (UTC)
-
- What did Hume ever say that was devastating to hard determinism? In "An Enquiry Concerning Human Understanding" it is pretty clear that when Hume speaks of determinism he is only speaking of the ability to act upon one's desires. Whether those desires themselves are determined is left uncertain. Specifically, he writes: "By liberty, then, we can only mean a power of acting or not acting, according to the determinations of the will; that is, if we choose to remain at rest, we may; if we choose to move, we also may." Later, when discussing the problem of an all-powerful God and human moral responsibility, he writes: "To reconcile the indifference and contingency of human actions with prescience; or to defend absolute decrees, and yet free the Deity from being the author of sin, has been found hitherto to exceed all the power of philosophy." He does point out that you can never prove causal determinism, but that's about it. Patrick Grey Anderson
[edit] Marx
I find no mention of Karl Marx in this page. I wonder whether the connection between 'determinism' and Marx's historical materialism is strong enough to merit a mention. At this point, I merely added 'Historical Materialism' to the 'See Also' list, but I think more should be done if others agree there is such a connection because, IMHO, Marx has probably done as much or more to advance deterministic thinking (especially in the lay population) as any single thinker in recent history. Stancollins 17:43, 22 September 2005 (UTC)
Marx's historical materialism is deterministic, but so are millions of other things. ----Martin
[edit] Determinism and QM
I would like to question the inclusion of the statement:
- "The well known experimental physicist Dr. Herbert P. Broida [1] (1920-1978) taught his statistical mechanics class at The University of California at Santa Barbara that the probabilities arise in the transition from quantum to classical descriptions, rather than within quantum mechanics, as sometimes supposed."
This seems like unsourced oral history, and it goes back three decades. If this position has any support in modern physics, it should be possible to find a more recent published reference.
I also think the statement that the Schrödinger equation is deterministic needs additional clarification. The only way this is relevant to determinism is if one considers a wave function for the entire universe, starting from its inception. Putting aside the impossibility of ever computing such a thing, it would carry the probability density of not just the universe we know but all possible variations. In particular, the probability it would give of the Earth existing as it is now would be infinitesimally small. We would all be Schrödinger cats in such a formulation. --agr 16:04, 4 November 2005 (UTC)
The article that was written in support of that general line of argument has received an "rfd" (request for deletion), and a decision on that matter is pending. Those who advocate deleting the article raise many of the same concerns you raise. I was unable to get clear on what the author was trying to say, and have also been unable to get him to supply any specific citations. The ideas are supposed to be in Albert Messiah's Quantum Mechanics, but it is a two volume text. It is very nicely written and fairly well indexed. I can find nothing to support the contentions mentioned above therein. I would support you if you want to go ahead and delete that material. It can always be reinserted if the author can provide current citations.
If you have expertise, I would appreciate your looking at [[4]] to see whether it has any merit. I have been asking for clarifications since that article appeared, but to no avail. I have been unwilling to see it deleted because the author has qualifications and publications that suggest he may know what he is talking about, and I would not like to see something of value be lost because of difficulty in expressing the ideas in English.
As for computing the wave function for the entire universe, the answer is 40. ;-) P0M 03:14, 5 November 2005 (UTC)
- My written sources say it is 42 :) I did look at [[5]] and I made a comment on one of your entries on the talk page. You had described each particle in a QM system as having its own wave function, with the wave functions interacting from time to time. QM only allows one wave function, in general, that incorporates all the particles in the system and generates probability densities for all possible outcomes. That, I think, is the fallacy in the notion that, since the Schrödinger equation is deterministic, the universe must be. The only wave function that is meaningful to determinism is the aforementioned wave function for the entire universe, integrated forward from the beginning of time. That wave function, if one attempts to take it seriously, includes all possible universes in complete detail. That incomprehensible fuzz is not the universe we know.
- Part of the problem may be that classical physics made claims to be able to predict the future evolution of systems of particles once their initial conditions were known. QM is more modest, only seeking to predict the outcome of measurements, and doing so only as probabilities. Attempting to stretch that model to solve philosophical problems can produce results that seem downright silly. One might opine that when the first sentient being opened its eyes and realized its own existence that the wave function of the universe collapsed to one where that being existed. A more realistic model might note that certain physical events are essentially a measurement, for example the replication of DNA. As each nucleic acid base is assembled there is a small probability that the wrong base will be chosen. That probability may be amplified by the presence of carcinogenic molecules or ionizing radiation but it is still a quantum phenomenon. In that sense, the evolution of life is the outcome of some huge number of individual quantum experiments, each with a randomized outcome. The wave function for the entire system would not predict any one outcome of evolution, but would incorporate all possible outcomes. In my mind (perhaps because I believe I have one), this does not correspond in any meaningful way to determinism. --agr 17:01, 6 November 2005 (UTC)
Thank you very much. I have been working for months to get this thing straightened out. I have been unwilling to say that something is wrong simply because I don't understand it. But what you say above is a clear statement of what I have gleaned from all my reading. P0M 18:47, 6 November 2005 (UTC)
In view of the discussion above, and the discussion that relates to Ingham's article on the Philosophical_interpretation_of_classical_physics assertions such as
Quantum mechanics, in isolation, is equally predictable. However combining the two gives probabilistic predictions.
should be expunged.
Ingham's assertions remain unsupported by citations relevant to the above statement. There are some physicists, such as Bohm, who have tried to assert the existence of "hidden variables" that would explain why what happens is not really probabilistic. That is only one out of several different attempts to explain what happened to our old ideas of cause and effect and/or to restore determinism to its original luster. P0M 00:28, 9 November 2005 (UTC)
Seeing no objection, I am going to delete that part. P0M 02:25, 10 November 2005 (UTC)
I've also deleted, as unsourced or original research, the line: "Since it is not possible to do an experiment without using classical coordinates of bodies and much of nature cannot be explained without quantum mechanics, the probabilities seem unavoidable." P0M 02:29, 10 November 2005 (UTC)
I don't agree that my sentence quoted above is original or in question. On the other hand I think the "hidden variables" theory mentioned in the article is an unsupported minority point of view and does not belong in the article. David R. Ingham 18:42, 15 August 2006 (UTC)
[edit] Questionable passage not improved by recent edit, was it?
The article currently says:
Even so, this does not get rid of the probabilities, because we can't do anything without using classical descriptions, but it assigns the probabilities to the classical approximation, rather than to the --quantum reality -- ++brain++.
Implicit in this statement is the idea that quantum scale events are absolutely deterministic, and that the uncertainty or indeterminacy comes in because of unknown factors that enter when macro scale factors are introduced. (See the history of this article, the additions of Ingham.) If I understand this interpretation of quantum experiments correctly, it would imply that, e.g., in the double-slit experiment the progress of a photon or electron from the emitter through the double slits is absolutely deterministic, but that when the particle shows up on the detection screen its location on that screen is due to unknown factors from the macro scale screen. The language in which Ingham has discussed his additions here and elsewhere has not been sufficiently clear to me to enable me to do more than take a stab at stating in other words passages such as the one in question here. However, replacing "quantum reality" with "brain" does not appear to me to be an improvement. Are there people who assert that the diffraction pattern formed in a double-slit apparatus is due to the brain? I think not. P0M 08:10, 17 November 2005 (UTC)
[edit] Ordering of Sections
I think this article would improve in whatever small way if the 'arguments against' section were at the very end, as is the format in the rest of the wikiverse. Capone 08:22, 17 November 2005 (UTC)
- Agreed - I've changed the article to reflect this. Visual Error 00:00, 23 January 2006 (UTC)
[edit] Determinism and generative processes
In regard to the position discussed in this section of the article, I am wondering whether any philosopher has investigated the impact that linking non-deterministic processes to the decision process would have. For instance, one engaging in some pursuit might wish to avoid unconscious stereotypical responses. S/he thinks, "I always do the same thing in this situation. I never sing on the bus. If I decide to try singing on the bus, I may pick my shots and avoid the very situation in which singing on the bus would possibly produce interesting results. So I will choose to sing only if my geiger counter (set very low) chirps." I can see where such a procedure might be very valuable, heuristically, but I'm not sure that it changes anything in regard to the free will quotient. But if learning is a causal factor in future action, then the experimenter in this situation would seem to have the chance to add something to those causal factors that could not have been programmed in from the beginning of time. P0M 06:48, 20 November 2005 (UTC)
What happened to citations 7 and 8 in "Determinism and generative processes"?
[edit] Multi-deterministic position?
This section seems to me to be the weakest in this article. I've cleared up the anthropocentrism of its language and attempted to lay it out more neatly, but this seems to smack of a personal commentary, rather than a summary of a position advanced by one or more philosophers or scientists concerning determinism. For one thing, 'multideterminism' isn't a recognised term in this field (it isn't in the dictionary either); I've seen it used as a psychiatric term but I get the feeling that here it's an ad hoc construction used to title someone's personal theory. Certainly, given the apparent lack of proper referential support, it seems strange to dedicate an entire subsection of the article to it.
Basically, this section needs references and external support if it is not to drag down the general tone of the article. The references to dualism are also slightly problematic, given the general disfavour which this theory of mind has amongst philosophers today. Any suggestions? Could it be subsumed into another section/strand of discussion? In all honesty I don't think it should even be in the article until it's got proper references and sources to back it up. Visual Error 16:08, 20 January 2006 (UTC)
- I favor deleting it, though this is partly my personal bias. For me, science does not need references if it is not disputed, but discussion of "souls of conscious beings" and "the creator of the universe" doesn't make sense unless I know who thinks that way. David R. Ingham 20:36, 20 January 2006 (UTC)
-
- Interesting. I never heard of deterministic souls before (although I suppose religious predestination would require it)--what do they do? I could be misunderstanding the section. It sounds like original research, but perhaps I should ask over in theology. WhiteC 23:13, 20 January 2006 (UTC)
-
-
- If it is entirely original research, surely it doesn't belong in Wikipedia. I have no problem with someone citing original research by a recognised authority, but I don't think it has any place being presented in Wikipedia (or, indeed, any encyclopedia article). If no one presents any objections, I'm going to remove the offending section. Visual Error 19:21, 21 January 2006 (UTC)
-
- I filled out that section, which is a report on the mainstream position of Christian philosophers in the Middle Ages, and also has resonances in one early Chinese school. It is not original research to report on information that is common knowledge among students of the history of philosophy, especially when references are provided. "René Descartes continues a train of thought that starts at least as early as Duns Scotus and runs through Suarez to affirm that 'the will is by its nature so free that it can never be constrained. (Passions of the Soul, I, art. 41). [[6]]" The Chinese matter is explicated at some length in D.C. Lau's "Introduction" to his translation of that book, p. 28ff., and references can be provided to the Mencius too. P0M 21:32, 10 February 2006 (UTC)
-
- See W.D. Ross's Aristotle, p. 82f, where he describes The Philosopher's teaching that acts of free will are uncaused causes in the Universe. P0M 17:09, 11 February 2006 (UTC)
[edit] http://en.wikipedia.org/w/index.php?title=Determinism&curid=47922&diff=39024682&oldid=38576430
I don't understand where you got the need for a god to start the universe, in the first place. Isn't the universe easier to start than a god?
- Is this a reference back to deism? WhiteC 19:39, 10 February 2006 (UTC)
- I removed reference to God, and replaced it with first cause. Is this theory supported by anyone worth mentioning in an encyclopedia, or should it stand on its own merits? WhiteC 15:25, 11 February 2006 (UTC)
-
- Somebody with a set of Copleston's history of philosophy series could easily come up with a number of citations. Of course Thomas Aquinas http://www.newadvent.org/summa/104401.htm is out of date, Aristotle is even more out of date, and Plato is hardly worth mentioning. ;-) P0M 16:54, 11 February 2006 (UTC)
[edit] Determinism, quantum mechanics and classical physics
I edited this section, which is a sub heading of "Arguments against determinism" to reflect more of a mainstream approach. I took out a paragraph on moral choice, which has nothing to do with physics. I also took out the bit about what Professor B. once taught his class. --agr 21:25, 15 February 2006 (UTC)
That quantum mechanics is irrelevant to "moral choice" has been mentioned prominently in the literature, but I don't recall where I last saw it, at this moment. The argument goes something like this: since the future is only theoretically predictable, given infinite computing power and information, physical determinism has no bearing on moral determinism. The "Physics and the Real World" article (though it was wrong on physical determinism) pointed out that homeostasis is one of the principles that separate microscopic physics from daily experience. That is, the presence of negative feedback on many levels simplifies everyday life in a way that is too complicated to explain directly with physics, without the help of biology. David R. Ingham 05:42, 11 March 2006 (UTC)
If the edit you refer to is [7], then it does include some worthwhile deletions. David R. Ingham 06:08, 11 March 2006 (UTC)
Of course, the "Dr. Herbert P. Broida" quote is not the best reference, but would establish the fact that the probabilities are due to the classical approximation and not to qm in isolation, if other sources were not available. It was my best source at that time and where I first learned it. David R. Ingham 06:08, 11 March 2006 (UTC)
- Since you are the only witness who has come forth with this point of evidence, in the spirit of having an "interpersonal object" as the basis of inquiry, please supply a reference to a published source. ~~
[edit] http://en.wikipedia.org/w/index.php?title=Determinism&diff=43220914&oldid=42571548
This may not have been a legitimate edit, but the deleted part does not make much sense to me. David R. Ingham 05:25, 11 March 2006 (UTC)
- It seems to be perfectly clear both in terms of individual words chosen, and in terms of the way those words are arranged into sentences. It also seems to me to have something interesting and relevant to say. Rather than deleting such a passage, it would be better to bring it up for discussion, quote it here, and explain in concrete terms what you think the problem is. P0M 14:48, 11 March 2006 (UTC)
What I meant to say is that who made this edit and why was not clear to me, but I did not actually disagree with the deletion, so I didn't revert it. David R. Ingham 18:11, 12 March 2006 (UTC)
[edit] Questionable paragraph
For notions that the wave function is computable to rescue determinism, one must envision a single wave function for the entire universe, starting at the big bang. Putting aside the fact that computing the wave function for something as simple as a single uranium atom is far beyond any known technology and that the initial conditions at the big bang can never be known, such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that might have existed. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. The "wave function of everything" would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all. Even if one accepted such a "wave function of everything" as meaningful, it is difficult to understand how it could be reconciled with the concept of determinism.
" It stars alright, but then assumes the Many World idea. There must be resemblance between what did happen and what could have, and, since the quantum state includes everything that still exists, it must be there. Admittedly, qm provides a version of determinism that would only be of use to God, but does not allow Him to interfere with us. What all of this shows is that theoretical determinism in physics is not relevant to questions like moral responsibility. David R. Ingham 07:33, 11 March 2006 (UTC)
- I have put the quoted texts into "blockquotes" since as it was before the original passage ran almost seamlessly into the critique.
- The passage does not assume the "Many World idea."
- You say: "There must be resemblance between what did happen and what could have, and, since the quantum state includes everything that still exists, it must be there." I do not understand whether this sentence is intended as a statement of fact, a sarcastic paraphrase of the presumed belief or interpretation of the original passage, or what. You then continue with what I regard as a non sequiter. "Admittedly, qm provides a version of determinism that would only be of use to God, but does not allow Him to interfere with us." I doubt that you have inquired of God how his attempts to perform miracles have fared, so what can be the grounds for your assertion? You finish with another non sequiter that at least insinuates that whether I was pre-programmed from time 0 to kill and eat Bambi, I am in either case "morally responsible."
- The problem with the passage, as I read it, rests with its lack of citations. As is, it stands as an interesting piece of "personal research" at best. It could only be correctly written as a reflection of "Bohr, Born, and Bobrick," since it is not, afaik, even as widespread an opinion as the Copenhagen Interpretation. P0M 15:05, 11 March 2006 (UTC)
-
- I added the paragraph in question to balance statements in the article such as "So quantum mechanics is deterministic, provided that one accepts the wave function itself as reality..." I'd be happy to add citations, but I am not clear as to which parts of that paragraph are being questioned. Is it the statement that "For notions that the wave function is computable to rescue determinism, one must envision a single wave function for the entire universe, starting at the big bang?" If we are not talking about the wave function for the entire universe, then it seems to me to be up to the proponents of determinism to make clear which wave function they have in mind. The fact that a wave function carries the probabilities of all possible outcomes is pretty standard physics. I don't think it depends on any particular interpretation. Many cosmologists attribute large scale structure of the universe to quantum fluctuations shortly after the big bang. I'm not saying it's a settled matter, but I'll look for a suitable cite.--agr 00:00, 12 March 2006 (UTC)
The paragraph is not so bad, but something needs fixing. I admit that my "God" part is unusual. I am discussing the difference between what could in principle be predicted and what is reasonable to talk about predicting. Some explanations of the Judeo-Christian-Muslim god attribute unlimited knowledge, and unlimited ability to interpret it, to him. That sheds a very different light on qm, because "He" would be able to predict the future, including all the phases (except for one non-physical overall phase for the universe), most of which can never be known to humans. The time-dependent Schrödinger equation is explicitly deterministic. The equations of more complicated systems are much harder to integrate but, in principle, equally deterministic. The reason the future is not predictable is that our instruments and computers are finite. In fact they are constrained to be smaller than the universe they strive to predict. In that I am in agreement with the paragraph. What I call into question is "every other possible world". The universe's wave function describes only itself, and not, directly, any other possible worlds. David R. Ingham 05:03, 12 March 2006 (UTC)
-
- Thanks, it will be a few days before i can respond. --agr 05:30, 12 March 2006 (UTC)
[edit] POM comments
Point by point: (1) For notions that the wave function is computable to rescue determinism, one must envision a single wave function for the entire universe, starting at the big bang.
- (To agr:)That is a highly dogmatic statement. If Feynman said it, putting it in the article is legitimate provided it is cited. If P0M said it, it's personal research at best.
(2) Putting aside the fact that computing the wave function for something as simple as a single uranium atom is far beyond any known technology and that the initial conditions at the big bang can never be known, such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that might have existed.
- (To agr:)Even the "facts" that you are putting aside are not things that can stand on your authority. Personally, I would let them stand, but I am lax about these things. Your assertion that "a 'wave function of everything' would carry the probabilities of not just the world we know, but every other possible world that might have existed," sounds plausible to me, but only plausible. Where does this assertion appear in the work of any recognized physicist?
(3)"For example, large voids in the distributions of galaxys are believed by many cosmologists to have originated in quantum fluctuations during the big bang."
- (To agr:)Since "many cosmologists" believe it, it should be easy to document.
(4) "The 'wave function of everything' would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all."
- (To agr:)Documenting (3) would probably uncover documentation for (4).
(5) "Even if one accepted such a 'wave function of everything' as meaningful, it is difficult to understand how it could be reconciled with the concept of determinism."
- (To agr:)It looks to me like you set up a straw man in the beginning, and here you burn it.
(To agr:)Your language needs to be tightened up to make it amenable to logical analysis. You seem to be asserting that: If there is a Single properly formulated wave function for the entire universe starting at the big bang, then Computing that wave function will "rescue" determinism. And you also maintain that If there is a Single properly formulated wave function for the entire universe starting with the big bang, then it is not the case that Computing the wave function will "rescue" determinism. To reduce the two statements to symbolic form you have:
- S-->C
- S-->~C
That's like the father who promises his child, "If I give you a Silver dollar, then I'll give you a copper coin," and "If I give you a Silver dollar then I will not give you a copper coin." As long as the father only hands out copper coins, nobody can call him a liar. On the other hand, if he gives his son a Silver dollar and does not give him a copper coin, then he lied when he said the first sentence, but if he does give him a copper coin, then he lied when he said the second sentence. The ordinary purpose of if-then sentences is to communicate some reasonable expectations, e.g., "If you feed my dogs, then I will pay you a dollar a day per dog." If a parent says that to a child, and tells the truth, then the child has a good basis for deciding whether it is worthwhile for him to feed the dogs. But if the parent adds, "And if you feed my dogs, then I will not pay you a dollar a day per dog," the parent has made a self-contradictory assertion and the child doesn't know what to expect. The child thinks, "Even if I go through with it and actually feed the blasted dogs, there is no telling whether I'll get paid or not." The child doesn't know whether doing it or not doing it is a good idea because by making a contradiction the parent has "unlinked" the consequences, i.e., said, in effect, "My words do not make any difference. Just wait and see what happens." Your statement reminds me of the kind of fairy story where the king says, "If you were to prove capable of weaving a single flawless thread in one night from a thousand pounds of silkworm cocoons, then I will release you from the staircaseless tower -- or maybe I won't."
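For what it's worth, the two symbolic forms above can be combined and simplified in one line of standard propositional logic, using the same notation:

- (S-->C) & (S-->~C) is equivalent to (~S v C) & (~S v ~C), which is ~S v (C & ~C), which is just ~S.

So the conjunction is true exactly when S is false. That is the formal version of the point about the father: as long as no Silver dollar is ever handed over, both promises hold vacuously, and together they say nothing about what would follow if S were true.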
(To agr:)Here is something else: Can the statement that "There is a single properly formulated wave function for the entire universe starting at the big bang," possibly be proven wrong? If there is no way that it might be proven false, if it's like the statement, "The streets of Heaven are paved with gold," then we are not dealing with a scientific matter.
(To agr:)Maybe you are trying to say something more interesting and I simply haven't guessed the correct formulation. P0M 06:49, 12 March 2006 (UTC)
[edit] Ingham comments
[edit] Point 1
No, I think that the fact that the universe is completely described by a celestial wave function is perfectly orthodox physics and that every high-school student should be able to say it if asked in a test. David R. Ingham 07:58, 12 March 2006 (UTC)
- I am asking that the original author, or anybody else for that matter, provide substantiation, citable evidence. My opinion is not evidence. If you have peer-reviewed publications to your credit that establish any of these points you may cite those publications. Just giving your opinion here stands at best as "personal research." P0M 18:57, 12 March 2006 (UTC)
Perhaps that may not be the ultimate form that physics will take, or there may be inconsistencies between general relativity and quantum mechanics, but that is the description that well established physics gives.
- Prove it, don't just assert it. P0M 18:57, 12 March 2006 (UTC)
[edit] Point 2
No, I didn't mean to say I agreed with that part. Knowledge of a wave function may give clues to those of other things, but it does not actually contain information about anything else that exists or could exist. David R. Ingham 18:06, 12 March 2006 (UTC)
- I was not addressing my point 2 to you, personally. What the original writer, or someone else, needs to do is to prove that Dirac or Feynman or Einstein or somebody with generally recognized authority has demonstrated the truth of this statement, or at least asserted it is true so that we can say, "According to Dirac..." P0M 19:02, 12 March 2006 (UTC)
[edit] Point 3
That sounds like he could justify it, though I would like to see it too.
- Personally, in editing articles I tend to concentrate on things that I can refute with evidence, put things that are questionable up for other people to weigh in on, and leave things that seem to me to be "general knowledge" alone. (Ideally, a statement like, "Proxima Centauri is the star closest to our own," should be cited. But I'm pretty sure that it is correct unless an even closer star has been found, so I'd just pass on by.) That being said, any statement should have evidence behind it. In this case, we are not dealing with statements like, "The sky appears blue," so we need proof. The citation is for the article, for the general well-informed reader, not for me. P0M 19:26, 12 March 2006 (UTC)
[edit] Point 4
That is point 2 again. That is what seems like a many worlds interpretation to me, though perhaps that is not the best name for it.
[edit] Point 5
I think that may be about something I said or agree with again.
The time dependent Schrödinger equation is explicitly deterministic. That is why I put in a picture of it. The equations for many body wave functions are derived from it and so are also deterministic, in the sense that the wave function progresses according to its Hamiltonian in a way that is determined by the mathematics. The physics determinism question seems to revolve around the issue of whether one accepts the wave function as reality. To one who retains the notion that reality can only be something that is describable in classical physics or ordinary language, there cannot be determinism because of the uncertainty principle. This seems to be mostly a question of definition rather than of physics.
(One of the first things that prevents one from actually being able to integrate many body equations is that they tend to be chaotic. Chaos is a property of classical mechanics, not shared by quantum physics, but according to the correspondence principle, qm must be quasi-chaotic. So when things (like nuclei) get big enough to start acting classically the qm often becomes very difficult. The relevance of this is that it helps to explain why things fall apart so quickly when one tries to merge classical and quantum descriptions, as in experiments.)
- Are you using "chaos" in its ancient Greek form? Or are you using the ineptly chosen term that arose from some of the attempts to model physical systems with equations whose solutions are not easily anticipated without doing the math? In the "chaos" of "chaos theory" there is nothing indeterminate going on.P0M 10:38, 26 March 2006 (UTC)
I still think the paragraph should not be in the article, as it was, as it contains at least some errors.
- Do you still view it as wrong in all the respects you originally criticized? P0M 10:38, 26 March 2006 (UTC)
Sorry I am not organizing these comments better. Maybe they should be pasted in where they fit with the above points. David R. Ingham 19:14, 12 March 2006 (UTC)
[edit] agr reply
First of all, I've added some headings to try to make this discussion more readable. Hope that is ok with everyone.
Next some general remarks. I added the paragraph in question, along with some other edits, to try to provide balance in this article. I believe the article had a very pro determinism POV. That is not altogether bad; I think it is one of Wikipedia's strengths that articles are often first written by proponents, so one gets an undiluted version of their arguments. Nonetheless it is important that opposing viewpoints should be presented as well. And while I agree with the need for citation, that standard should be applied uniformly. There are only three references given in the entire article as it stands today, one to a 1966 physics textbook (with no page cites), one to an unverifiable recollection of a lecture given no later than 1978, and one to a recent article. The last reference, however, includes the highly POV statement "This article seems to make the common error of thinking quantum probability goes on in nature...".
In particular I tried to address two clearly stated themes in the original article: that quantum effects only affect the world at the microscopic level and that "...quantum mechanics is deterministic, provided that one accepts the wave function itself as reality..." The latter is reinforced by a depiction of the time dependent Schrödinger equation in all its glory, which lends an aura of credibility that I think demands a more careful explanation of what the wave function means. Let me be clear that I am not accusing anyone of bad faith; this is just the nature of the editorial process at Wikipedia unfolding.
[edit] Point 1
"For notions that the wave function is computable to rescue determinism, one must envision a single wave function for the entire universe, starting at the big bang." Here I am simply making explicit what the article implies when it states "...quantum mechanics is deterministic, provided that one accepts the wave function itself as reality..." The whole notion that physics supports determinism is based on the assumption that the laws of physics apply to the universe as a whole, in a form unbroken since the beginning of time. I believe Mr. Ingram acknowledges that.
- As I think you point out, the article is no model for well-grounded assertions. So if you are making something explicit that the article implied, you may be making explicit an ungrounded implicit argument. Anyway, what would happen if you examined the converse, the idea that there were, ab initio, two or more wave functions? If you could show why such an initial condition would inevitably lead to a non-deterministic universe, then you'd have forced the issue to a kind of clarity that I doubt it has now. P0M 06:15, 22 March 2006 (UTC)
[edit] Point 2
"Putting aside the fact that computing the wave function for something as simple as a single uranium atom is far beyond any known technology and that the initial conditions at the big bang can never be known, such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that might have existed. " There are a couple of statements here: the present unfeasibility of computing the wave function for a uranium atom, is uncontroversial. The last I heard, the current state of the art is 3-particle systems. See, for example [8].
As for 'such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that might have existed.', this is the nature of the wave function. For system of N-particles (ignoring spin and other complexities), the wave function is a function on the a vector space of 3N dimensions. It value at any point is generally interpreted as the probability amplitude of each of of the N particles being near the N different points in 3-space. See for example, wave function#Two distinguishable particles in three spatial dimensions or The Feynman Lectures on Physics, Volume III pp. 1-10, 16-5, 21-6.
- I personally have no problem with your point 2. I think the citations you suggest here should be added as footnotes to the article. Have the hidden variables disciples faced up to this problem? Do they claim that "if we only knew what God must know" we would not be dealing with probabilities but with certainties, with deterministic results? P0M 06:22, 22 March 2006 (UTC)
[edit] Points 3 and 4
"For example, large voids in the distributions of galaxys are believed by many cosmologists to have originated in quantum fluctuations during the big bang." Again this is standard stuff. See the articles Cosmic inflation and Primordial fluctuations, both of which have extensive references. And just last week, NASA announced new results from their WMAP project which they claim allows measurements of conditions during the first picosecond after the big bang. From NASA's press release; "The new WMAP data, combined with other cosmology data, also support established theories on what has happened to matter and energy over the past 13.7 billion years since its inflation, according to the WMAP researchers. The result is a tightly constrained and consistent picture of how our universe grew from microscopic quantum fluctuations to enable the formation of stars, planets and life." http://map.gsfc.nasa.gov/m_or/PressRelease_03_06.html
- Again, I have no problem with the assertion. But for the average well-informed reader this is quite likely not "standard stuff." It just needs to be pegged down to somebody reliable. The NASA quotation should be good enough for anybody. But, again, it might be worthwhile to consider the converse. If there had been no quantum fluctuations, if the early universe had been a Newtonian universe or if in some other way non-quantum considerations were involved, could the "clumping" have another kind of explanation? If I remember correctly, astrophysicists were unable to account for the emergence of features from a uniform primal plasma until they gave thought to QM. P0M 06:30, 22 March 2006 (UTC)
[edit] Point 5
"Even if one accepted such a "wave function of everything" as meaningful, it is difficult to understand how it could be reconciled with the concept of determinism." I think I am drawing a valid conclusion here, but I am happy to drop the sentence. To answer some of POM's other concerns, I am not saying a "single properly formulated wave function for the entire universe starting at the big bang will "rescue" determinism" but that such a wave function is implied by the notion "...quantum mechanics is deterministic, provided that one accepts the wave function itself as reality...". It is certainly possible to entertain such a wave function as a mathematical idea. However I am trying to point out that not only is computing such a wave function is completely hopeless; even if one could it would tell almost nothing about the world as it is today because it carries the probabilities of all outcomes of the big bang, not just the one we know.
- I'm editing this section by section, so I can't scroll up to point 1, but I think I was quoting the original paragraph. Anyway, if you put it in the form you use to explain yourself above, it might be stronger. Anyway, the issue would not be whether the wave function could be computed, because just as in the case of Uranium decay you brought up above, the uranium works itself out and doesn't care whether we can calculate what it is going to do. People could use the expedient of appealing to an omnipotent, omniscient God who could calculate faster than the physical universe could evolve, and say, "The rule was set from the beginning of time, and it worked its way out just as God could have calculated it would. The fact that humans are too limited to calculate such a big problem is irrelevant. It's the certainty of what is going to happen that matters." I gladly accept your assertion that "it carries the probabilities of all outcomes of the big bang, not just the one we know," but I think the hidden variables group would say that there is only one outcome -- if only we could know. I've never been able to see anything that would motivate acceptance of the idea that there are these hidden variables except that people are unwilling to accept a dicing God. P0M 07:03, 22 March 2006 (UTC)
So here is a revised version of the paragraph, which I propose to restore to the article, along with the references I cited here:
Asserting that quantum mechanics is deterministic by treating the wave function itself as reality implies a single wave function for the entire universe, starting at the big bang. Putting aside the fact that computing the wave function for something as simple as a single uranium atom is far beyond any known technology and that the initial conditions at the big bang can never be known, such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that could have evolved from the big bang. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. (See cosmic inflation and primordial fluctuations.) The "wave function of everything" would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all. (See large-scale structure of the cosmos.)
--agr 21:52, 19 March 2006 (UTC)
- I think your paragraph might be stronger if you were to delete: "Putting aside the fact that computing the wave function for something as simple as a single uranium atom is far beyond any known technology and that the initial conditions at the big bang can never be known,"
- I have trouble editing this article since I am one of those who thinks of the human mind as being something like HAL in 2001 -- something constructed by largely knowable physical processes, maybe even made by assembling off-the-shelf parts, and yet something that can work out its own detachment from the forces that created and seek to control it. But even so I should try to do more to make sure it gets better citations -- even where I don't like the conclusions. P0M 07:03, 22 March 2006 (UTC)
I've restored the paragraph in question after making the deletion you proposed. Thanks for the help. --agr 17:50, 24 March 2006 (UTC)
I am still not happy with it. David R. Ingham 05:44, 26 March 2006 (UTC)
- Isn't that because you believe there are hidden variables that would make things turn out with certainty? P0M 10:25, 26 March 2006 (UTC)
[edit] Small yet important mistake
The article currently says: "the equations of Newtonian mechanics can exhibit sensitive dependence on initial conditions, meaning small errors in knowledge of initial conditions can result in arbitrarily large deviations from predicted behavior."
… and there's a link on the terms "sensitive dependence on initial conditions" which leads to the butterfly effect. The butterfly effect (Lorenz) has nothing to do with sensitive dependence on initial conditions (Hadamard). Lorenz wrote in The Essence of Chaos that the butterfly could also stop a storm from happening: there's nothing here related to sensitivity; on the contrary, Lorenz's argument says that the butterfly and the storm cannot be linked in any way (hence chaos), whereas sensitivity is about small causes and considerable results, yet the causality link is preserved. User:phnk
- It's my understanding that the so-called Butterfly effect is simply a dramatic way of explaining "sensitive dependence on initial conditions." If one runs two numerical simulations of the atmosphere with identical initial conditions, except for a disturbance in one on the order of a butterfly flapping its wings, they will soon diverge to such an extent that one may predict a storm where the other does not. No one suggests storms are caused by butterflies. If you disagree, I suggest you discuss it at the butterfly effect page, which takes the same position and is presumably watched by more people expert in the field. --agr 13:33, 27 March 2006 (UTC)
- I would like to insist: as indicated earlier, Lorenz wrote the same butterfly could prevent a storm from occurring. What you are expressing is the argument by Hadamard that a small deviation in the initial conditions may lead to an indefinite variation in the results: this deviation is not chaotic. User:phnk
- Google Lorenz and "butterfly effect". Or just see one site that popped up for me: http://www.pha.jhu.edu/~ldb/seminar/butterfly.html which says: "His simple model exhibits the phenomenon known as 'sensitive dependence on initial conditions.' This is sometimes referred to as the butterfly effect." P0M 10:31, 28 March 2006 (UTC)
- Right. I've compiled software to work a few of the equations that demonstrate this situation. The computations could be done with pencil and paper, but nobody suspected anything because you need to do (and, to be helpful, plot) quite a few computations before you notice that things are going strange. When computers came into use to do weather predictions, Lorenz was experimenting with one formula and did two runs with initial numbers that differed only because the experimenter truncated his Xs and Ys, i.e., he may have done run 1 with x = 1.04958694 and the second time he used x = 1.04959. He was surprised to find differences of major proportions down the line. In that model of the atmosphere, there were three causal factors, and varying just one of them a tiny bit would result in vastly different results down the line. "Chaos" is an unfortunate name for this phenomenon. It may have been chosen because one of the situations in which these considerations become important is the change from laminar flow to turbulent flow. When you turn on the water faucet in your bathtub you can keep the rate of flow slow and get a smooth column of water from which it would be easy to fill a bottle. As you gradually open the faucet things stay laminar for a while, and then suddenly you get turbulent flow and you need a bucket instead of a bottle to catch the water because it is going this way and that way "chaotically." Actually, it's the same physical process and the same equation, but you reach a cross-over value after which the outcomes can still be computed but the rate of change in predicted values gets much higher. So you get "chaotic" flow. P0M 15:39, 27 March 2006 (UTC)
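For anyone who wants to reproduce the kind of truncation experiment described above, here is a minimal sketch using the standard three-variable Lorenz system; the crude Euler integration, the step size, and the run length are illustrative choices, not anything Lorenz actually used.

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz equations (the "three causal factors" mentioned above).
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

run_a = np.array([1.04958694, 1.0, 1.0])   # full-precision starting point
run_b = np.array([1.04959, 1.0, 1.0])      # same start with x truncated

for step in range(1, 3001):
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
    if step % 500 == 0:
        gap = np.linalg.norm(run_a - run_b)
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")

The separation starts at about 3 x 10^-6 and grows until the two runs are as far apart as two unrelated points on the attractor. That divergence is the "sensitive dependence on initial conditions" under discussion; nothing in the equations themselves is indeterministic.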
- Perhaps the use of the correct mathematical term 'ill-conditioned' would be better than 'chaos' then.--WBluejohn 20:31, 2 April 2007 (UTC)
[edit] Eliminative Determinism?
Newly added section on "Eliminative Determinism" appears to be original research. At the least it needs a citation other than a self-published web page. Even if it has been independently published elsewhere, it's not clear it deserves so much space in this article.--agr 22:36, 30 May 2006 (UTC)
[edit] Worth noting...
Using determinism as a basis for predicting the future, would not knowledge of the predicted future then allow one to change the predicted future (simply by consciously avoiding predictions), therefore invalidating the original predicted future and revealing determinism to be an inaccurate basis for prediction? Physical impossibilites aside, of course.
- You could only change something if it weren't predetermined anyway. The ability of individuals to change the future arbitrarily assumes free will, which gets into the whole free will -v- determinism thing. WhiteC 18:23, 14 Jan 2005 (UTC)
- Another mention of the 'free will' vs determinism thing. Don't forget QM doesn't allow free will either.
- I'm not sure if this is the right procedure for reviving older discussions - I've moved this section from the archive. Anyway, carrying on from the two previous entries:
- The idea of determinism assumes that the future is determined, therefore if such a computer could be contrived that had the memory and ability to capture and store all the present information/rules (current state, basically) of the universe, it could predict the future with certainty for all eternity based on those conditions or causes (given some unimaginable processing power). Obviously such a computer or form of intelligence is not conceivable, but it still poses a theoretical problem.
- The problem with this is that given an intelligent life form knows the future, it will then be possible for it to change things, rendering that future no longer true. I don't believe that free will could stop this - even if we have no free will, our genetic make-up could still cause us to behave in this way. The key problem here for me is that such a machine would then have to take its own existence into account; if that led to a change in the future from its own determination of the future (a person acting on that knowledge, for example), the results could not be computed, as a series of infinite loops would be created. Therefore such a calculation could never be made even in theory, if it allowed for anything acting upon that prediction in such a way as to change it.
- Answer: It may seem a bit paradoxical at first glance, but in theory this is how it would work. If an intelligent life form had precise awareness of the future and took action upon the present to alter that specific course of progression which his foreknowledge had revealed, the following would hold true. As you've indicated above, the course of progression in question could in fact become altered (from what this acquired information seemed to decree) by actions taken because of the awareness itself. However, the crux here is that if it were altered, the altered course of progression would not truly be an alteration of the steadfast determinative pattern. Instead, it would be the opposite: a forever-standing preset part of the progressive determinate universe. As would the intelligent life form's foreknowledge of what would have occurred absent 1) his foreknowledge, 2) his altering action, and 3) the subsequent effect upon the target outcome. The key is that any precise knowledge of the future and all that would stem from it would as well be predetermined. As such, any changes stemming from such conditional foreknowledge would in fact not be changes from what has always been the inflexible determinate course of progression. Ergo, in order to have a precisely accurate determinative awareness of what is to occur, any machine or person's calculations would be required to factor in *themselves*, their own foreknowledge, and, most importantly, whatever actions they would in fact take upon events leading to the end point in question.
- Question, what is the smallest item that could be used to store information about the smallest possible particle? The answer is surely 'the smallest possible particle'. So the best computer we could hope to create that could store the information about the mass, momentum, location, etc. of every particle in the universe would itself have the mass of the universe. In which case we now have a universe with twice as much mass, so where do we store the info for the extra bits (another universe-mass computer), and so on. The only option is to assume that the computer IS the universe. It is the only thing that can predict the future, and it is doing exactly that - in real time. --WBluejohn 20:40, 2 April 2007 (UTC)
- This is just the same logic as travelling back in time, in fact it is basically the same thing, as you are bringing the future 'back in time' to the present. I am unsure whether it is possible, only in theory, to calculate the future without acting upon that knowledge to change it. Richard001 09:27, 25 July 2006 (UTC)
- "Would not knowledge of the predicted future then allow one to change the predicted future (simply by consciously avoiding predictions), therefore invalidating the original predicted future and revealing determinism to be an inaccurate basis for prediction?"
- Not really. If the original prediction was correct, it would have taken this into account. So if there is any deviation between the prediction and the actual outcome, then that means that the prediction was wrong. 12.10.248.51 (talk) 17:21, 2 April 2008 (UTC)
[edit] New material on Quantum reality
I was trying to find a place where this fits in, but there is so much QM on the page that I decided to add it at the bottom.
Here is some more new source material, from The Road to Reality by Roger Penrose, 2004, section 21.6, (p. 508 in my copy):
- If we are to believe that any one thing in the quantum formalism is 'actually' real, for a quantum system, then I think that it has to be the wavefunction (or state vector) that describes quantum reality. (I shall be addressing some other possibilities later, in Chapter 29; see also the end of 22.4.) My own viewpoint is that the question of 'reality' must be addressed in quantum mechanics—especially if one takes the view (as many physicists appear to) that the quantum formalism applies universally to the whole of physics—for then, if there is no quantum reality, there can be no reality at any level (all levels being quantum levels, on this view). To me, it makes no sense to deny reality altogether in this way. We need a notion of physical reality, even if only a provisional or approximate one, for without it our objective universe, and thence the whole of science, simply evaporates before our contemplative gaze!
The review of this book in Physics Today was quite favorable and indicated that the author is of considerable status in the physics community. This paragraph was clearly carefully considered.
Does this settle all our disputes about quantum reality, or should I continue to look for additional material here or elsewhere? David R. Ingham 19:27, 15 August 2006 (UTC)
- Our article on Roger Penrose points out that he is "highly regarded for his work in mathematical physics" but has many controversial views, particularly on the relationship between QM and human consciousness. The quote you provide merely expresses a desire for a theory of quantum reality. I don't read in it any suggestion that he is proposing one or that he thinks it would be deterministic. There is no shortage of quotes from physicists who have philosophical concerns about QM. But it remains the most successful physical theory ever and I think it is fair to say that few physicists believe QM will some day be supplanted by a deterministic theory.--agr 11:44, 16 August 2006 (UTC)
That brings us to my other point, which is that QM is deterministic. See the quote from Steven Weinberg that I put under quantum reality above.
I am aware that many, even including physicists, require that "reality" be classical. You do have the right to your own definitions. I see that my last sentence was optimistic. David R. Ingham 21:44, 21 August 2006 (UTC)
- You too are entitled to your views and I don't doubt you can find physicists to support you, but your claim that "QM is deterministic" is not the majority view. See Interpretation of quantum mechanics. --agr 17:45, 25 August 2006 (UTC)
Yes I have been reading the page on Interpretation of quantum mechanics and objecting to it for some time. I think most physicists will agree that QM is internally, that is mathematically, deterministic, until one starts to "interpret" it. The disagreement is that I prefer the word "approximate". David R. Ingham 22:46, 26 August 2006 (UTC)
- You prefer the word "approximate" to... what? Penrose's "provisional"? P0M 16:45, 11 November 2006 (UTC)
Even Penrose speaks of the "R process" as though it were physical and speculates that it goes on in nature, so you may be right about the majority. David R. Ingham 22:58, 26 August 2006 (UTC)
- This is how I see it. QM on its own is certainly deterministic - in the same sense as any other consistent and mature mathematical theory. What is not deterministic is our application of QM to measurements. The transition between quantum and classical (in that order) is what makes results not fully determined. Karol 08:54, 27 August 2006 (UTC)
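Karol's distinction can be shown in a few lines. The sketch below (an illustration only; the Hamiltonian and the sample times are arbitrary choices) evolves a single qubit: the Schrödinger evolution of the state vector is fully determined by the initial state and the Hamiltonian, while the Born rule at the end yields only probabilities for the measurement outcomes.

import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])      # toy Hamiltonian (sigma_x), with hbar = 1
psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the state |0>
I2 = np.eye(2)

for t in [0.0, 0.5, 1.0]:
    # exp(-iHt) = cos(t) I - i sin(t) H, valid here because H @ H = I for sigma_x
    U = np.cos(t) * I2 - 1j * np.sin(t) * H
    psi_t = U @ psi0                  # deterministic unitary evolution
    p0 = abs(psi_t[0]) ** 2           # Born rule: probability of finding |0>
    print(f"t = {t:3.1f}  state = {np.round(psi_t, 3)}  P(measure 0) = {p0:.3f}")

The indeterminism enters only at the last step: which outcome actually occurs on a given run is not fixed by anything computed above.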
[edit] Deleted Verse
This didn't seem to relate to anything in a strict philosophical sense, so I deleted it from the "philosophy of determinism" section:
- With Earth's first Clay They did the Last Man's knead,
- And then of the Last Harvest sow'd the Seed:
- Yea, the first Morning of Creation wrote
- What the Last Dawn of Reckoning shall read.
- (Rubaiyat of Omar Khayyam, LIII, rendered into English verse by Edward FitzGerald)
[edit] Removed supercausality section
Supercausality: In special relativity the energy-momentum relation, which relates the energy of an object (E) to its momentum (p) and mass (m), where c is the speed of light, E² = p²c² + m²c⁴, has a dual energy solution, E = ±√(p²c² + m²c⁴): one positive (+E), which moves forward in time (causality), and one negative (−E), which moves backward in time (retrocausality).
This equation describes events as the result of causes which propagate from the past to the future (causality) and causes which propagate backwards in time from the future to the past (retrocausality)/attractors. Einstein used the term Übercausalität (supercausality) to refer to this new model of dual causation.
According to Chris King all living systems would constantly be faced with bifurcations among causes (+E) and attractors (-E) forcing the system into a constant state of choice, a state of free will, which would be common to all the levels and structures of life, from molecules to macrostructures, and organisms.
However, causes from the future propagating backward in time would as well be precise results of their determinants. A causal system (+E) and an attractor system (-E) each imposing influence on the disposition of the same event is notably analogous to two causal systems (+E) combining to affect a single outcome. Just as events stemming from antecedents are considered products of multiple contributors, events determined by past and future causal systems together would be products of a combined influence from both directions. The effect ratios of past influential factors to future influential factors would be firmly established by the deterministic pattern. As with determinism from one direction, the effect would be a precise inflexible result. In accord, the process of choosing would be bound by strict cause and effect governance. A decision is nothing more than a result of its contributors, irrespective of whether those contributors impose their effects from the past, or from the past and future together.
I removed the above section. Similar material has been inserted into several pages. It has been unreferenced wherever it appears. As best I can determine, it is based on a somewhat dubious fringe theory with little or no mainstream coverage. In WP, it has spawned AFDs at retrocausality and supercausality, and a partial rewrite at consciousness. I've inserted the text here in case there is some application or validation for the material to this topic that I have overlooked. Serpent's Choice 10:52, 31 December 2006 (UTC)
- I have again removed this section. There are several concerns I have over its verifiability and accuracy. It is correct that this important physics equation has negative-energy solutions. These equate to the existence of antimatter. It is true that Feynman modeled the behavior of antimatter as normal matter moving backward in time, but this modeling system does not imply anything resembling the "attractor system" framework discussed above. Nor is Einstein's use of "Übercausalität" thought to imply such a structure. I am unable to find any reference to the author mentioned, Chris King, in physics or philosophy literature. The end result is that this passage is unsourced original research. Please discuss sources for this material before including it in articles in its current condition. Serpent's Choice 00:08, 9 January 2007 (UTC)
[edit] "Scientific Determinism" and First Cause
The "Scientific Determinism" section gives an unclear treatment of the First cause issue, which is dealt with much better in the First Cause section. The claim at the end is at least bordering on WK:OR. The scientific determinism page linked in currently contains suspected OR material.1Z 01:40, 13 January 2007 (UTC)
Merge with First Cause? 1Z 21:56, 16 January 2007 (UTC)
[edit] First Paragraph.
The current 1st para is flawed in two ways:
Determinism is the philosophical proposition that every event, including human cognition, decision and action, is causally determined by an unbroken chain of prior occurrences. No wholly random, spontaneous, mysterious, or miraculous events occur. The principal consequence of deterministic philosophy is that free will (except as defined in strict compatibilism) becomes an illusion; this philosophical belief is known as hard determinism. In contrast to determinism is libertarianism, which is the doctrine that voluntary actions are caused by the self; and indeterminism which is the theory that some or all events are not completely determined.
The passage in bold is somewhat WP:POV and is also dealt with much better under the "nature of determinism" section.
It also fails to clarify the difference between determinism and fatalism. The question of the relationship between determinism and predictability should also be dealt with.1Z 17:43, 16 January 2007 (UTC)
I have shortened the first paragraph. It is not possible or desirable to explain all the complexities of determinism-libertarianism-compatibilism in the introduction. 1Z 21:51, 16 January 2007 (UTC)
[edit] "A Multi -deteministic model"
"Multi-determinism" is not an accepted term. The section should be called somehting like "psycho-physical causation"
Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the souls of conscious beings. By 'soul' in this context is meant an autonomous immaterial agent that has the power to control the body but not to be controlled by the body of which there is no evidence (this theory of determinism thus conceives of conscious agents in dualistic terms).
The bolded passage appears to be an attempt at WP:NPOV. The section certainly needs some balance, but the status of the "soul" in the face of brain science should be given a fuller mention.
However, determinism is not necessarily limited to matter; it can encompass energy as well.
Not relevant to anything -- delete.
The question of how these immaterial entities can act upon material entities is deeply involved in what is generally known as the mind-body problem.
amend to
The question of how these immaterial entities can act upon material entities is deeply involved in what is generally known as the "interactionist dualism" solution to the mind-body problem.
It is a significant problem which has yet received no answer within the universe of discourse related to the physical universe.
Of course, physicalists simply reject the soul. Delete.
1Z 18:01, 16 January 2007 (UTC)
[edit] Albatross
Determinism has some inherent self-contradictions. It is based upon a logical conception of the universe following rational chains of cause and effect. Although some may like to think of logic as objective, the answers it comes to are derived by applying it to purely subjective data: that of our senses. As our thought patterns themselves are largely based upon our perceptions of the outside world, it is perfectly possible that logic itself is subjective. Now, without any application of logic, people will tend to naturally believe themselves in control of their own actions to some extent, independent of what has already been. What is to say that this natural perception is any less to be trusted than those arrived at by means of logic? So determinism could be seen as defying one perception with another, which it holds as somehow closer to objective truth, when in reality, this perception is also subjective to at least some extent. -Albatross thief1Z
- None of that has anything to do with the writing of this encyclopaedia article. Please read our Wikipedia:Talk page guidelines. Uncle G 14:50, 15 March 2007 (UTC)
[edit] I don't understand two of the "Arguments against determinism"
The first argues about how morality would be moot if there's no free will... How is that an argument against [the validity] of determinism?
The second talks about physical phenomena for which a complete set of deterministic rules has not been found.
But determinism is "defined as the thesis that there is at any instant exactly one physically possible future" and not something like "everything can possibly be foreseen given all variables and rules, and all variables and rules of the Universe can be discovered and acquired some day". Thus the question of what humans can or cannot predict seems moot, and I fail to see how it is an argument against the validity of determinism...
"(Perfect) predictability implies strict determinism, but lack of predictability does not necessarily imply lack of determinism."
Those two sub-sections, however, are quite interesting and relevant, but shouldn't they be relabeled/relocated?
Thanks
Miguelrj 22:59, 9 April 2007 (UTC)
- "How is that an argument against [the validity] of determinism?".
- It is an argument if you think morality is not moot. Otherwise it fails.
- I see it as more of an argument against the assumption of certain moral conclusions from Determinism (which don't necessarily follow, I think) than one against its validity proper. As a logical counter-argument, it's simply fallacious.AoS1014 16:20, 19 May 2007 (UTC)
- "defined as the thesis that there is at any instant exactly one physically possible future" and not something like "everything can possibly be foreseen given all variables and rules and all variables and rules of the Universe can be discovered and acquired some day".
- If determinism cannot be asserted as true on the basis of successful prediction, on what other basis can it be asserted? This argument does not show determinism to be false so much as problematic, or "moot". Note that it is far from commonly agreed that determinism is true-unless-proven-false.1Z 01:02, 10 April 2007 (UTC)
- It's worse than that. The thesis that "there is at any instant exactly one physically possible future" has testable consequences that have not been observed in the Bell test experiments.--agr 11:43, 20 May 2007 (UTC)
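For concreteness, here is a minimal sketch of the CHSH form of the Bell argument being referred to (standard textbook material; the measurement angles are the usual choices): any local deterministic account of the correlations obeys |S| <= 2, while the quantum prediction for the singlet state, E(a, b) = -cos(a - b), exceeds that bound.

import numpy as np

def E(a, b):
    # Quantum-mechanical correlation of spin measurements along angles a and b (singlet state).
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("quantum CHSH value |S| =", abs(S))   # 2*sqrt(2), about 2.83
print("local deterministic bound:", 2)

The Bell test experiments mentioned above measure S directly; observed values above 2 rule out local hidden-variable accounts (up to the usual loophole caveats), which is the testable consequence referred to here.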
[edit] First Cause
The article currently does not provide a counter-argument for First Cause, which is given in the First Cause article. Namely, that there's no logical reason why the First Cause would itself not be caused; or, in other words, that there's no logical reason to simply "admit an exception" to causality, assuming it to be true in all other cases (and abandoning the assumption of causality would, of course, destroy the whole argument for assuming a First Cause). I'd add it myself but I don't have appropriate sources at hand. AoS1014 16:00, 19 May 2007 (UTC)
Agreed. The entire First Cause section looks like original research to me; the whole thing is just an uncited argument for the existence of an uncaused First Cause. I have particular problems with the passage "(2) There is no event A0 prior to which there was no other event, which means that we are presented with an infinite series of causally related events, which is itself an event, and yet there is no cause for this infinite series of events", which seems to dismiss without adequate justification the possibility of an infinite series of events by writing off (without explanation) such a series of events as an "event" itself in need of explanation. But wiki is not the place for our own arguments and counter-arguments. I think the "first cause" section would be better rewritten as equally presenting the three possible answers to the cosmogenic question (a first cause, an infinite series of events, or a causal loop) in equal light, citing various philosophers who have presented arguments for these three positions. -Forrest 17:07, 12 December 2007 —Preceding unsigned comment added by 70.177.4.159 (talk)
[edit] Radical Behaviorism?
Radical Behaviorism is a philosophical system that supports a non-mechanistic form of determinism. It is not "scientific determinism" as I read the page. Unless I am mistaken, this page summarizes only the mechanistic position, and then develops only that position.
--florkle 02:47, 21 May 2007 (UTC)
- Really, I believe Skinner relies on a radical interpretation of a Hobbesian argument related to the subject just above this one. I can look into it if you like. --Kenneth M Burke 23:57, 26 May 2007 (UTC)
[edit] Weasel Words
I added a "Weasel Word" warning to the "Argument from Morality" section of the page. The section continuously uses hypothetical people and sources rather than real ones. Additionally, it also says that 'determinists' view morality as having a logical basis, as if this were a worldwide view of determinists. If there were other weasel-wordy sections that I did not mark, it it not necessarily bias; that section was the only one I read that was very bad, although I did not read every small section of the page. Justin Satyr 19:47, 28 May 2007 (UTC)
- "The section continuously uses hypothetical people and sources rather than real ones".
- Then it needs "who" tags or "fact" tags. I am removing the weasel banner.
1Z 22:52, 28 May 2007 (UTC)
[edit] Inaccurate report on scientific consensus on Bell issue. Not objective
If nobody is going to do this, at some point I am going to change the following passage (so please, do contribute with what you think, and with sources): "There have been a number of experiments to verify those predictions, and so far they do not appear to be violated (sic!) although many physicists believe better experiments are needed to conclusively settle the question. (See Bell test experiments.)"
There are two views on this, and both lead me to the conclusion that the wording should be changed. View one: True. All experiments in Physics need better experiments. This is true and it is a fundamental tenet of Science, as in regard to it any experiment, if more accurate, would unveil new phenomena inconsistent with the present theories (it is Popper's principle). In this sense, this note of caution should be used alongside every single experiment that is mentioned all throughout Wikipedia, or elided here. Otherwise, it would seem that this particular experiment is less reliable or has been performed less accurately than all other experiments mentioned in Wikipedia, or that the results are more controversial than others. And this is untrue, simply put.
View two: The main article refers to the fact that some voices have been raised that question the conclusions of the experiments. This is indeed true, but the right wording to use here is a handful of physicists and not many physicists. We are talking about four or five physicists (of minor impact) against the view of a community of hundreds, who are both (i.e. on either side) familiar with the experiments and their set-ups. The wording that I propose would be more objective, I think. This does not entail in any way that those few physicists are wrong; indeed they may be right. But the new wording would just reflect the objective reality as to how things are in the Physics community. As it is written now, it is as if in fact the majority of physicists are concerned. What a paradox! (maybe it was a lapsus, the intention being "many philosophers", not many physicists)
I would also say that so far they do not appear to be violated should be changed into they have been verified (predictions are verified, not "not violated"). Again, I get the impression from the article that for some reason this should be considered a unique experiment in all the realm of Physics and indeed Science (especially noticeable is also the wording "so far", again, applicable in principle to every experiment known to mankind but used only here), whereby we expect it to be violated at any moment much more than all others. I might add: Ironically, Bell tests are about some inequalities that cannot be violated, were Einstein's view tenable. Indeed, experiments show they are violated, as Bell predicted. That to refer to this fact one should use the wording "prediction not violated" seems very ironic indeed (but also very confusing). To put it explicitly, the article is basically saying that the prediction that certain inequalities are going to be violated (because of quantum reality) has not been violated in experiments. Terrible! --209.150.240.231 04:56, 18 June 2007 (UTC)
[edit] Determinism vs Fatalism
It is acknowledged, in the second paragraph no less, that people generally mistake determinism for fatalism - and vice versa. I think given this popular misconception, a fuller explanation of the differences between causal determinism and fatalism is warranted.
For example, the following from the Fatalism page sums it up succinctly: "Therefore, in determinism, if the past were different, the present and future would differ also. For fatalists, such a question is negligible, since no other present/future/past could exist except [that which] exists now." —Preceding unsigned comment added by 203.116.167.165 (talk) 09:16, 29 November 2007 (UTC)
- I agree that the point needs cleaning up. Also, determinism just involves predictability (say, in a mathematical or theoretical sense). There is no necessity for causality, i.e., there is no need for a cause of everything. The fact that a pendulum is here now is not caused by it being there then; it is just following a completely determined path. When two balls collide, one is not the cause of the collision.
- The problem is that you get these religious arguments that something had to cause the universe to exist. It is just deterministic, it doesn't need a cause. The past does not cause the future, but a theoretical complete knowledge of the past in determinism would allow you to predict the future. (Except that the means of predicting it would have to be part of that past!) Mike0001 (talk) 11:07, 22 February 2008 (UTC)