Wikipedia:Reference desk/Archives/Science/2007 September 14


Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



September 14

living forever

If a human brain could be transferred into an entirely mechanical body, could that person live forever? —Preceding unsigned comment added by 128.101.53.182 (talk) 03:04, 14 September 2007 (UTC)

No, cells of the brain would still deteriorate and die. Would you live longer? Maybe. Depends on what you would have died of to begin with. But there are plenty of degenerative neurological conditions that, if they wouldn't kill your mechanically supported brain outright, would certainly turn you into a vegetable, up until death finally caught up to you. Someguy1221 03:15, 14 September 2007 (UTC)
Still deteriorate and die? I question whether deterioration is inevitable in the first place. Vranak 04:27, 14 September 2007 (UTC)
Genetic mutations and cells' dying by accident are biologically and thermodynamically inevitable. Someguy1221 04:32, 14 September 2007 (UTC)
DNA has repair mechanisms though, if I recall correctly. Vranak 04:36, 14 September 2007 (UTC)
Yes, but DNA repair mechanisms inevitably won't correct all mistakes. Indeed, for certain types of repairs, they have as high as a 50% chance of getting the fix wrong. DNA repairs can even cause mutations that weren't there in the first place. Someguy1221 04:43, 14 September 2007 (UTC)
Sounds a bit specious and pessimistic to me. Vranak 05:00, 14 September 2007 (UTC)
I'm not sure which emoticon to respond to this with. What do you think?  :-D or :-p Someguy1221 05:02, 14 September 2007 (UTC)

Something in this discussion feels to me more like:

"But it could be..."
"No, really not..."
"But it could..."
"Honestly, no..."
"But it could..."

The simple answer is, nobody has put a human brain into a mechanical system independent of the body, so we don't know enough about what such a change would do. We know, for example, that the brain is not an isolated unit: many of its processes, including neurotransmitters once believed to be exclusive to the brain, are strongly tied into bodily processes. We know the brain is not just an isolated organ with a blood supply, nerve connections and nothing more. We know that the human mind is not simply the same thing as the brain, so we cannot assume that because a brain is physically okay, the experience will be subjectively handled without trauma or damage (consider phantom limb issues and psychological mental health concerns as minor examples of mind != brain). And we know that it cannot self-maintain without error indefinitely.

Undoubtedly a person who could not get bowel cancer, liver failure, heart disease or lung cancer, through not having those organs, would not be at risk of death from them. So they would probably live longer. But for the reasons above, we don't have enough information to estimate what their life (or fate) would be longer term, and it's likely that there would be subjective (to the person) or objective (biochemical) issues not anticipated if an isolated brain were ever created.

FT2 (Talk | email) 12:08, 14 September 2007 (UTC)

A different spin on this -- if the brain were understood well enough that one could emulate the physical functions of the brain in a computer, would the "mind" live forever? Even if you've barred any type of biological breakdown that isn't inherent in the structures of a brain, would the emulated "you" live forever? Maybe if you really underclocked the emulation hardware, making it seem that 10,000 years is a second? Eventually, the heat death of the universe would end your existence. So, no, you can't live forever. -- JSBillings 13:09, 14 September 2007 (UTC)


I'm not so pessimistic. I think this can (and will) be done. Assuming the continued growth in computing power according to Moore's law, I calculated that the computing resources to simulate all of the neurons in the brain (along with simulating all of the ancillary chemistry and some set of sensory inputs) in 'realtime' will be with us in perhaps 30 years - for a machine costing under a million dollars. We can do it sooner - but the cost of the hardware doubles for every 18 months earlier and halves for every 18 months later (there's a worked sketch of this arithmetic after this post). You could also do it sooner/cheaper by accepting a computerised brain that ran slower than realtime (so time would seem to speed up for the person being simulated - and for the people watching the simulation, the person would be thinking and responding very slowly). Simulating neurons is fairly easy, mathematically, and we use small artificial neural networks for all sorts of day-to-day purposes already. I don't think this will be at all difficult 20 years from now.
The tricky part would be transferring the human mind into the machine. Scanning every single neuronal connection in a living brain seems very, very hard to me. Not impossible - but very tough. Taking a terminally ill person - scanning their brain into a computer - then (when they are dead) turning the computer on to allow their personality, memories and intellect to continue would be a nice trick - but technologically, it would be exceedingly difficult, and any slight mismatch between the simulation and the real brain in terms of simulated chemistry or neural responses could be devastating to the simulated mind. So I've been wondering what other possibilities there are.
Perhaps a more likely approach is to connect up some large number of artificial neurons to a living brain and let the brain start to use them as if they were biological neurons instead. This kind of thing is already being experimented with on a very small scale - and the work is promising. If that could be made to work on a very large scale, it might be expected that the person would just naturally start to use the computer for storing and organising memories just as they do with their biological neurons. As more of the electronic neurons start being used, the person would no longer be able to function without the electronics - so this is rather a point of no return! It would then be necessary to very gradually shut down parts of the biological brain and use the well-known "plasticity" of the brain (which allows it to recover function from injury by shunting tasks around internally) to force the mind to nudge itself over into the electronic section of its structure as the biological part is shut down (perhaps surgically - perhaps radiologically). Eventually, there would be no biological brain left - with all of the functionality being carried out inside the computer. At that point, you disconnect the computer from the body and the person inside cannot easily tell the difference. Obviously there would be some drastic steps involved - we would have to offer the growing electronic brain access to sensors - cameras, touch, hearing, etc - and as those senses start to function, remove the biological senses one by one. Doing it this way avoids the need to 'scan' anything and makes it a gradual process. However, I could imagine this taking years to do because that kind of plasticity-driven rewiring takes a while. Since it's unlikely that we would find it morally acceptable to start doing this to someone who is going to live for many years to come, it's hard to see how this could come about in society...although I'm sure the engineering will be quite well understood by the time the computing technology gets to where it needs to be. But with the increasing use of computers throughout society - and a large fraction of some people's "world" being on-line, perhaps over the next 20 years, this won't seem such a bad thing.
The consequences of this once the mind is inside a computer with electronic sensors (and robotic 'telepresence' bodies maintaining any physical being we might wish to have) would be profound. We could live for as long as we desired - a periodic 'backup' of the state of the electronics would allow us to survive any kind of catastrophe. As technology improves, we could add new brain capacity and speed up our thoughts as computers get faster. Speeding up thought is just like slowing down time though - and that's not very exciting to me. What's more interesting is deliberately slowing down brain function to allow you to 'fast forward' reality. If you are stuck waiting for something to happen - then slow the clock down on your computer and time will pass more quickly for you. If you want to jump forward in time, your mind can be dumped onto a disk drive for a few years and turned back on when you get where you want to be. Sending people to other solar systems is not a problem because you can slow down their clocks so that they think they are travelling at any speed you like (you can make it seem like you're travelling faster than the speed of light - although in reality that's impossible). Other weirder things can happen: you could start up a second computer, running the same software as yours, and suddenly there are two of you. This is like the cloning that happens in science fiction movies, where the clone is the same age and has all of the memories of the original!
Predicting the future more than maybe 5 years away is tricky - but this scenario is quite compelling and there are no obvious technological barriers to making it happen. What is not clear is how a human mind - devoid of bodily aging and chemical failings - will stand up to thousands of years of memories accumulating in a much larger neural network. It's possible that the architecture of our minds is not suitable for that kind of 'scaling up' and we'd simply run out of places to store things - or connections between ideas would simply become too entangled, resulting in who-knows-what mental problems. So immortality (in terms of greatly lengthening the duration of our experience of life) might still not be achievable...although living for tens of thousands of years by 'fast forwarding' through it ought to be quite possible. We could also be expected to retain 'backups' of people while they are 'in their prime' and save them for future generations - so long after someone 'died' of mental complications, we could still boot up the backup copy and have them live the latter parts of their lives again. They would of course be unaware of their other lives - except via history books and whatever other records their 'other' selves might leave behind. The world, run like this, would be a very confusing place - but I'm sure we could learn to deal with it.
SteveBaker 13:24, 14 September 2007 (UTC)
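(Aside: the cost/timing trade-off in the post above is simple exponential arithmetic. Here is a minimal Python sketch of it, using only the post's own anchor figures — roughly 30 years to a ~$1,000,000 real-time brain simulator, with one Moore's-law doubling every 18 months; the output is only as good as those assumptions.)

 # Exponential cost projection using the figures quoted in the post above.
 DOUBLING_MONTHS = 18          # assumed Moore's-law doubling period
 BASELINE_YEARS = 30           # years until a ~$1M real-time brain simulator
 BASELINE_COST = 1_000_000     # dollars at that point

 def estimated_cost(years_from_now):
     """Cost of the same computing capacity if bought earlier or later."""
     months_early = (BASELINE_YEARS - years_from_now) * 12
     return BASELINE_COST * 2 ** (months_early / DOUBLING_MONTHS)

 for yrs in (15, 20, 25, 30, 35, 40):
     print(f"in {yrs:2d} years: about ${estimated_cost(yrs):,.0f}")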
Which assumes that all that makes up a person is modellable by appropriate biochemical models, and that given sufficient knowledge of the physical organ of the brain, and sufficiently powerful parallel computing resources, this can be modelled. Perhaps it can - or perhaps it can to within detectable limits. But it's far from clear whether that's guaranteed, or whether a dynamic system can be transferred in that way. Let's say that it's beyond today's science - but today's magic is often tomorrow's science too. We just don't know, so "what I think" and assume is not much use in getting an answer. The simplest answer is probably, "yes, something like that is possible, but we don't yet have a good idea what the result will be if we try, because everything major involved is way beyond present day research and solid tested knowledge". That's what makes science fiction so much fun, you get to play with myriad "what I think"s. FT2 (Talk | email) 14:40, 14 September 2007 (UTC)
Most of what has been said here is completely speculative and, though cited with modern theory, is no clear reflection of what we might expect in the future and when we might expect it. I'm sure amazing technological advances in health care are to come, but even if you assume that your brain will one day be completely mechanical and last forever, that still doesn't answer the question. There is still disagreement among philosophers about Personal identity (philosophy). What makes you the same person over time? Also, what about the basic definition of biological life? If you don't consider your desktop to be alive right now, it is likely that you will never be able to call a completely mechanical human "alive."
In my opinion, no matter how powerful computers become, the basis of their operation will be rule-based. The basis of biological function is rooted in Brownian motion. As far as I can see, these are at odds. To echo FT2's point above, emulating the brain is not just a function of computing power; it also has to be possible.
Mrdeath5493 16:21, 14 September 2007 (UTC)
Whilst there may not be agreement amongst philosophers - I don't really care, because they are not doing the work. I care about what biologists have to say about the complexity of the simulation I'd need - about the effects of chemicals in the brain, about the structure and behavior of neurons. I care about the amount of computing hardware that might be required and the speed it'll have to operate at. I would need to come up with a way to get the 'software' out of the skull and into the circuits. Philosophers are welcome to sit off to the side and debate in the abstract - but they don't have a clue about how all of this stuff works...they are about as qualified to answer the question as geography professors. I think most scientists are pretty clear on the fact that the brain is just a big biochemical machine - and I don't see why (in principle) it can't be emulated in software (or perhaps on custom hardware) sufficiently accurately to do the job. The entire question is "How do you do the transfer?" and "How soon (if at all) will we be able to build computers that complex?". I don't know why you think Brownian motion has anything to do with how we think - I've never seen science to that effect - and even if there were, its only input would be some temperature-dependent randomness - which is trivially easy to add into our simulation. Sure computers are rule-based - but all signs are that so are brains. There is zero debate about whether we can simulate a neural network in software - I can do that - I have done that. Randomness is not difficult to add. The idea that there is something mystical about brains is not something that science is in any way indicating. They are horrifically complicated - and 'emergent behavior' in complex systems is a given. But we aren't trying to simulate higher brain functions here - that would be tricky - probably impossible. We're merely saying that you can replace the low level 'meatware' that the mind runs on with electronics. If you can understand a brain cell in enough detail, you can reproduce it. SteveBaker 17:31, 14 September 2007 (UTC)
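(As a concrete illustration of the "simulating neurons is fairly easy, mathematically" point: below is a minimal Python sketch of one simulated neuron. It uses the textbook leaky integrate-and-fire model with illustrative parameter values — it is not anything from the discussion itself, just an indication of the per-neuron arithmetic involved.)

 # One leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
 # is driven up by input current, and emits a "spike" at threshold.
 def simulate_lif(input_currents, dt=0.001, tau=0.02, v_rest=-0.065,
                  v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
     """Return spike times (seconds) for a list of input currents (amps)."""
     v = v_rest
     spike_times = []
     for step, i_in in enumerate(input_currents):
         v += (-(v - v_rest) + r_m * i_in) * dt / tau   # leak plus drive
         if v >= v_thresh:                              # threshold crossed
             spike_times.append(step * dt)
             v = v_reset                                # reset after the spike
     return spike_times

 # one simulated second of a constant 2 nA drive
 print(simulate_lif([2e-9] * 1000))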

Wikipedia has an article called Mind uploading. Mind uploading is a great topic for science fiction. Science fiction stories about mind uploading are often written by physical scientists who have amazingly little interest in the details of biological brains and how they produce minds. Most details of how brains produce minds remain to be discovered, so speculation about the practicality of copying a mind from a biological brain into another substrate strikes me as premature, if fun. Current suggestions about how such a mind transfer might be possible are probably doomed to be as silly as early ideas about how to achieve human flight or reach the Moon. I have a question for fans of "living forever". What does it really mean to "live forever"? Biological minds change through time. Is the mind produced by the brain of an 80 year old really the same "person" that was produced by that brain 60 years earlier? I'm not sure that the idea of trying to make a person "live forever" makes any sense at all. If you did produce some kind of artificial mind that could retain continuity of thought and personality over millions or billions of years, would that artificial construct be a human mind? --JWSchmidt 01:15, 15 September 2007 (UTC)

Just a clarification. When I brought philosophy into the subject I wanted to introduce the concept that if you completely replaced all of your body with new parts, you might not be considered the same person. The possibilities of completely mechanical people or brain switching are both widely used as thought experiments in modern journals. There is no consensus on the subject, but a fair number of people argue that if you did replace all of your body with mechanical and computer parts (including the brain), you could not, by definition, live forever. This has nothing to do with the continuity of memory (or body) and everything to do with the fact that we cannot reasonably define any computer as "alive." All of that assumes that you could transfer a brain to a board, of course. So, in response to the original question, I'm saying that even if I grant that it is possible to replace everything (including the brain) with a computer or mechanical part, you could not by definition "live" forever because you would not be "alive."
On top of all that, Brownian motion has everything to do with every process in the body. If it were not for the random movement of particles in solution obeying the laws of physics, no process in the body would work. People often talk about body processes in terms of purpose (i.e. "The mRNA binds to the ribosome in order to start translation. A signal recognition particle then binds to the amino acid chain being produced and takes it to the Endoplasmic Reticulum...") However, there is no intent among molecules. Their behavior is a function of concentration, free energy, etc. The fact that a whole bunch of these randomly occurring processes are arranged in a way to keep us living is a miracle once you understand how random and seemingly out of control it actually is. I was thinking that computers might have a problem getting random enough to emulate it properly (ya know, the whole Matrix concept). However, complete randomness may be something they do well. I'm just not really sure because all I do is read about the body all day :P.Mrdeath5493 02:55, 15 September 2007 (UTC)
Is there good reason to restrict use of the term alive to biological organisms? "Artificial life" as a generalization of the concept of "life" sometimes seems overly optimistic given that current attempts to create artificial life forms are rather primitive. However, philosophers such as Daniel Dennett argue that there is nothing magical about the life forms that evolved on Earth. Is there anything that can prevent us from making man-made devices with interesting behavior that many people will feel comfortable calling "alive"? You could adopt a definition of "life" that excludes man-made constructs, but many people prefer operational definitions: if a man-made device behaves sufficiently like what we expect of a living thing, why not call it alive? Daniel Dennett has addressed the idea that some essential element of "randomness" might fundamentally underlie those physical systems that people perceive as having a mind (as opposed to mindlessly mechanical systems). For Dennett, these arguments are often in the context of refuting the notion that quantum uncertainty produces free will. However, the same arguments apply to "random" processes like Brownian motion. Living organisms succeed in the presence of "random" processes, but folks like Dennett suspect that you can replace biological processes that involve "random" molecular events with functionally equivalent processes that are as rigidly precise as a digital circuit. According to that view, it is just an unimportant fluke of nature that biological life happens to involve "random" molecular processes: that randomness is not essential to life or mind. Anyone with an interest in how we get away with talking about "body processes in terms of purpose" should take a look at Dennett's writings about the intentional stance. --JWSchmidt 04:13, 15 September 2007 (UTC)
Well it all comes down to the same thing: philosophy. I don't think anyone can accurately speculate whether or not a brain can be replaced by a circuit board. So, it is a question of the definition of life. Dennett is just the tip of the iceberg. There are many competing and more widely accepted accounts than his. I'm sure we could get away with using intentions to describe cellular processes if we didn't want to cure disease. It's just more complicated than that, and it does actually matter what is happening. As for the definition of life, there is a general consensus (with a minority objection of course) among philosophers that the basic principles of biology suffice in defining what is alive. Among biologists there is no conflict; I'm pretty sure a computer would fail the "made of cells" part. So there is a very good reason to restrict the term "alive." There is an incredible forum of discussion over this issue in both fields.
Mrdeath5493 04:53, 15 September 2007 (UTC)
Until someone builds man-made devices that pass the Turing test and other more stringent tests for human-like behavior there is plenty of room here for philosophical speculation about mind uploading. There is room for philosophical disputes arising from different intuitions about the future course of neuroscience research and the ultimate impact of that research on philosophy of mind. To what extent is it just wishful thinking to assume that accurate speculation about mind uploading is possible without paying close attention to the details of biological brains and how they produce minds? I guess it would be "good news" for philosophers if it turns out that they can reach useful conclusions about mind uploading without being bothered to learn pesky details about brains, but I think the history of philosophy contains many warnings that should make philosophers nervous about ignoring scientific details. If you define "star" as a point of light in the night sky attached to the celestial sphere then you might find it all too easy to conclude that by definition "a massive, luminous ball of plasma" is not a star. For biologists, traditional definitions of "life" that were created to ONLY deal with life as it has evolved on Earth can be viewed as definitions of "Earth's biological life". I do not see a barrier to adopting a broader definition of life that includes non-biological life forms or any new forms of life that exobiologists might discover. It is true that many philosophers are not happy with Dennett. Dennett takes seriously the need to be well-informed about scientific results when philosophizing. Many philosophers prefer to believe that they work on a logical plane that is detached from the nitty gritty of physical reality. --JWSchmidt 17:15, 15 September 2007 (UTC)
Steve, there not being enough space to store all info is no issue. That's actually how the mind works - it drops whatever is less essential. It uses this efficiency to fit a model of reality in the limited space of the brain. The bigger the brain, the more it can store, but it can never store all of reality, so it still needs to drop info.
You've got the right idea, though (well, at least in part of your discourse). Transferring the mind 'as is' to a computer won't work. It wouldn't even work with another brain because the wiring is wrong. You'd need an exact copy and that won't work until we have a replicator or teleportation or something similar. Instead, the mind should be linked to a computer that works with the same sort of principles, also creating a model of the outside world through interaction with it. If done right, the two will start to interact with each other and slowly merge until after a while they're really just one mind. So when the biological part (the wetware) dies, the (by now much larger) hardware-ego would barely notice it, more like a very minor stroke (the bigger/faster the hardware the more minor it would be). There's no need to actively shut down the biological parts (which would also raise legal issues). Just let the wetware and the hardware merge and then let nature take its course with the biological part. Also, why make exact copies of the human senses? Just use whatever sensors technology has to offer at the time. The mind will adapt. It's specifically good at that (the learning process).
An important angle is how someone would perceive the change. They would indeed not be the same person, as several people pointed out already. But as MrDeath said, that's normal. I am definitely not the same person I was, say, 20 years ago. Yet I perceive myself to be the same person. Not all memories are intact, but those that are make me feel like the same person (a special case is that I consider myself to be the same person as when I was a baby, even though I have no memories of that). As the biological and computer minds merge, the resulting mind will be different, partly because it uses different sensors and actuators, but if the merger works, then the memories of the wetware will get linked to ever more mental constructs in the hardware and thus effectively partly transfer into the hardware. If the hardware is much bigger, then the ego will also shift to that part of the new brain more and more, until eventually it largely resides there. In the transfer it will change, but because the change is gradual and most memories are transferred, it will still perceive itself as the same person, even after the wetware dies.
An interesting consequence: If more people do this (all of humanity?), then they can (and therefore will) connect through the Internet and merge. Ultimately, there would be just one human being, if one can call it that. This raises the issue of who it thinks it is. All the merged people at the same time? After a while that will feel like a distant memory, just like how I feel about when I was 10 years old. I know I was and I still have occasional vivid memories of it, but even though I consider it to be me, it feels somewhat alien.
Btw, Steve, I consider myself to be a philosopher and I know perfectly well what I'm talking about (I think). :) Actually, focusing too much on the way human intelligence works on the cellular level can restrict your perceptions of how it might work. One needs to be an abstract philosopher to understand this. Which you are (even if you don't know it). You just need to 'let go' a little more. :) DirkvdM 06:47, 15 September 2007 (UTC)
Prefacing a long discourse with "If done right..." is constructive for selling a story idea to a science fiction book publisher, but this is not the science fiction reference desk. Is, "If done right, we could use robotic probes to gather data from every location in the solar system," science fiction or science fact? Collecting data at the center of the sun would present serious practical problems. Similarly, it is not at all clear that we will ever have the technical ability to connect two brains so as to allow the transfer of memories from a human brain into another brain in such a way that we would feel justified in saying that we had successfully transferred a human mind or "person" to a new brain. Digital electronic computers were designed to facilitate the uploading and downloading of data. Biological brains are the result of a billion years of evolutionary design that makes no concessions to the goal of transferring memories or personalities to an external substrate. There are mechanisms by which parts of brains can "mirror" or "reflect" the activity of other brain parts, but it is a large leap of faith to assume that such processes will in practice ever allow the transfer of a "person" from a human brain into another brain. Focusing too little on the way human intelligence works on the cellular level can allow philosophers to imagine and hold unrealistic views of how easy it would be to upload a human mind to a new brain. --JWSchmidt 18:06, 15 September 2007 (UTC)
Actually, I think you are saying pretty much exactly the same thing as me. The very first bit isn't quite right though. The mind is indeed quite ingenious at fitting 100+ years of memories and algorithms for how to do everything from walking to solving a Rubik's cube into relatively few "bits" of storage. However, as efficient as that is, it's still about a million times more bits than we can currently store in fast memory in a $1,000,000 computer. That's 20 doublings of performance, and Moore's law says one doubling every 18 months - so we have to wait 30 more years to get 'brain-sized' computer hardware at affordable costs. How the brain stores that information is going to be hard to capture - but the plan (which you evidently endorse) to allow the brain's natural plasticity to let it migrate into computer hardware seems like it might work. That approach would not require understanding how the brain's software works - we're replacing the hardware. This is a commonplace occurrence - we have software that emulates (for example) a Nintendo 64 game console on a PC which has totally different hardware. Once you have that working, you can run N64 games on your PC. The point being that you don't have to understand how the game works in order to do that. Same deal here - we don't need to 'decode' how the mind is working on the wetware of the brain - we simply need to emulate the brain on computer hardware (which I contend is fairly easy with 2000's technology if we only had the horsepower and capacity). The mind operates on the brain by rewiring it...we can't (easily) figure out how the wiring has been done (and it's different for each person and from one day to the next). Hence we can't 'scan' a brain into the computer - so allowing the mind to rewire our computerized brain-simulator and thereby migrate onto it is perhaps the easiest way forwards. I'm pretty sure it can be done - and if it CAN be done, it's pretty clear that sooner or later it WILL be done - because there are rich geeks who would like to live forever (and to fast-forward through the boring bits). SteveBaker 17:31, 15 September 2007 (UTC)
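(The doubling arithmetic in the post above, spelled out: a factor-of-a-million shortfall is about 20 doublings, and at one doubling per 18 months that is roughly 30 years.)

 import math

 shortfall = 1_000_000                    # factor by which capacity falls short
 doublings = math.log2(shortfall)         # about 19.9, i.e. "about 20 doublings"
 years = doublings * 18 / 12              # one doubling every 18 months
 print(f"{doublings:.1f} doublings, roughly {years:.0f} years")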
"That approach would not require understanding how the brain's software works - we're replacing the hardware." <-- Can you provide some links to reliable scientific sources that describe what "brain software" means? --JWSchmidt 18:17, 15 September 2007 (UTC)
I don't have to find references - I'm using the term linguistically to distinguish the cells and the chemicals from the higher level processes and emergent behavior. The term is a self-evident analogy with computer software. SteveBaker 20:03, 15 September 2007 (UTC)
A partial restatement of some of what has been said (largely by Steve and me), because this can be such a difficult concept to grasp that different wordings can be very helpful:
What is required is an interface between two intelligent systems, in this case the wetware and the hardware. But you don't need to know the details of how each system works. The interface should figure it out itself, based on intelligent info-organising rules (sorry about the bad terminology). There is some early work in this field, with pilots controlling airplanes by thought, although I don't think the way that is done is what is needed here. It's too specific. There's this Finnish professor (forgot the name) who is working on letting neural networks develop their own connections in response to whatever they are connected to in the outside world. Give both the brain and the computer such an interface and they should be able to develop meaningful communication.
Note that the mind is not transferred or uploaded (or downloaded or whatever). One might say it transfers itself, but really what happens is that a new, bigger, mind emerges, existing simultaneously in the wetware and in the hardware. When the wetware dies, the old person really does die, so it is not transferred. But the bigger new ego feels like he is the same person (which just had a minor stroke), which is then potentially immortal (of course, it can still be killed, just not as easily). DirkvdM 09:41, 16 September 2007 (UTC)

climatology

Are there any computer models/simulations available that illustrate what will happen to global climates if the Oceanic Belt ceased flowing? (Other than the "Little Ice-Age" during 1400 to 1850 A.D.). Has there been any government research into this possibility? email removed to prevent spam—Preceding unsigned comment added by 71.213.138.141 (talk) 03:12, 14 September 2007 (UTC)

If you're talking about ocean currents, there are a number of computer models to calculate the effects. This site contains a number of simplistic modelling programs listed under "Data Resources" and suggests there are more advanced (and presumably more accurate) models that are running today on supercomputers (so go become an oceanographer if you want access to one ;-) ). Someguy1221 03:21, 14 September 2007 (UTC)
You probably mean shutdown of thermohaline circulation. Not a very extensive article given the potential importance of the phenomenon (the gigantic consequences and the fact that it might actually happen in the next few decades, even though chances are small), but following the links might lead you to an answer. DirkvdM 07:03, 15 September 2007 (UTC)

Accurate orrery

Has anyone ever designed a computer-based orrery that is accurate in every respect? I have seen (both on-line and in software) computer models of the solar system, but without exception they always miss on at least one aspect (usually many). For example, they may show accurate orbits as viewed from above, but the planets are just solid spheres that don't rotate. Or there are problems with scale. I'm pretty sure with 21st-century technology it is possible to design an orrery that replicates everything on a zoomable scale — all of the orbital elements, planetary sizes and axial tilts, satellites in their proper sizes and positions (although that last might be too much to ask). Also, time elements such as orbital and rotation periods that are accurate relative to each other. Is there such a program / software / whatever? Or am I wrong in assuming that such is technologically possible? — Michael J 04:10, 14 September 2007 (UTC)

Something like celestia? I believe that's fairly accurate on the solar-system scale. Capuchin 07:39, 14 September 2007 (UTC)
I'd use Celestia; it's accurate for the next few thousand years, and it takes into account elliptical orbits, satellites, quite a few planetoids, asteroids and comets, and not only do planets rotate and orbit accurately (you can even plug in a specific date and time and see how the solar system will look on that date), but the planet surfaces are made from NASA photography and are hence pretty realistic looking (although on planets such as Mercury where photographic coverage is not complete, there are some blank grey areas...) And, it's free! (see links from article) Laïka 08:34, 14 September 2007 (UTC)
And there are a great many community add-ons too, some realistic ones adding space stations and new textures, some adding Death Stars and things... Capuchin 08:46, 14 September 2007 (UTC)

On a similar note, I was wondering if there is any freeware which can help in identifying the heavenly bodies one might see, based on the viewer's current location on Earth? Something more accurate than Google Earth 4.2. Vijeth 10:02, 14 September 2007 (UTC)

There are many many such programs to help with stargazing. I don't know the names of any of them though! Capuchin 10:06, 14 September 2007 (UTC)
Thanks, I will check it out on google. Oh yeah I removed the redundant "heavenly body in the SKY" part. How silly of me. Vijeth 10:19, 14 September 2007 (UTC)
There's Stellarium; that's the one I use. Laïka 11:49, 14 September 2007 (UTC)
Yes. The combination of Google Earth to go virtually to any location, Stellarium to see the sky from that location, and Celestia to travel to anything you see there is amazing. Alfrodull 21:17, 14 September 2007 (UTC)

Aha, Celestia. I'd never heard of it. I'll check it out. Thanks! — Michael J 10:44, 14 September 2007 (UTC)

The nice thing about Celestia (being open source) is that you can chat with the authors on their mailing list. So if there is a feature that you feel is missing then you can discuss it with them. If you have good ideas that are reasonable things to add to the package and that they have not yet come up with, the odds are good that someone will add them in some future release. Of course if you can program a computer you can also do it yourself and have the change become a part of the package into the future. SteveBaker 12:44, 14 September 2007 (UTC)
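(For a sense of how modest the "time elements accurate relative to each other" part of the original question is computationally, here is a toy Python sketch. It assumes circular, coplanar orbits and uses published orbital and rotation periods; a real orrery such as Celestia uses full Keplerian/ephemeris models, which this is not.)

 # Mean orbital angle and rotation phase after a given time are just
 # modular arithmetic on each body's periods.
 PERIODS = {                        # orbital period, rotation period (days)
     "Earth":   (365.256, 0.997),
     "Mars":    (686.980, 1.026),
     "Jupiter": (4332.59, 0.414),
 }

 def phases(days_since_epoch):
     """Return (orbital angle, rotation phase) in degrees for each body."""
     return {body: (days_since_epoch / orbit_d % 1 * 360,
                    days_since_epoch / spin_d % 1 * 360)
             for body, (orbit_d, spin_d) in PERIODS.items()}

 print(phases(1000.0))   # 1000 days after an arbitrary shared epoch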

Energy level of photon


Is there any energy level for a photon? If there is, then what is the lowest energy that a photon may have? What is the highest permissible energy that a photon may have? —Preceding unsigned comment added by Shamiul (talkcontribs) 05:10, 14 September 2007 (UTC)

No; photons may take any energy. The photon is essentially just energy: a zero-energy photon does not exist, but any energy above that, however slight, is possible. Neither is there a maximum theoretically, although there may be a practical maximum based on what produces photons. High energy photons can lose energy through Pair production, though I doubt this provides any upper limit. Cyta 07:36, 14 September 2007 (UTC)
A photon is a derivative of an electronic orbital's loss of energy, right? If this is true, then a photon only has a limited number of energy levels. In that case, what is the photon's lowest possible energy level? = The lowest energy loss that an electron orbital can make. Right? And because electronic orbital gaps are known, we should be able to say what the lowest energy level of a photon is. It would be a chemical question in this case. That said, we do have to go on and say that this answer is incomplete. Because 1) photon energy can be lowered or raised depending on the relative velocity between the photon emitter and the photon receiver. For example, if the relative velocity is high and divergent, the photon appears to have less energy. 2) In metals, a sea of electrons is formed by the nature of metallic bonding. And in this sea of electrons, minute electromagnetic energy changes happen in response to electric currents. This sea of electrons forms a surface holding electromagnetic tension. Photons are emitted as the tension subsides. So in this case, what is the photon's lowest possible energy level? = It depends on the electric current driving electrons through the metal conductor. InverseSubstance 19:18, 15 September 2007 (UTC)
In other words, you can generate photons anywhere along the electromagnetic spectrum. - 66.245.217.182 09:32, 16 September 2007 (UTC)
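(The formula behind both answers: a photon's energy is E = hf = hc/λ, so choosing a point on the electromagnetic spectrum chooses the energy. A quick Python illustration with a few example wavelengths:)

 H = 6.626e-34      # Planck constant, J*s
 C = 2.998e8        # speed of light, m/s
 EV = 1.602e-19     # joules per electronvolt

 for name, wavelength_m in [("FM radio (3 m)", 3.0),
                            ("visible red (700 nm)", 700e-9),
                            ("hard X-ray (0.01 nm)", 1e-11)]:
     energy_ev = H * C / wavelength_m / EV
     print(f"{name}: {energy_ev:.3g} eV")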

Snowy Tree Crickets

Does anybody know where snowy tree crickets, which obey Dolbear's Law, can be found? We have a stub on tree crickets, but they don't seem very snowy to me, and it doesn't indicate where they can be found! Capuchin 09:19, 14 September 2007 (UTC)

Ah, found this page which suggests they are found all over North America. It also suggests that the chirp rate is faster on the west coast than the east, while still being proportional to temperature. Maybe some kind of evolutionary change? Not sure I can quite see the driving force for it. (Not that I can see the driving force for it being proportional to temperature in the first place; it looks like it's probably a side-effect of some other change.) Capuchin 09:43, 14 September 2007 (UTC)
Oh, it suggests that they're faster in the west to distinguish them from other crickets. Can anyone think of a reason why it would be temperature-dependent in the first place? Capuchin 09:51, 14 September 2007 (UTC)
Speculation, but I always presumed that they are reliant on the resonance of some structures in their legs whose stiffness depends on temperature - or possibly that the sound carries better at some frequencies than others and that is dependent on air temperature. The east/west coast thing is almost certainly some structural difference in the two populations brought about by some evolutionary pressure or other. SteveBaker 12:40, 14 September 2007 (UTC)
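(For reference, Dolbear's law in its usual snowy-tree-cricket form — temperature in degrees Fahrenheit is 50 plus a quarter of (chirps per minute minus 40) — as a small Python sketch:)

 def dolbear_temperature_f(chirps_per_minute):
     """Estimated air temperature (F) from a snowy tree cricket's chirp rate."""
     return 50 + (chirps_per_minute - 40) / 4

 for rate in (80, 120, 160):
     print(f"{rate} chirps/min -> about {dolbear_temperature_f(rate):.0f} F")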

science

How much time does it take light to reach the Earth from the Sun? —Preceding unsigned comment added by 59.95.10.87 (talk) 11:18, 14 September 2007 (UTC)

For the speed of light, see Speed of light. For the distance from the Earth to the Sun, see Earth. For a formula to convert these figures into a length of time, see Speed. Capuchin 11:29, 14 September 2007 (UTC)
Or alternatively: click here. Unless you want to work it out yourself with the information I gave above. Capuchin 11:57, 14 September 2007 (UTC)
The answer is: about eight minutes. --Taraborn 12:22, 14 September 2007 (UTC)
Or a bit more if it is light reflected from the Moon. And for the nearest star it is just over four years and for the most distant star it should be about 14 billion years (when we get to see it and assuming the Big Bang theory is correct). DirkvdM 07:15, 15 September 2007 (UTC)
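(The arithmetic behind these answers, in Python, with round figures for the distances:)

 C = 299_792_458            # speed of light, m/s
 AU = 1.496e11              # mean Sun-Earth distance, m
 MOON = 3.84e8              # mean Earth-Moon distance, m

 print(f"Sun to Earth: {AU / C / 60:.1f} minutes")              # about 8.3 minutes
 print(f"extra leg via the Moon: {MOON / C:.1f} seconds more")  # about 1.3 seconds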

Type II supernovae

Request for help from anyone with access to definitive and current resources.

Sources in the Supernova article vary; some say that specific burns take a day, some that they take 2 weeks. These sources are all credible - academic papers, online university lecture notes.

Can someone do some research and figure out what the current scientific consensus is on this? Thanks!

See Talk:Supernova#Type_II_timing_contradiction.3F

FT2 (Talk | email) 11:58, 14 September 2007 (UTC)

iguanas

How fast does an iguana's heart beat? Hapshepsut 16:07, 14 September 2007 (UTC)

The article "The relationship between heart rate and rate of oxygen consumption in Galapagos marine iguanas (Amblyrhynchus cristatus) at two different temperatures" (catchy title!) in The Journal of Experimental Biology says it's between 32 and 106 beats per minute depending on how warm the creature is (remember they are cold-blooded) and on whether they are resting or exercising. SteveBaker 17:07, 14 September 2007 (UTC)

US PATENT

How can I find out if a specific US patent number is in force and whether the renewal annuities or maintenance fees have been paid for that patent? —Preceding unsigned comment added by 69.39.135.6 (talk) 16:13, 14 September 2007 (UTC)

There are lots of online patent search services - I used to like the one IBM ran - but it's been taken over by Delphion and it's not free anymore. But you can also use the US Patent Office search or even Google Patents - those should say when the patent expires. SteveBaker 17:02, 14 September 2007 (UTC)
Google Patents is excellent. It does not say when a patent expires, though. It will tell you when it was issued. From there you can figure out if it is more than any possible renewal can extend (14+ years). As for more than that, I would check with USPTO, I am sure they have information on that somewhere. --24.147.86.187 00:52, 15 September 2007 (UTC)
Try the USPTO's Public PAIR at [[1]]. It doesn't directly answer the question whether a patent is in force, but it does provide such information as filing date, issue date, patent term adjustment, and maintenance fee payments. --Anonymous 14:19, 15 September 2007 (UTC) —Preceding unsigned comment added by 71.175.68.224 (talk)

Why do cells divide?

Why do cells in the human body divide? After the sperm fertilized the egg, and when we were still a unicellular organism, why doesn't that one cell (us) just get bigger, instead of dividing? Is it because the larger the cell, the fewer nutrients will reach the core of the cell? Thanks. Acceptable 23:56, 14 September 2007 (UTC)

See Surface area to volume ratio. Basically, the volume increases as the cube (x³) while the surface area increases as the square (x²). Generally, the smaller the cell is, the faster things diffuse. If the cell were simply to grow larger, eventually it would grow so large that a) its volume would be too big for the uterus and b) diffusion would be too slow to sustain the cell. There is a certain point where cells are at an optimum surface area to volume ratio. After that, they must divide. bibliomaniac15 15 years of trouble and general madness 00:14, 15 September 2007 (UTC)
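(The cube-versus-square point as a quick Python check for an idealised spherical cell — surface area available per unit of volume falls off as 1/r:)

 from math import pi

 for r_um in (1, 5, 10, 50, 100):          # cell radius in micrometres
     area = 4 * pi * r_um ** 2             # surface area grows as r^2
     volume = 4 / 3 * pi * r_um ** 3       # volume grows as r^3
     print(f"r = {r_um:3d} um  surface/volume = {area / volume:.3f} per um")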
There are special conditions under which individual cells can be large, such as when producing egg cells, but most large cells form a multi-nucleated syncytium. In metabolically active cells, there are probably constraints on size due to several effects, including the limits due to diffusion and the limits imposed by the number of genes in a chromosome set and the needed level of production of RNA. Also, when forming most structurally large tissues there are probably advantages to having either small replaceable components or small information processing units (nervous system). A few cells get around the RNA production bottleneck by generating special amplified chromosome sets. A great advantage of multicellular life comes from cellular differentiation, which is facilitated by the chromosomes inside nearby cells being in isolated compartments where different sets of transcription factors and other regulators of gene expression can specialize, resulting in the production of specialized cell types. --JWSchmidt 00:35, 15 September 2007 (UTC)
To steal another poster's very cool answer from a few days back, take a look at caulerpa. It looks like a full-sized plant, and it's all one giant cell! Wacky! --Reuben 00:51, 15 September 2007 (UTC)