Wikipedia:Reference desk/Archives/Science/2007 November 16
From Wikipedia, the free encyclopedia
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
November 16
Theory of Everything Solved?
There's this beautiful pattern called an E8 polytope discovered a century ago that is "the highest finite semiregular figure" known to exist. Well, some surfer dude in Hawaii apparently resolved all 4 forces of nature into a matrix of an E8 design. In a 3-sentence paper entitled "An Exceptionally Simple Theory of Everything," he states:
"All fields of the standard model and gravity are unified as an E8 principal bundle connection. A non-compact real form of the E8 Lie algebra has G2 and F4 subalgebras which break down to strong su(3), electroweak su(2) x u(1), gravitational so(3,1), the frame-Higgs, and three generations of fermions related by triality. The interactions and dynamics of these 1-form and Grassmann valued parts of an E8 superconnection are described by the curvature and action over a four dimensional base manifold."
From the article I read, these 3 sentences beat out string theory hands down as a workable TOE. I'd like to know what the science desk thinks about surfer-dude as the next Einstein. Sappysap 00:52, 16 November 2007 (UTC)
- If it is indeed right it has a lot of implications. For example, there could not possibly be any smaller particles, which would put an end to the debate about particles being broken down into smaller ones for ever and ever ...--Dacium 01:05, 16 November 2007 (UTC)
- There is really no shortage of theories - but they have to predict something that we can test experimentally or it's just messing around with math. String theory basically describes the universe fairly well - but it's not testable. Remember - relativity wasn't initially taken at all seriously. Its initial publication made scarcely a ripple in the world of science. Fortunately, it predicted the amount that the light from a distant star would be bent as it passed close to the sun - and an experiment (during a solar eclipse) several years later showed that the amount of deflection was exactly what the theory predicted. Within mere weeks of the experimental results being made public, Einstein and his theory were transformed from almost complete unknowns into superstardom. Einstein's genius lay partly in the fact that even as a purely theoretical physicist, he was nonetheless able to end each of his papers with suggestions for experiments that could be performed to test them. Without testability, you don't have falsifiability - and a theory without falsifiability is useless to us. That's why string theory is in so much trouble. But it's early days yet - let's see where this goes. SteveBaker 01:10, 16 November 2007 (UTC)
- Steve, you're a great all-around science guy, but you're not a great historian! Relativity did make quite a ripple even before 1919 amongst the European (specifically German) physics community (and if you think that is limited, remember that at the time that was the theoretical community of preeminence in such things), and Einstein, despite his quasi-outsider status, was taken up as being extremely significant by none other than Max Planck (who is probably responsible for relativity having been taken seriously at all at the time). Now 1919 did change things, but don't let its overwhelming popular attention make you think it hadn't previously garnered specialist attention! (And yes, Einstein's primary physics work was "purely theoretical", but in his life he spent quite a lot of time dealing with the practical and the empirical. He wasn't a patent clerk specializing in electrical engineering for nothing! And it's not a coincidence that synchronization of clocks by electrical signals plays a major role as a "thought experiment" in his early papers—it was one of the hottest practical technical problems of the day, and dozens of patents on the subject passed his desk while he worked in Bern.) --24.147.86.187 04:34, 16 November 2007 (UTC)
- Wikipedia's article on the theory is An Exceptionally Simple Theory of Everything. The theory predicts 20 new particles which have not been observed, so if the particles actually wind up being observed, that would be a major confirmation of the theory. Of course, if the particles are never observed after much effort, then the theory won't look so hot. So the theory is essentially falsifiable, which is a very good thing. MrRedact 01:21, 16 November 2007 (UTC)
- Both string theory and this simple theory claim that something nearly magical will happen when the Large Hadron Collider goes into operation. The hope is that the tests in the collider will somehow prove or disprove some of these theories. Luckily, my theory that there is no such thing as gravity is in no way dependent on any tests in the collider. -- kainaw™ 02:20, 16 November 2007 (UTC)
- It's a 30 page paper [1], you have only reproduced the 3 sentence abstract. Also, he has the benefit of a PhD in particle physics, even if he hasn't been using it lately, which goes against his Einstein-like talented outsider cred. More significantly than what is mentioned above, this theory should allow the masses of all particles to be calculated from first principles (provided you can handle the 200-dimensional complex calculus of variations, which the present paper hasn't even attempted). It is a highly non-trivial calculation, but it should be possible to determine directly whether this theory is consistent with the families of particles we observe. (The catch is that there may be alternative ways of embedding physics in E8 that would also provide a theory of everything but allow masses to be tuned.) Dragons flight 09:14, 16 November 2007 (UTC)
- Einstein was not an outsider. He got an undergraduate physics degree in 1900 and completed a PhD in 1905. "Patent clerk" sounds like unskilled clerical work, but in fact he was a patent examiner, i.e. he evaluated the technical merit of patent applications.
- As for the paper, the only sensible thing to do (for those of us unqualified to evaluate it) is to wait and see what the community makes of it. Lee Smolin's endorsement does carry some weight. I'm a bit suspicious of the paper because it's exactly what most physicists would expect a unified theory to look like. People have been looking for exactly this sort of thing for decades; they've even been paying special attention to E8 at least as far back as 1984. This isn't like special relativity, where a conceptual breakthrough was needed; if this is right, then people have known what to look for and where to look for it for decades, and I have to wonder why it took so long to find. But I hope it's right, of course. -- BenRG 15:30, 16 November 2007 (UTC)
- The real difference between this and special relativity is that the world wasn't really looking for a theory - they had the Aether and they were quite happy with it...except for the annoying little detail that nobody had been able to find the stuff, even with some rather clever experiments. But they weren't generally looking for a new theory - just a clever experiment to prove what they already thought they knew. Einstein had to come up with something from cold, hard thinking. That was incredible. His theory explained why existing UNEXPLAINED experiments came out the way they did - and also made testable predictions.
- But this is a lot different. In this case, most physicists firmly believe that there must be a grand unified field theory - we just haven't found it yet. But the universe does seem to work OK without one. Experiments seem to come out the way we expect them to - although it would be nice to fix up the rough spots between relativity and quantum mechanics. It's really just a messy leftover that there isn't just one field that explains the whole thing. So LOTS of people have been looking around for a theory that might fit (and there are many theories out there that sound reasonable). But there isn't really an experimental flaw in our separate field theories that this "fixes" - it doesn't explain any anomalous results we're getting. So we won't know whether this is a breakthrough or not until we have a clear set of predictions and a do-able experiment to test them. Still, the prediction of 20 new particles - some at least of which we should be able to see in equipment we'll someday have running - now that's something you can take to the bank!
- -- SteveBaker (talk) 22:00, 16 November 2007 (UTC)
- Well, the aether theory was the theory of everything of its day! It was not conservative, it was revolutionary! They had unified all physics through the idea of an electromagnetic aether! They were still looking to figure out exactly how it worked, but it was the TOE of its day! And not finding the stuff was not that huge a deal for most of the physicists; Lorentz came up with some very snappy equations to understand why Michelson-Morley would be null, and those equations turned out to be useful and true even without the aether! The Lorentz contraction was born out of work to preserve the aether, not relativity! Einstein's theory didn't explain Michelson-Morley in a rigorous way—he just thought the idea of an aether was superfluous, but SR doesn't explain why there can't be an aether. --24.147.86.187 (talk) 00:48, 17 November 2007 (UTC)
- Special relativity says that there is no preferred frame of reference. Since an Aether would have its own frame of reference that would certainly be pretty much 'preferred', relativity blew away the idea of the Aether. In fact, later, after General Relativity was dealt with, Einstein worried about the "Newton's bucket" thought experiment (does the water in a rotating bucket push up the sides if it is the only thing in the Universe). He alternated between the view that the presence of all the other matter in the universe would be the only thing to let the bucket know that it was rotating...and the view that in fact he might be prepared to admit the existence of a different kind of Aether that would provide an absolute frame of reference for rotating objects. SteveBaker (talk) 17:29, 17 November 2007 (UTC)
Vanadium
What would be the effects of injecting Vanadium into your bloodstream? --24.58.159.152 02:09, 16 November 2007 (UTC)
- See Vanadium for currently known toxicity factors. As far as I know, there are no published studies on the toxicity of injecting it directly into the bloodstream of humans. Even with lab rats, the concern is contamination through ingesting and inhaling vanadium compounds and ions. -- kainaw™ 02:14, 16 November 2007 (UTC)
- Vanadium metal is solid at room temperature, so you can't really inject it. The effect of injecting some compound of vanadium would depend on the specific compound. —Keenan Pepper 23:43, 16 November 2007 (UTC)
Burning track in the sky
I've just seen the trail of a burning object in the sky. I'm pretty sure that it was slow, close and scattered enough to exclude the possibility of being a shooting star. What was it? An old satellite entering the atmosphere? Along with that amazing vision I also saw a Renault Clio that my father sold 6 years ago 30 miles away from my hometown here in Portugal, it was a night full of weird happenings. I'm a bit drunk, but I swear that I've seen that. I also swear that I'm not on drugs. Cheers! I hope you help me with the burning object thing. 217.129.241.186 02:51, 16 November 2007 (UTC) (Portugal)
- Meteors can travel pretty slowly - distances are hard to judge when you're looking straight up - meteors do break up and scatter. I think you were lucky enough to see a pretty spectacular meteor. "Shooting star" is a little unscientific for the science desk! The correct terminology for these things is messy. When you see them a long way outside the earth's atmosphere they are "Asteroids", when they enter the atmosphere, they are "Meteors" and if they hit the ground - they are "Meteorites"...all exactly the same thing though. SteveBaker 04:16, 16 November 2007 (UTC)
- I thought they were called meteoroids before they got here. --Tardis (talk) 20:13, 16 November 2007 (UTC)
- Perhaps you saw Comet 17P/Holmes? I can't help with the Renault thing ... --LarryMac | Talk 17:04, 16 November 2007 (UTC)
- Thanks! I don't really need help with the Renault thing. It was just an amazing coincidence. About the comet, I don't think it was a comet, it was more like a burning thing that lasted for 3 or 4 seconds. 84.90.44.231 (talk) 11:04, 17 November 2007 (UTC)
- Hi. Was it by any chance an iridium flare? They look like a very bright star, lasting for a few seconds, moving slowly across the sky. I've seen a few of those before. Inexperienced people may mistake one for a bright meteor, but I've seen a meteor before so I can tell the difference. Use this website to help you find iridium flares. Some iridium flares can, in rare cases, be as bright as a crescent moon, but look much smaller. Perhaps that was what you saw? Hope this helps. Thanks. ~AH1(TCU) 15:20, 17 November 2007 (UTC)
- Thanks, but it was really burning. Like something disintegrating and burning during its flight.
- Aren't the Leonids slow-moving? Delmlsfan (talk) 01:32, 18 November 2007 (UTC)
Can you tell if a movie was made in colour if you watch it on a b/w TV set?
I often hear people say, of black and white movies that have been colorized and then presented on TV: “If you don’t like the idea, then just turn down the colour”. When I was a kid, my family owned a b/w TV, and I always thought that I could tell the difference between a b/w movie and one that had been made in colour, even when I had not seen the movies earlier. The colour movie/TV series looked fuzzy and unclear on a b/w set, and the more saturated the colour was, the fuzzier and murkier it looked. On the other hand, the images from b/w movies looked sharp, pure and distinct on the same TV. As this is the Science Desk, I will suggest a line of research. Take an original print of some b/w classic like “Casablanca” and the recent colourized version of said film. Now select a scene from the b/w version and show it on a colour TV set. On another IDENTICAL TV set, show the same scene from the colourized version, but with the colour turned off. Now, do these images look the same? Could an observer who doesn’t know which is which tell one from the other? Has anything like this ever been done, and does anybody else think they can tell the difference between b/w and colourized b/w with the colour turned down? I reckon I can, and I will not watch such shows for that reason. Myles325a 03:07, 16 November 2007 (UTC)
- Experiments? Here on the science desk? Heresy!
- But seriously. Yes, you can kinda tell. The transmission format for colour TV was intended to be backwards compatible with black and white TV - but it wasn't 100% perfectly correct. The easiest way to see this is in a set of colour bars - the ones that go through all 8 fully saturated colours (Red, Green, Blue, Cyan, Magenta, Yellow, Black, White). In most versions of that pattern, the Green and Magenta bars are adjacent. If you look closely, you'll see a line down between the two...which is utterly not there in the original colour signal - it's an artifact of the way colour is encoded.
- No, this is an artifact of the color electron guns and pixel spots not lining up perfectly in your TV. Nothing to do with the signal. On a digital display you won't see this at all.--Dacium 05:42, 16 November 2007 (UTC)
- (For the technically minded: Colour is encoded as two colour difference signals called 'Y' and 'U' - which is encoded in the phase of the colour carrier signal - which is too high in frequency for a black and white TV to display. However, when the phase abruptly changes by 180 degrees, the instantaneous frequency briefly halves and then the black and white ('V') part of the signal catches a glimpse of the colour carrier and displays either a dark or a light line. A US-style NTSC TV shows either a dark or a light vertical line between some of the colour bars. A UK-style PAL TV shows this as a wobbly vertical line because PAL (which stands for Phase Alternate Line encoding or thereabouts) alternates the phase of the encoding on alternate lines - which accounts for the better colour quality on PAL TV's than NTSC ("Never Twice Same Colour"!).
- So, to cut to the chase, the deal is that while a pure colour doesn't mess up the monochrome signal, an abrupt change in colour can. So, in theory, with the right kind of picture (such as a test-card), you could tell the difference between a true monochrome TV signal and a colour TV signal - even on a black and white TV. I guess turning down the colour saturation on a colour signal would produce a similar result.
- SteveBaker 04:34, 16 November 2007 (UTC)
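(Illustrative aside: a rough numerical sketch of the artifact described above, under simplifying assumptions: an idealized composite scanline whose chroma subcarrier flips phase by 180 degrees at a colour boundary, and a monochrome set crudely modelled as a one-subcarrier-period moving average. The sample counts and amplitudes are made-up illustrative values, not real NTSC parameters.)
```python
import numpy as np

# Idealized composite scanline: constant luma plus a chroma subcarrier whose
# phase flips by 180 degrees halfway along the line (e.g. a green/magenta edge).
samples_per_cycle = 100
x = np.arange(40 * samples_per_cycle)
luma = 0.5
phase = np.where(x < x.size // 2, 0.0, np.pi)
composite = luma + 0.3 * np.sin(2 * np.pi * x / samples_per_cycle + phase)

# Crude model of what a band-limited monochrome set shows: a moving average
# over one subcarrier period. Away from the edge the chroma averages to ~0.
kernel = np.ones(samples_per_cycle) / samples_per_cycle
displayed = np.convolve(composite, kernel, mode="same")

print(displayed[x.size // 4])   # ~0.5: plain luma, chroma invisible
print(displayed[x.size // 2])   # well below 0.5: a dark line right at the colour transition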
- You are saying that a change in colour will cause a change in luminance? Color is sent at a completely independent point in time during what is called the color burst. It is a burst of high frequency data that black and white TV's do not expect and cannot read. After the colour burst comes the luminance for the line, which comes at a much slower pace than the colour burst. Which is why TV's are backwards compatible. If you take a colour TV and ignore the color (i.e. the colour burst) the picture is EXACTLY the same as a black and white TV. This has nothing to do with what the original poster was asking however. Once a movie is re-made into colour, the luminance values will change compared to the original luminance values of the black and white movie. This makes it look different. Some movies keep the original luminance intact so you cannot tell the difference. If you plug a test card into a B&W TV and I plug one into a colour TV and turn the colour burst off, you will not see any difference 'due to abrupt change in colour'--Dacium 05:34, 16 November 2007 (UTC)
- You're completely wrong about the colour burst. All that does is to provide a zero degree phase sample of the colour subcarrier. The actual colour data is transmitted along with the monochrome video - but modulated with a carrier wave that's too high in frequency for monochrome TV's to detect. The remainder of your reply (being predicated on that wrong assumption) is also wrong. Remember that electronics were pretty primitive when colour TV's first appeared and there is no way that they could have delayed the decoded colour information from some packed-up form at the start of the scanline to resynch it with the video. At any rate, if you do as I suggest and take a close look at some colour bars on NTSC or PAL, you'll see the artifact I describe and you will thereby be convinced! SteveBaker (talk) 17:15, 17 November 2007 (UTC)
- SteveBaker, from what I understand, NTSC color TV mainly uses YIQ, instead of YUV, with the "I" and "Q" in the color subcarrier, and the "Y" (luminance) is the only channel used by black and white TVs. Also, Myles325a, regarding the original research suggestion, that wouldn't work. In theory, a black and white program should only have changes in Y, with I and Q being neutral. However, colorized images will have a different Y than un-colorized images, since the Y value affects and is affected by the I and Q values. You would see a difference between a black and white version and a colorized version of the same video because the luminance would be different. Now, a simpler explanation for the difference you saw in color vs. black and white video on your TV could be due to a problem in your TV where some of the I and/or Q signal slopped over into the Y signal, which wouldn't happen with black and white video, since there effectively isn't an I or Q signal. (Note: This is somewhat guesswork on my part. I have some knowledge of colorspaces, but I'm no expert on electronics.) -- HiEv 16:42, 16 November 2007 (UTC)
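(Illustrative aside: a small sketch of the point about Y. The NTSC luma weights below are standard, but the example pixel values are invented, and whether a particular colourization process preserves Y exactly is an assumption.)
```python
# Y is the only part of the signal a black-and-white set displays.
def luma(r, g, b):
    """NTSC luma: Y = 0.299*R + 0.587*G + 0.114*B, with R, G, B in 0..1."""
    return 0.299 * r + 0.587 * g + 0.114 * b

original_grey = 0.50                # a mid-grey pixel in the original b/w print
colorized_rgb = (0.66, 0.45, 0.40)  # a colour a colourizer might paint over it
print(luma(*colorized_rgb))         # ~0.507: close to, but not exactly, the original grey
```
If the colourizer does not constrain Y to match the source frame exactly, the picture a black-and-white set reconstructs will differ slightly from the original print.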
- YIQ and YUV are substantially the same thing - they are both colour difference systems - but with slightly different formulations. You're right though - it's the 'Y' that's the luminance - not the 'V' - my bad! I was getting confused with the HSV colour system where 'V' is luminance. SteveBaker (talk) 17:15, 17 November 2007 (UTC)
- Snooker on an old set? And for those of you watching in black and white, the pink is next to the green. Lanfear's Bane | t 11:10, 16 November 2007 (UTC)
- In my experience, yes, one can tell the difference between a colour and a black & white film when watching on a b&w set. As the OP said, contrast is higher on a b&w original. I suspect partly that this is because of differences in lighting for filming in b&w and filming in colour - when filming in b&w the cameraman must "think" in terms of contrast, when filming in colour he "thinks" in terms of colour, and adjusts the lighting accordingly. DuncanHill 11:26, 16 November 2007 (UTC)
Solar Energy Radiation
How much energy does the Sun radiate per second? I looked on the Sun page, a few other likely pages, and Google and couldn't find any answer. There does seem to be some info on how much energy Earth picks up from the Sun, but this represents only a fraction of its output. Thanks! --pie4all88 03:30, 16 November 2007 (UTC)
- It's 3.846×10^26 Joules per second, also known as 3.846×10^26 Watts. It's listed in the Sun article, under "Physical characteristics" in the box on the right. MrRedact 03:58, 16 November 2007 (UTC)
- Oh! Thanks; my mistake. I didn't realize luminosity referred to this. Thanks again! --pie4all88 06:30, 16 November 2007 (UTC)
- It's about 3% more if you also count neutrinos; see proton-proton chain reaction for details. And there's also the solar wind carrying away about 10^9 kg/s (equivalent to about 10^26 W, about a quarter of the mass loss due to electromagnetic and neutrino radiation). Icek 07:58, 16 November 2007 (UTC)
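(Illustrative aside: two quick order-of-magnitude checks of the figures above. The solar constant of roughly 1361 W/m^2 and the mean Earth-Sun distance are standard values, not taken from this thread.)
```latex
L_\odot \approx 4\pi d^2 S = 4\pi\,(1.496\times10^{11}\,\mathrm{m})^2\,(1361\,\mathrm{W/m^2}) \approx 3.8\times10^{26}\,\mathrm{W}
```
```latex
P_{\mathrm{wind}} \approx \dot m c^2 = (10^{9}\,\mathrm{kg/s})\,(3\times10^{8}\,\mathrm{m/s})^2 \approx 9\times10^{25}\,\mathrm{W}
```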
Chemistry tool
What is the name of this tool used in chemistry? Picture [2] —Preceding unsigned comment added by 71.98.111.198 (talk) 04:05, 16 November 2007 (UTC)
- It's called a retort. —Steve Summit (talk) 04:08, 16 November 2007 (UTC)
- ...and what you use it for is to boil a liquid in the bottom of the spherical part - so that the vapor rises up to the top. The liquid then condenses onto the colder glass at the top - and runs down into the spout so you can collect it. This can be used for distilling purer liquids from impure ones. SteveBaker 04:39, 16 November 2007 (UTC)
- Nice, errr, retorts, gentlemen. Rockpocket 07:43, 16 November 2007 (UTC)
Genetic Algorithms
Does anyone know how genetic algorithms can be used for multivariable function optimization? I searched on the internet but couldn't find much help. —Preceding unsigned comment added by 202.83.169.98 (talk) 06:43, 16 November 2007 (UTC)
- Do you have any constraints on the form/representation of the function? If not then one has to know the optimization problem in order to know which representation might be useful. Icek 08:10, 16 November 2007 (UTC)
- Genetic algorithms work in a variety of ways. For example, if you have a problem with a "hilly landscape", meaning that as you approach an optimal solution your results get better, then you can use genetic algorithms as a hill climbing mechanism. To implement this, create a "seed" population with random values for each of the variables. Those values will be each individual's "genes". Then, test all of the population to find the individuals that "score the best" (most closely resemble the solution you're searching for.) Then "breed" your population by mixing the genes of the top performers to create new individuals (replacing the poor performers), perhaps with slight "mutations" (minor changes to the value of a few variables) to some offspring. Then, rinse and repeat the testing and breeding steps, usually many thousands of times, until you either find a solution or have stopped seeing improvements for a number of generations. Please be aware, however, that you may get stuck in a local maximum if there are many hills and the population is too small. Because of that you may want to run the algorithm from scratch a couple of times to see if you get consistent results. Anyways, that's a simplified description of one way to do it, but there are many others, and the best method(s) will depend on your particular problem. -- HiEv 15:34, 16 November 2007 (UTC)
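(Illustrative aside: a minimal sketch of the loop described above. The function name, population size, mutation settings and seeding range are all made-up illustrative choices, not a recommended implementation.)
```python
import random

def evolve(fitness, n_genes, pop_size=200, generations=500,
           mutation_rate=0.1, mutation_scale=0.5):
    """Maximize `fitness` over real-valued genomes of length n_genes."""
    # Seed population: random genomes (each "gene" is just a float here).
    pop = [[random.uniform(-50.0, 50.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # best scorers first
        parents = pop[: pop_size // 2]             # keep the top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes) if n_genes > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            for i in range(n_genes):               # occasional small mutations
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0.0, mutation_scale)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy run: climb towards the maximum of -(x - 3)^2, i.e. x = 3.
best = evolve(lambda genome: -(genome[0] - 3.0) ** 2, n_genes=1)
print(best)   # should be close to [3.0]
```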
- My experience in choosing genetic algorithms to solve problems is that the "fitness test" decides the algorithm. Obviously, you have to model your solution somehow. Then, you have to have a fitness test to decide how well the solution works. It can be very difficult to develop a fitness test that works. Once you like your fitness test, you have a good understanding of how to genetically develop the solutions. -- kainaw™ 15:43, 16 November 2007 (UTC)
- Yeah, and you have to be careful with your fitness tests because genetic algorithms are often quite good at finding "creative" ways of taking advantage of the system to get good scores. It may, for example, take advantage of the fact that one solution is more common than others, and simply evolve to give that solution all of the time. -- HiEv 17:02, 16 November 2007 (UTC)
- Heehee. For brevity, I left out the example I use each time I teach genetic algorithm class. When I took my first GA class many years ago, we had to choose something simple to solve with a GA. I decided to make a calculator that took two single digit numbers as input and added them together. My fitness test was the absolute distance from the proper sum of the two input numbers. After a mere 100 generations, the little guys all started answering 9. I tried it again - same result. I tried it again - same result. Then, I realized that 9 was the average answer, so answering 9 all the time was an easy way to score well on the fitness test. -- kainaw™ 17:06, 16 November 2007 (UTC)
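(Illustrative aside: a quick check of why the constant answer 9 scores so well under that fitness test; this is a throwaway verification, not the original classroom code.)
```python
# Mean absolute error of every constant guess over all pairs of single digits.
# The sums 0..18 are distributed symmetrically around 9, so always answering 9
# is the best a constant (non-adding) strategy can do.
errors = {guess: sum(abs((a + b) - guess) for a in range(10) for b in range(10)) / 100
          for guess in range(19)}
print(min(errors, key=errors.get))   # 9
```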
Guys, guys.....don't mean to sound rude but I asked about optimization of multivariable functions. Everything you have told me, I already knew. I used a GA to optimize the following function, f(x) = 10*sinc(x-10)-3*sinc(x-40)+5*sinc(x-30)+sin(3*x). I had a few problems at first. Like HiEv pointed out, my program got locked into a local maximum. In fact, since the initial population was random, the results of my program were random too. I had a 1 in 20 chance of reaching the global maximum. But I managed to fix that, by introducing mutations. Before I was using uniform mutations, but with multi non-uniform mutations, I was able to better my odds. Now the chances of ending up at a local maximum are only 1 in 10.
But all this was a learning process. My ultimate task is to optimize multi-variable unseen functions. But before that I need to know how to optimize multi-variable functions. And that's where I am stuck. I can't find help anywhere. Would you know of any link or a book I could consult?
Again I really appreciate the time you guys took to write. Thx. —Preceding unsigned comment added by 202.83.169.98 (talk) 09:44, November 17, 2007 (UTC)
- It appears you are asking for a basic hill climbing algorithm. Did you check that article already? -- kainaw™ 14:00, 17 November 2007 (UTC)
- Did you try using a larger population like I suggested above? If you explain more specifically how you're attempting to find a solution, what you've tried, and some of your results (number of variables? range of values? population size? how soon until evolution plateaus on average? etc.) then we might be better able to give you some more specific answers. -- HiEv 22:17, 17 November 2007 (UTC)
- To create an EA optimizing a function using n variables, you basically create individuals consisting of a tuple of n numbers, one for each variable. If you have for example the sphere function f(x, y, z) = sqrt(x^2 + y^2 + z^2), an individual might be (3, 4, 0) and its fitness value is 5. The optimal point is (0, 0, 0) with a value of 0. To mutate an individual you can mutate every element of the individual, either independently or, if you like, using for example a covariance matrix or some other form of coupling. For a start stick to the simple method of independently changing each number.
- I really don't understand the rest of your question. You use a GA to optimize a single variable function? What is a crossover operation in that case? Perhaps you should define your current algorithm for a start; it is most likely not a GA. —Preceding unsigned comment added by 84.187.113.230 (talk) 14:40, November 18, 2007 (UTC)
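(Illustrative aside: to make the n-variable case concrete, the sphere-function example above can be dropped straight into the evolve() helper sketched earlier in this thread; that helper name and its behaviour are assumptions carried over from that sketch, and nothing about the GA itself changes except the genome length.)
```python
import math

# Minimize f(x, y, z) = sqrt(x^2 + y^2 + z^2) by maximizing its negation.
# The genome is simply the list [x, y, z].
def sphere(genome):
    return math.sqrt(sum(v * v for v in genome))

best = evolve(lambda genome: -sphere(genome), n_genes=3)
print(best, sphere(best))   # should end up near [0, 0, 0] with a value near 0
```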
Heinz Tomato Ketchup
In the old style bottle of tomato ketchup if you did not shake the bottle before use you would get a small amount of runny 'tomato sauce water'. I had always assumed that it was some sort of colloid and that a little separation had occurred at the top of the bottle between uses as it sat in my fridge. However, the new bottles are 'upside down' and dispense from the bottom. Normally I would shake the bottle as a matter of routine, however yesterday evening I forgot and was surprised to see some of the aforementioned 'sauce water'. The sauce is stored dispenser side down in my fridge and dispenses from the bottom, unlike the old bottle which had the dispenser on the top and was stored dispenser side up. Am I misunderstanding the process here or does separation occur at the top and bottom of the bottle? (I was adding it to the side of some Tesco bacon and leek quiche for the curious, hardly sophisticated but terribly tasty after a minute in the microwave). Lanfear's Bane | t 11:07, 16 November 2007 (UTC)
- Ketchup is a "thixotropic" substance; it has a very high viscosity if you don't exert a stress on it - much higher than water (especially when cold, as it would be straight out of the fridge). In addition to this, ketchup is also much denser than water. As a result, when left to settle, you will get some separation at the top where the liquid floats on top of the sedimentary colloid, but also some at the bottom: because the ketchup is now acting almost like a solid, it can't fit down into the funnel-shaped part near the cap, so there is a small space between the lid and the ketchup. Excess water will then drip down into this cavity. The same effect wouldn't happen to the same extent with old bottles because the "bottom" of these bottles was practically cylindrical (although it probably would if you held them upside down). Laïka 17:45, 16 November 2007 (UTC)
- It is, but in my personal experience, it's not very effective (perhaps I'm not screwing the lid on tight enough) - it's a pain getting the slightly slimy water all over your hand every time you use it! Laïka 01:03, 17 November 2007 (UTC)
- Thank you for those excellent answers. Lanfear's Bane | t 16:52, 19 November 2007 (UTC)
A dog in motion
What is the physics law "a dog in motion stays in motion"? --124.254.77.148 12:04, 16 November 2007 (UTC)
- Could be a garbled or intentionally tongue-in-cheek version of Newton's first law of motion, which states "An object will remain at rest, or continue to move at a constant velocity, unless a resultant force acts on it". Gandalf61 13:08, 16 November 2007 (UTC)
- That would be Newfoundland's first law. --Milkbreath 13:31, 16 November 2007 (UTC)
- :-) Of course it should be pointed out that dogs do tend to be acted on by all sorts of external forces. Dogs have their own grand unified field theory in which all forces may basically be resolved into one force that is precisely opposed by the tension in a collar and leash. Also, I have observed that the force of attraction of a cool tiled kitchen floor is generally enough to reduce my dog's velocity to near zero on most summer days! -- SteveBaker (talk) 21:36, 16 November 2007 (UTC)
Universe ending next May?
Somebody above mentioned the Large Hadron Collider scheduled to go into operation next May and my jaw dropped when I read the "Safety concerns and assurances" section. I can think of 3 holes in CERN's argument that the probability "the LHC might trigger one of several theoretical disasters capable of destroying the Earth or even the entire Universe" is extremely small.
- 1. Though the standard model may predict that "LHC energies are far too low to create black holes," the whole point of the collider is to fill in the gaping holes of the standard model. Not a sound argument.
- 2. Cosmic rays may be millions of times more energetic, but the plan isn't to send single protons to collide. The bundles sent at blistering speeds will contain thousands of protons.
Are these all crackpot concerns? -- Sappysap (talk) 20:21, 16 November 2007 (UTC)
- 3 is regardless of the scientific issues the LHC will tackle.---- droptone (talk) 20:24, 16 November 2007 (UTC)
- When a scientist says "chance is extremely small" - they probably mean 10^26:1 against or something. That's truly negligible. However, there is a long history of these concerns. The guys on the Manhattan Project had a theoretical concern that the amount of neutrons released by the atom bomb might (at extremely small probability) cause a chain reaction throughout the entire atmosphere of the planet. When I first went to work from college, I worked at Philips Research Labs in the UK - they took me on a tour of the building on my first day - and somewhere at the back of the glass blowing shop, there was an enormous concrete slab set into the floor - a couple of feet thick - on a huge toothed track so that it could be winched back to reveal a stairwell going down under the building into a concrete bunker. The building had been around for a very long time - before the first world war - and it seems that it was being used to test some of Ernest Rutherford's ideas about the nature of the atom. They were doing various primitive atom-smashing experiments - and at the time, they literally didn't know what would come out when you smashed an atom - it could be nothing - or it could end the universe. So they thought carefully about it - built the biggest safety barrier they could reasonably afford and went on to do the experiments. The trouble with cutting edge physics is that there really isn't much point in doing experiments where you already know the answer. To specifically address your three concerns, I'd say that (1) might be of some very minor concern - but any black hole formed would be nano-sized - it would fall to the center of the earth and either evaporate due to Hawkins radiation or sit there eating an atom or two per year until the end of time. (2) and (3) are nonsense. The total energy from a thousand protons going at even a sizeable fraction of the speed of light is simply not going to produce enough cosmic rays to boil an egg - let alone destroy the universe! SteveBaker (talk)
- Here's an undergrad physics major's opinion: most of the arguments that "nothing can possibly go wrong" are, as you say, pretty weak. They depend on predictions of certain theories, but the whole point of building the accelerator is to look for new phenomena that disprove the theories! However, one argument is rock solid: the events produced at the LHC will be no different from events already occurring in the atmosphere every day. The only difference is that at the LHC, they will happen in a predetermined place, next to a bunch of big expensive detectors. —Keenan Pepper 00:08, 17 November 2007 (UTC)
- Nitpick: it wasn't the neutrons the Manhattan Project scientists were worried about. It was the temperature. Full report available here. --24.147.86.187 (talk) 00:54, 17 November 2007 (UTC)
- Nitpick squared: Why is it so hard for everyone to spell Hawking? Over at talk:theory of everything, one contributor consistently spelled it "Hawkings", didn't notice that everyone else spelled it differently, didn't bother to look it up. --Trovatore (talk) 03:52, 17 November 2007 (UTC)
I apologize for the previous display of crackpottery. It was a bit tongue in cheek. I'm sure that reactions in the LHC will be perfectly harmless and will vastly help our understanding of the universe. The problem I have is that this thing is gargantuan in scale with a circumference of 16.5 miles and a price tag of $3 billion US...the world has never known such a mechanism. We're going to turn it on and find out what happens in a few months. Sometime in the future an even more colossal experiment will come along where its safety is (similarly) predicated on theories trying to prove its own safety. We might not be so lucky then. I don't want to be reading the newspaper and sipping coffee some morning in May just before some physicist yelps "oops" and I get cut off in midthou Sappysap (talk) 03:04, 17 November 2007 (UTC)
TV satellite downlink antenna shape? Datasheet?
On looking at the various footprints for SES Astra TV satellites in their neato flash map here, it's clear that they've done some clever stuff with footprints (for example to include the Canary Islands in footprints otherwise concentrating on mainland western Europe); the footprints aren't the simple bendy-ellipse you'd get from a simple circular-section parabolic antenna. I guess each transponder (of which modern sats appear to have around 15 to 20) has its own retargetable parabolic antenna (retargetable because it seems pretty common to retask a whole satellite to serve a different market) and they build that footprint out of a bunch of simple antenna prints. But I've not managed to find a proper diagram of an actual modern TV comsat to confirm this or show what can (and can't) be done. Is there anywhere I can get a decent datasheet for a modern TV comsat (all I can find are simplistic "energy makes it go!" type stuff)? -- Finlay McWalter | Talk 20:54, 16 November 2007 (UTC)
- I think they do steerable beams using Phased array antennas. You might be able to find out more by going directly to the manufacturers of the satellites, like Space Systems/Loral and Boeing. ---- Mdwyer (talk) 22:41, 16 November 2007 (UTC)
Horten Go 229 German WW2 all-wing fighter
Are there any DVDs, films, or pictures available of the Horten Go 229 Nazi World War 2 jet?
- Pictures, yes. Horten Ho 229. You can find more through the external links in that article. -- Someguy1221 (talk) 22:10, 16 November 2007 (UTC)
Do Variations in Earth's Gravity Field Cause Variations in Air Pressure?
After reading about Earth's gravity field, and how altitude, latitude, and geological density cause variations in the earth's gravity field (amounting to up to 0.5% between the pole and the equator), I have a question about the effect of gravity on air pressure. If sea levels can vary by as much as 200 meters due to variations in the strength of gravity, wouldn't the weight of the air column also vary with the strength of gravity to some small but measurable degree? There is mention of how buoyancy causes a decrease in apparent weight, but that's something else altogether.
Okay, it just now occurred to me that maybe there's no air pressure effect because air is compressible and water isn't. In other words, it's not just weight of the air column directly overhead, but the weight of air masses perpendicular to that vertical line. I don't know. Please help. DeepSkyFrontier (talk) 23:30, 16 November 2007 (UTC)
- I'm working on finding a proper reference, but I'm pretty sure that statement that topography and geology can produce 200-meter anomalies in sea surface height is wrong - I think it should be more like 2 meters, and that is on a very broad (many kilometers) scale - the bulges would not be noticed on the sea surface. Cheers Geologyguy (talk) 00:29, 17 November 2007 (UTC)
- I agree. I've removed the absurd claim of 200 m, but am also having trouble finding a cited number. Given variations like +/- 50 mGal, the sea level anomaly probably should be closer to a few to several meters. Dragons flight (talk) 18:01, 17 November 2007 (UTC)
- Hm. I'm really not sure that it's absolutely absurd. It sounds unreasonable, and is certainly more than I know how to expect. But, on the other hand, we know that variations amounting to several meters are caused by mild weather systems, and that the earth's rotation causes about 25 miles of bulging all by itself. If we're comparing the difference between polar sea levels and equatorial sea levels, and correcting for oblateness, why shouldn't a difference of ((9.832 at the poles / 9.780 at the equator) - 1 =) 0.5% be able to cause a 0.5% difference in surface level all on its own? If we take only the water volume into account- about 3,800 m deep on average according to our Ocean article, then 0.5% would come out to about 18 m- and proportionally more for deeper areas of the ocean. But why wouldn't the effect extend into the earth's crust and upper mantle, causing a small upheaval wherever gravity is lower? Not talking about oblateness here, but difference in gravity due to oblateness and the small but complementary effect that is not attributable to the earth's rotation. As for the effect on the strength of gravity, I think variation due to oblateness is almost the only factor here. Earth's gravity map shows an almost insignificant variation according to the Gravity Recovery and Climate Experiment- for instance, high in the upper Mid-Atlantic and low in the Indian ocean. Variations of 80-to-100 milligals from highs to lows- which amounts to less than 1/100th of a percent of the strength of gravity- and thus would amount to less than half a meter of variance if only the ocean depth is taken into account. Again, I doubt that it's the only factor. 200 meters may sound high, but 2 meters sounds so low. Whoever wrote 200 meters wasn't doing us any favors by not explaining what it meant. DeepSkyFrontier (talk) 19:23, 17 November 2007 (UTC)
- The question reminded me of the equatorial bulge. --JWSchmidt (talk) 02:52, 17 November 2007 (UTC)
- Um, what about undular bores? They are atmospheric waves (and thus variations in air pressure) shaped by the Earth's gravity. Does that count? They are, however, more related to changes in weather than gravity. They can also be very large. Hope this helps. Thanks. ~AH1(TCU) 15:12, 17 November 2007 (UTC)
- Are undular bores a tidal effect? Something more to ponder. Thanks for bringing it up! DeepSkyFrontier (talk) 19:23, 17 November 2007 (UTC)
- There is no question that there are variations in earth's gravity from place to place - and that air pressure is certainly affected by that...that's just basic physics. Each air molecule is feeling a net gravitational force towards the denser regions of the earth's crust - the molecules therefore move in that direction - the increase in the number of molecules arriving at that location will increase the pressure of the air until such time as the force due to the excess air pressure pushing outwards equals the excess gravity pulling inwards. So the air pressure is clearly going to be higher over areas of denser rocks or in the vicinity of large mountains or something.
- The degree to which that change is noticeable is debatable. Typical gravitational variation is of the order of one ten-thousandth of the force due to gravity. If this reflected a change in air pressure of a similar percentage - it would be quite utterly negligible.
- SteveBaker (talk) 16:53, 17 November 2007 (UTC)
- For my purpose, "noticeable" should be replaced with "measurable and predictable." If the effect exists, then how strong could it be? I'm with you (SteveBaker, you're a hero of the reference desk. Thank you for weighing in). I'm guessing that it isn't measurable over the noise of the weather. Nonetheless, doesn't the weight of air mass lateral to the air molecules directly above a specific point act laterally to increase the pressure? Not instantaneously- especially with large distances involved- but eventually? What I mean to say is that if a low gravity location had lower air pressure, it would still be surrounded by air at normal pressure due to normal gravity- at least some distance away. Thus, being crowded by higher pressure air would, over time, cause air pressure to be higher in the gravitational sink. This would happen only because air is compressible. For gravity variation in the ocean, the same effect would express as a bulge in sea level since water is almost non-compressible. Anyway- speaking of air pressure- my instinct tells me that this would act to wash away almost all of the pressure difference caused by small local variation in the strength of gravity (again, I mean variation due to density difference, not due to oblateness). At the end of the day, "utterly negligible" would be about right :) DeepSkyFrontier (talk) 19:23, 17 November 2007 (UTC)
- Gravity anomalies due to geology or bathymetry are on the order of (typically) 20-80 mGal as Dragons Flight says. These anomalies can be computed -- not by the GRACE experiment cited above, but by using the average sea surface height as measured to centimeter accuracy from satellite radar altimeters. So the kinds of variations in sea-surface height due to geology and bathymetry are going to be on scales of tens of centimeters. There are definitely also very-long-wavelength (tens to hundreds of km) sea-surface height variations, but they are on the order of a few meters vertically (2-4). Here is one link for info about satellite-derived gravity measurements, and here is another. Those measurements were initially used to create predicted bathymetry maps, but as more sea-based bathymetry measurements, and more years of satellite data for better averaging were acquired, these data are now used extensively as first-look interpretation tools for inferring sub-sea geology. Cheers Geologyguy (talk) 21:54, 17 November 2007 (UTC)
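(Illustrative aside: a crude order-of-magnitude scaling consistent with the figures above; this simple proportionality is not how geoid heights are actually computed, just a sanity check using a typical 50 mGal anomaly and the ~3,800 m mean ocean depth mentioned earlier.)
```latex
\frac{\Delta g}{g} \approx \frac{50\ \mathrm{mGal}}{9.8\ \mathrm{m/s^2}} = \frac{5\times10^{-4}\ \mathrm{m/s^2}}{9.8\ \mathrm{m/s^2}} \approx 5\times10^{-5}, \qquad 5\times10^{-5}\times 3800\ \mathrm{m} \approx 0.2\ \mathrm{m}
```
That is, typical local anomalies correspond to sea-surface variations of tens of centimetres, as stated above.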