Talk:Quantum computer/Archive 1
From Wikipedia, the free encyclopedia
Discussion tidy-up
Obviously quantum computing attracts a lot of attention, but with all due respect, this discussion page is a bit of a mess.
It would be much better if people used the separate sections of this Discussion (the table of contents is about half-way down...) to comment on particular bits of the article. There should also be a separate section here for general quantum computing questions, as well as for a lot of the speculation (i.e. things not related to the article's content).
I don't mean to whine, I just think we could be making much better use of the Discussion page to keep up a good article. Also, I would happily do some rearranging and filtering myself, I'm just afraid to hurt anyone's feelings by moving their comments...
-Svet - Tomatoman 20:26, 6 January 2006 (UTC)
Limitation on Possible Computation?
I am ignorant about quantum computers, and I have a question. Are not all results determined statistically? If you want to distinguish among 10^100 possible answers, of which only one is correct, then do you not have to distinguish among 10^100 possibilities statistically, without error? Does this not hint that maybe the uncertainty principle will raise its ugly, but effective, head? In fact, will this not happen long before we get to the point of doing calculations that we could not otherwise do?
---John Sellers Sept. 25, 2005 (see http://www.sellers.com/)
Follow up March 13, 2006
- Although I am ignorant about Quantum computers, I know a thing or two. I think the article is excellent, but my question is a good one. Either the experts have a specific answer to this question, or they do not. In either case, some discussion about it would be informative and useful.
- Most people do not have much of a sense of big numbers. When I spoke of 10^100, this is a number larger than the number of atoms in the Earth, which is roughly 10^50; and yet we can factor numbers larger than 10^100, and so have to discover even larger prime numbers for the purposes of encryption.
- Even relatively “small” sets of permutations get large very fast. If you represented all the permutations of a Rubik's Cube on 3-inch cubes and then stacked them, the pile would be about 5.5 light years tall! However, any intelligent 16-year-old who is willing to spend thirty or forty hours on it can solve the Rubik's Cube without quantum computing (if our brains don't somehow use quantum effects).
- On other fronts, entropy says that we can't build a perpetual motion machine, because entropy always increases. If such a machine had a ratchet to make it turn in a particular direction due to the shaking of Brownian motion, the probability of the ratchet spontaneously jumping in just such a way as to make it go backward is exactly right to assure that, on average, our machine will not turn forward or backward by itself, no matter how we design it.
- It is not unreasonable to ask, if we make a machine which will pick out our answer, whether there is some mechanism preventing us from doing so. After all, we have to sort through more choices than there are atoms in the Earth and, without making a mistake, pick the exact right one, especially since we get our answer by "throwing the dice".
- Please remember that entropy is purely a mechanism of statistics and owes its inevitable increase to the extreme improbability of its decreasing. In addition, we know of limits on certainty derived directly from the probabilistic nature of measurement: for example, if we know the speed of a particle accurately, we cannot know its position accurately. Isn't this suggestive that, in our search for information from a quantum computer, measurement may have some fundamental limitations as to what we can learn? Has anyone actually solved any problem with quantum computers that we can't solve otherwise? The fact is, quantum computers are completely unproven for actually doing non-trivial problems.
- Again, the answers from quantum computing are also purely a mechanism of statistics, so again it is not unreasonable to ask whether the same constraints, namely those of entropy, might not apply and limit what we can possibly do with quantum computers.
- I am not saying quantum computing won't work, but I have the sense that there is something similar between getting multiple probabilistic answers from a machine while expecting them to converge to an entropy-free answer, and the repeated probabilistic ratcheting of a perpetual motion machine while expecting it to go forward.
---John Sellers March 13, 2006 - wee hours
---John Sellers March 13, 2006 - afternoon - Yesterday, when I wrote my follow-up, I did not have an example of what kind of mechanism might stand in the way of our using quantum computers to solve otherwise unsolvable problems. After sleeping on it, I woke up with some interesting reasoning about how this might be.
Please understand my comments in the proper context. I hope that quantum computers do work out, because of the problems they would be able to solve for us. However, I worry about the unregulated and unquestioning enthusiasm I see for the idea. Only a sense of proportion tied closely to actual reality is going to take us in the right direction. When ideas are carried out without proper balance, they tend either to burn out when things don't work out as expected, or never to get off the ground because they do not gain the attention they are due. If balance is achieved, the overall outcome is much more likely to be better for everyone on average, with less risk to each. I do not see this balance in considering quantum computing, else most comments would address the cons as well as the pros.

In this day and age, everyone is doing the hype. Walk into any supermarket and most items on the shelves try to look bigger than they really are, and that is not good for the best result all around. What I have seen on several occasions here in Silicon Valley is grandiose and expensive efforts involving many people which turned out to be "putting roller skates on a dead horse", instead of the more thoughtful support of many small, competent groups covering diverse efforts in many directions over long periods of time, taking on the roadblocks one at a time as they are discovered and knocking them down in a relatively inexpensive and effective manner. Fewer people would get extremely rich this way, but a greater number would make a reasonable and comfortable middle-class living. Any competent investor knows the importance of diversifying. I'm sometimes puzzled why there are so many razzle-dazzle schemes when they are so non-optimal and risky. Of course we need bigger companies as well, for efforts that small operations can't do, but even there the better approach would be diversified effort among smaller groups rather than the typical larger and riskier efforts, which can completely kill a project if they fail. Large projects are too expensive to try multiple times, but small projects can afford to try until they get it right.
So on with my argument and questioning. The promise of quantum computers is tied to examining huge numbers of cases in order to find an answer. But as soon as you deal with large numbers, the conditions for success require compounding a huge number of possibilities without any artifact whatsoever popping up and systematically masking the correct solution. The question you have to ask is: how likely is such an artifact? Let's examine a few cases and see what is indicated.
For the first experiment, let us assume that we have monkeys at typewriters. For simplicity, let's assume that the typewriters have 26 keys and a space bar, but no capitals or punctuation, and that each key is equally likely to be typed. How much of the Gettysburg Address can we expect to be typed once, on average, for every 10^100 tries? The answer is the number of characters for which there are 10^100 possible strings, each equally likely and of which only one is correct; then on average we would expect that much of the address to be typed once every 10^100 tries. This turns out to be 100 x log(10)/log(27), or between 69 and 70 characters, and would look like this:
“four score and seven years ago our fathers brought forth on this cont”...
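For concreteness, here is a quick check of that arithmetic in Python (a sketch; the 27-symbol alphabet and the 10^100 target are the ones described above):

```python
# With a 27-symbol alphabet, k characters give 27^k equally likely strings,
# so 27^k = 10^100 when k = 100*log(10)/log(27).
import math

k = 100 * math.log(10) / math.log(27)
print(k)  # ~69.87, i.e. between 69 and 70 characters

prefix = "four score and seven years ago our fathers brought forth on this cont"
print(len(prefix))  # 69
```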
What this tells us is that any artifact whatsoever which has better than one chance in 10^100 of occurring is almost certain to show up somewhere in the course of 10^100 trials. Now suppose that the artifact has only one chance in 10^88 of occurring. That is still very unlikely; in this case our monkeys would, on average, type out a fraction of a character more than the first 61 characters of the Gettysburg Address. However, such an artifact would occur on average a trillion times in the course of our quantum computer investigating our 10^100 cases. When we do our statistics, any happenstance event that can occur may very well dominate the result, just in the same way that nature hates a vacuum and will make sure that any free molecules floating around fill that space.
If a single quantum computation plus checking the result took an hour, we might have to wait a trillion hours to verify our result. Presumably, results from one trial are at least somewhat resistant to helping us with the next trial, because if this were not so, we could use conventional means without quantum computers to improve on the problem incrementally, as with the computer memory industry, where early attempts with the IBM PC had us installing an 8000-byte board and today we can easily install a 1-gigabyte board.
It gets worse than this. If there is likely to be one such artifact, there may well be a significant number of others. We can't investigate this situation easily, but we can look around and see how the world behaves in other situations of probability. I would like to present three examples along these lines: one divorced from quantum effects, one which is closer to quantum situations, and one in which quantum effects strongly play a role. Let us see how they behave.
In New Mexico there is Carlsbad Caverns. This unusual cave is a virtual fairyland of interesting cave structures. In one place you will find a room full of cave pearls, formed by the happenstance dripping of water which slowly rotates the solidifying material over the centuries to form hundreds of perfectly round rocks. In another room, with slightly different conditions, you will find wonderfully delicate and intricate lace-like structures that formed over the ages. In yet another room you will find large, almost smooth structures big as trees reaching to the ceiling. In other places you find structures hanging from the ceiling that look like icicles. Here is the point: these structures are all relatively close to each other in the cave. You can go right from one room to the next to see all this wonderful variety, and yet it all results from seemingly random and arbitrary differences in very stable conditions. In other words, it seems the requirement for each of these rooms is that the conditions are stable and ordered enough, and that the temperature, the amount of water, the way it is applied, and the material dissolved in the water are each within a certain range. The surprise is that depending on the exact values within this range, one of a surprisingly large variety of characteristic results will occur as deposits accumulate over time. So if we were to set up a new room for a few hundred years with stable conditions within the critical ranges, but different from all the other rooms, we would have a good chance of some different, interesting result.
When we consider this in light of the many other things we know about our world, we can reasonably hypothesize that the more ordered a situation, the more likely some seemingly arbitrary and unknown systematic effect will occur, that effect being more likely than any other effect in its favored set of circumstances. It would be extremely difficult to predict the particular effect that would occur, such as the cave pearls, without actually performing the experiment over the extremely long period of time it takes the pearls to form. The point is that in quantum computing we are seeking to find the absolute order of a particular situation, by looking more and more closely at the perfectly ordered solution, and somehow having to avoid the possibility that some other, unwanted artifact of ordering is more likely to occur than the result we desire. And just as we have trouble with too much order, we will also have trouble with not enough; eddy currents in superconductors are an example of this.
This kind of ordering does not occur just at Carlsbad Caverns. A simple experiment with an old-fashioned oscilloscope will produce the same sort of amazing variety of artifacts. Take an oscilloscope and sum several pure sine waves of different frequencies, amplitudes, and phases on the x axis, and take another, independent set as input to the y axis. Next put a spike generator on the z input, which blanks out the trace on the screen except at the instant of the spike. Make the frequency of this z input a little lower than the input frequencies, so that we see an arbitrarily distributed sampling of the inputs rather than a string of closely spaced points. This results in a large number of points moving around the screen in a beautifully symmetrical and intricate pattern, with each point most often seeming to move independently of all the other points. These are the next order of complexity above Lissajous figures and have similar characteristics: if you incrementally change any component of any harmonic input in just about any manner, you will get a new and profoundly complicated dance of points on the screen that seems completely different from the one before. The possible variety of these patterns is for all practical purposes infinite. Each setting produces a maximally rich and different dynamic ordering of the movement of the points; in fact, by coordinating the settings we could achieve any analytical pattern we desire. It is interesting and significant that just about every setting produces a nearly unique dance that is nonetheless arbitrarily similar to a near-infinite number of other dances. Some are similar to others, but most differ in some large way, and they jump quickly from one pattern to a completely different one depending on how close the relative settings are to any of the rational harmonic ratios, which densely cover the real number line (any real value can be approximated to any degree of precision by a fractional ratio). This dance is in fact somewhat related to the dance of electrons in an atom, and its variety is governed by similar rules, except that we have explicit control over an arbitrary number of independent input frequencies, amplitudes, and phases.
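A rough numerical sketch of this oscilloscope experiment, for anyone who wants to play with it (the particular frequencies, amplitudes, and phases below are illustrative stand-ins, not values from the description above):

```python
import math

def harmonic_sum(t, components):
    """Sum of sine waves given as (frequency, amplitude, phase) triples."""
    return sum(a * math.sin(2 * math.pi * f * t + p) for f, a, p in components)

x_inputs = [(50.0, 1.0, 0.0), (83.0, 0.6, 1.1)]  # Hz, amplitude, radians
y_inputs = [(61.0, 1.0, 0.4), (97.0, 0.5, 2.3)]
spike_rate = 47.0  # z input slightly slower than the x/y frequencies

# Each z spike samples one (x, y) point from the two independent sums.
points = [(harmonic_sum(n / spike_rate, x_inputs),
           harmonic_sum(n / spike_rate, y_inputs)) for n in range(1000)]
print(points[:3])
# Plotting `points` shows the dance described above; nudging any frequency,
# amplitude, or phase slightly reorganizes the whole pattern.
```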
This reinforces what we learned from the previous experiment: that ordering under arbitrary stable dynamic conditions is likely to produce systematic artifacts. I believe that we will discover the same is true for quantum computers as we carry out calculations. However, it may be that these artifacts can be estimated by quantum calculations developed for the purpose.
For my last example, I would like to talk about optical gyroscopes. It has been some years since I read about them, so my information may not be up to date; however, they have been the most accurate way to measure rotation. This is done by sending light in opposite directions through a fiber-optic winding and then, when the two light paths return, putting them through an interferometer. The light is in phase or not, and as the gyroscope rotates, one path is longer and the other shorter because of the transit time for the light to go through the windings. The amount of phase shift is continuously measured, and the amount of rotation over time can then be calculated. The interesting part for us is that there are a significant number of artifacts that reduce the accuracy of this method. For example, the Zeeman effect of a magnetic field on the light causes it to spread over a range of frequencies and thus affects the accuracy of measuring the frequency shift. If I remember correctly, there are so many artifacts that the reduction in precision of the devices built at that time was about 14 orders of magnitude. It also turns out that these losses of precision are mostly, if not all, quantum in nature. The methods used to predict precision or to minimize the uncertainty in these devices are calculations associated with the uncertainty principles of quantum physics.
Again, what we learn is that real-world applications reaching down to the level of quantum effects are often limited by quantum effects and uncertainty.
There may be hope, however, taking the optical gyroscope as an example. Even with the loss of 14 orders of magnitude of precision, these gyroscopes were still the most accurate method of measuring rotation known to us.
There is another "unexpected" outcome from this note. After writing it, I notice that maybe an "unexpected" result might not be such a bad thing. When we discover new kinds of ordering, this leads us to new kinds of discovery. Maybe we cannot use quantum computers for breaking security codes, but maybe we can use their unpredictable and highly ordered results as a tool for finding new and useful ideas. ---John Sellers March 13, 2006
The statement "There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false." seems to imply that quantum computers are not Non-deterministic Turing Machines. If this is true, please state so.
-- Christoph Reichert, 9 August 2005
- Quantum computers are not Non-deterministic Turing Machines. :) NTMs can just assume they already have the answer and just verify it. A quantum computer has more work to do than that; its first guess may not be correct, among other differences. --David Battle 21:50, 9 August 2005 (UTC)
I have added yet more info on the size estimate of the state machine needed to simulate a N qubit register (with an example given for a 640 qubit register) at the bottom of the page. --David Battle 02:00, 8 August 2005 (UTC)
"For problems with all four properties, it will take an average of n/2 guesses to find the answer using a classical computer. The time for a quantum computer to solve this will be proportional to the square root of n".
Shouldn't that be log(n)? If not, a little explanation on this rather unexpected outcome would be appreciated...
Apart from that it's a great piece of work! --MK
This quote in the article is nonsense:
"Integer factorization is believed to be practically impossible with an ordinary computer for large numbers (eg. 10^600)."
While that's a really big number, a high school freshman can factor that big number in his head! 2^600 x 5^600. Gee, that was tough!
--Seinfeld's_Cat
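For the record, the arithmetic is trivially machine-checkable:

```python
assert 10**600 == 2**600 * 5**600  # holds exactly, since 10 = 2 * 5
```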
- Thanks for spotting this. I changed the article accordingly. Of course, you could also have changed the article yourself. -- Jitse Niesen 15:06, 30 Jan 2005 (UTC)
- No problem. It sometimes seems easier to complain than to just fix it. I like your change (I couldn't come up with a suitable fix; you did it quite well.) I make a better critic than author. -- Seinfeld's_Cat 1 Feb 2005
The talk page is as interesting as the main article. Amazing! --ReallyNiceGuy
Great article!
Maybe we could add in the first paragraph that quantum computers are by their very nature probabilistic devices and only probabilistic algorithms can be implemented on them. Also, quantum computers can be simulated by Turing machines and are therefore no attack on the Church-Turing thesis. --AxelBoldt
I've now added both. They're in the complexity theory section, where they seemed to fit the flow best. -LC
Yes.
In the complexity section, it says that BQP is the class of problems that can be solved by quantum computers. These however are only the problems that can be solved by quantum computers in polynomial time. Maybe you could say "the problems that can be solved by quantum computers in reasonable time" or "that can be realistically solved by quantum computers".
Then it says that quantum computers probably cannot solve all NP-complete problems. There are two problems with this statement: strictly speaking, a quantum computer only works probabilistically and cannot "solve" any NP-complete problem (or any other decision problem for that matter) in the same sense a Turing machine solves them, with a deterministic and correct Yes/No answer. Furthermore, if we allow probabilistic solutions, then quantum computers can of course solve all NP-complete problems, just like any Turing machine can; it may just take a lot of time to do so... --AxelBoldt
It should be clearer now. -LC
What is this #P-Complete ? --Taw
See #P-Complete. -LC
How long does it take to factor an n-digit number with n qubits? --Axel
Would Quantum Computing break Elliptic Curve Cryptography? Taw
- Apparently so, through a variation on Shor's algorithm. I haven't studied it, though -- CYD
Some recent work [1] indicates that if spaced sufficiently closely, quantum entanglement between quantum dots may be possible, so it's possible that in the future a quantum computer could be implemented using quantum dots.
Also, it might be useful to mention "reversibility", the Hadamard transformation and the various types of quantum logic gates (CNOT, etc.). I'm not an expert, so I'll defer to someone who knows about this stuff -- Matt Stoker
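As a concrete illustration of the gates just named, here is a minimal NumPy sketch (my own rendering, assuming the standard matrix forms of these gates; it also checks the "reversibility" point, since a unitary gate is always invertible):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # Hadamard gate on one qubit

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])       # controlled-NOT on two qubits

# Reversibility: each gate is unitary, so U times its conjugate transpose
# is the identity, and every operation can be undone.
assert np.allclose(H @ H.conj().T, np.eye(2))
assert np.allclose(CNOT @ CNOT.conj().T, np.eye(4))
```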
What does it mean to have a "quantum computer with nonlinear operators"? --Robert Merkel
I hope that's clearer now. --LC
How the heck do you implement a nonlinear operation on qubits? Evolution always proceeds by unitary - therefore linear - operations. -- CYD
I agree, that seems to be a consequence of Schrodinger's equation, which is pretty basic to QM. --AxelBoldt
I agree. The papers proving the result never said it could be done. But it's an interesting result. It hasn't been ruled out that a large linear system could act like a small nonlinear system, and give the desired result. The Shi paper refers to the nonlinearity of the Gross-Pitaevskii equations, but I'm not familiar with them. I would assume the Shi paper is flawed, since it wasn't accepted anywhere, but I'm not aware of any proofs that this sort of thing is inherently impossible. --LC
Oh. I just checked the site, and the Shi paper has now been accepted in a journal. It hadn't yet been accepted when I first wrote the article. I'll remove the "not peer reviewed" note. Is anyone here familiar with the "International Journal of Modern Physics"? Is it generally respectable? --LC
- The quantum computer in the above example could be thought of as a black box containing 8 complex numbers. Or, it could be thought of as 8 black boxes, each containing 1 complex number, each sitting in a different alternate universe, and all communicating with each other. These two interpretations correspond to the Copenhagen interpretation and many worlds interpretation, respectively, of quantum mechanics. The choice of interpretation has no effect on the math, or on the behavior of the quantum computer. Either way, it's an 8-element vector that is modified by matrix multiplication
I don't think this characterization of many worlds is correct. The different universes don't come with complex numbers attached. Instead, the more likely states are exhibited in more universes. The goal of the matrix manipulations is to bring essentially all universes into the same state. Once you measure the state, the universes stop communicating and truly split. --AxelBoldt
Good point. I've reworded it. --LC
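To make the "8 complex numbers modified by matrix multiplication" picture concrete, here is a minimal NumPy sketch (the Hadamard-on-one-qubit step is my own illustrative choice, not part of the discussion above):

```python
import numpy as np

state = np.zeros(8, dtype=complex)
state[0] = 1.0                    # start in |000>: all amplitude on index 0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
U = np.kron(H, np.kron(I2, I2))   # Hadamard on the first qubit, as an 8x8 unitary

state = U @ state                 # one step of computation: a matrix product
print(state)  # amplitude 1/sqrt(2) on |000> and |100> (indices 0 and 4)
```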
In the "How they work" section it says:
"The square of (a+ib) is (a^2+b^2)."
Actually the square of (a+ib) would be (a^2-b^2) + i(2ab).
Probably you mean the norm or the modulus or something like that? (It's been a while since I used complex numbers; I'm not sure of the name any more.)
In quantum mechanics, (a+bi) is called the amplitude, and (a^2+b^2) is called the squared amplitude, even though the latter equals |a+bi|^2 rather than the (a+bi)^2 that the name might suggest. I suppose the terminology is counterintuitive. I'll reword the article to make it clearer. --LC
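In code, for anyone checking that terminology (the values here are illustrative):

```python
z = 0.6 + 0.8j      # amplitude a+bi with a=0.6, b=0.8
print(abs(z)**2)    # 1.0, the squared amplitude a^2 + b^2 = |a+bi|^2
print(z**2)         # (-0.28+0.96j), the literal square, a different thing
```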
I've removed the following contribution from User:Harry Potter:
- Quantum computing can also theoretically produce time anomalies. The ability of the Quantum computer to find the correct answer in a factorisation problem can be seen as running all the possibilities simultaneously. However, only the correct answer produces a positive response which is then sent back through time to become the only calculation which in fact needs to be made.
Quantum computation doesn't involve time travel in any description I've seen. -- Oliver P. 00:01 9 Jun 2003 (UTC)
- Agreed. This sounds purely abstract and theoretical to me, but I am no expert on quantum computing by any means. Perhaps Harry Potter can produce a source in the quantum computing literature that lends some support to this idea? -- Wapcaplet 00:10 9 Jun 2003 (UTC)
Please specify "... have recently been built" into "... have been built in the early/mid-/late 20th century/21st century". --Menchi 08:37 14 Jul 2003 (UTC)
I found the first paragraph of the second section, titled "The Power of Quantum Computers", rather unclear. Unfortunately, I cannot tell exactly what makes it unclear - maybe it is too vague; how difficult is factorization (suspected to be) on a classical computer, and what is the speed-up on a quantum computer? I do not know enough about quantum computing to rewrite the paragraph myself ...
I liked the rest of the article very much, it contains (almost) everything that I was looking for. However, I am still wondering whether there is an upper bound on the power of quantum computers. The article says that quantum computers cannot solve undecidable problems, but is BQP known/suspected to be a subset of another complexity class, say NP? -- Jitse Niesen 11:08, 26 Aug 2003 (UTC)
They CAN solve Turing-unsolvable problems, but...
This is a technicality that should be mentioned: If you define a quantum computer as a Turing Machine with arbitrary complex amplitudes, then the class of machines that you obtain is uncountably infinite, and can easily be shown [ADH97] to contain machines that solve all kinds of Turing-unsolvable problems. But *we* can't build those QC's (or even know one if we see one) because of our inability to know the correct amplitudes required for those tasks. The computability comments in the present version of this article are valid for QC's whose amplitudes are computable numbers.
[ADH97] L. Adleman, J. DeMarrais, and M. Huang. Quantum computability, SIAM Journal on Computing 26:1524-1540, 1997.
Key exchange versus encryption
You say:
"This ability would allow a quantum computer to break many of the cryptographic systems in use today. In particular, most of the popular public key ciphers could be quickly broken, including forms of RSA, ElGamal and Diffie-Hellman."
My understanding is that RSA is the encryption method and that ElGamal and Diffie-Hellman are key exchange protocols. The references within Wikipedia also say that. Do you mean that the encrypted key being transmitted would be able to be read? BK
Further editing
Some of the section which was there before (and is now called Initialization and termination), though written at a good level, I believe contains some inaccuracies, i.e. on how the qubit registers are measured. In fact, only a fixed register is sampled on termination. CSTAR 04:27, 6 Jun 2004 (UTC)
Oops, that's not what I should have said. Anyway, I think it's right now. I also split the article up, adding the stuff on reversible registers to quantum circuits. CSTAR 07:01, 6 Jun 2004 (UTC)
Where did the numbers come from in columns #2 and #3 in the "bits vs qubits" section?
Why is the 3-qubit "000" represented as "0.37 + i 0.04"? Where did the 0.37 and 0.04 come from? Ditto for the rest of the numbers in that table. According to www.qubit.org, a 3-qubit system can represent 8 values. There is no magic in 16 analog values vs 8 complex values, there are just 8 values total. None of the references I have come across mention anything other than an L qubit system representing exactly 2^L possibilities at once, not 2^(L+1) (the 16 you mentioned in this section).
In an effort to give the reader some sense of how probability waves can be represented by complex numbers and how complex numbers can be manipulated, you've obfuscated the simpler facts of qubits. 1 qubit holds 2 states simultaneously with *equal probability*, 2 qubits hold 4 states simultaneously with *equal probability*, and so on.
- I didn't write this example, but as an example, I think it's mostly OK (but you're right about the dimension being wrong: see below; when I read it I wasn't thinking). What the example is trying to express is the fact that in a quantum computer the set of registers is a quantum mechanical object, and the state is described by a wave function on it. As far as the number of independent values, the example is meant to suggest the real dimension of the space of states, but you may be right that the dimension counting is wrong, because the relevant space is complex projective space of (complex) dimension 2^n − 1. Thanks. I'll fix it. The set of pure states is homeomorphic to the compact symmetric space U(m)/(U(1) × U(m−1)) with m = 2^n.
- As far as your statement:
- 1 qubit holds 2 states simultaneously with *equal probability*, 2 qubits hold 4 states simultaneously with *equal probability* and so on.
- I don't believe this statement is correct.
CSTAR 03:10, 16 Jun 2004 (UTC)
P.S. From this log, it wasn't clear who wrote the question and who wrote the response, so I went back and edited the past, by separating my answer from the anonymous question. I'm perfectly happy to edit the past. More of us should try it. CSTAR 04:48, 16 Jun 2004 (UTC)
- I think the table of numbers in the bits vs. qubits section obfuscates the concept of superposition. It might be better to start with the simpler 2-qubit case and say something like: "with two qubits a quantum computer can be in a state which is a mixture of all the classically allowed states. The particular state is determined by four complex numbers. In quantum mechanics notation we would write:
|ψ⟩ = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩
- where α, β, γ, and δ are complex numbers." Bra-ket notation may also make things harder, but it is at least important to give the general version before the specific example with numbers. Bjohnson00 20:39, 29 July 2005 (UTC)
- Perhaps you're right. As mentioned earlier, this example has gone through various stages of editing. If you can change it without doing too much violence to the article, go ahead. --CSTAR
Other confusing bits
(problem I noted has been fixed -- thanks, everyone. -- DavidCary 01:43, 4 Jul 2004 (UTC))
- Yes, it's rubbish; I rewrote the section. Unfortunately the bit about fault-tolerant scaling in Shor's algorithm is a bit hazy, as I'm just quoting vague unpublished numbers floating around my research group. -- Tim Starling 07:35, Jun 18, 2004 (UTC)
- Good rewrite! Regarding the range for T2, I checked the Nielsen & Chuang book (p. 337) and they claim 7 seconds and 0.3 seconds for proton and carbon in a two-qubit experiment. Should we change "hundreds of microseconds" to "seconds"? Or is there some caveat about the Nielsen & Chuang data that I'm not aware of? Andris 18:37, Jun 18, 2004 (UTC)
- Alright. It would be nice to explain at some stage why systems such as chloroform nuclear spins are not scalable, but I guess "nanoseconds to seconds" will do for now. -- Tim Starling 01:47, Jun 22, 2004 (UTC)
The article currently mentions the transverse relaxation time T2 (terminology used in NMR and MRI technology)
Is there really any difference between NMR quantum computers and MRI quantum computers ? I suspect they're really the same thing. -- DavidCary 01:43, 4 Jul 2004 (UTC)
- I guess my comments weren't clear -- I was referring to MRI and NMR technology, not computers (MRI technology, the kind used in hospitals), which are essentially the same technology (I think at least they are, since they both involve nuclear spin) but seem to have separate wiki articles. Is my making these two references too confusing? Please change it if you can make it clearer. CSTAR 01:51, 4 Jul 2004 (UTC)
Interpretation
- Now that we're at it, I think we should remove references to interpretation (Copenhagen, many worlds, etc.). Regardless of content, they shouldn't be in this article. I haven't made an effort to understand whether what's written in the article makes any sense, so I won't pass judgment on that account. CSTAR 13:19, 18 Jun 2004 (UTC)
Edits of 08/01/2004
Some of the edits by an anonymous user did increase readability, but I think calling entanglement or superposition a process is misleading. Also, qubits don't hold data -- they are measures of the size of data. The register space is the actual container of data. I tried to keep the spirit of the recent edits while modifying what seemed misleading. CSTAR 02:49, 2 Aug 2004 (UTC)
---
So, how the hell are you supposed to make software for these? I assume a normal "if something is something do this, else do this" doesn't work?
- Conditionals in the usual sense, no. CSTAR 20:08, 16 Aug 2004 (UTC)
Decoherence and Complexity
This article is really great!
Nevertheless, there is one point I found unclear: the relationship between decoherence and complexity.
The article states that some algorithms are in BQP. Since complexity classes are only meaningful for arbitrarily large calculations, does the definition of the BQP class include the fact that decoherence may limit the number of operations that can be performed, or does it assume we have a way to prevent the effects of decoherence for arbitrarily long calculations (which doesn't seem obvious)?
For example, an old (1995) article by P. Shor and colleagues, "Quantum Computers, Factoring and Decoherence", calculated a limit on the size of the integers that may be factored faster by a quantum computer than by a classical computer, which to me would contradict the fact that factoring is in BQP.
Some clarification of this point would be great!
Thanks!
- That's a good question, but I'm inclined to say that there is no relationship other than that which is built into the model itself. To explain BQP rigorously, you have to formulate a more precise model of quantum computation than is done in this article. The right model is that of the quantum circuit, although BQP is not defined there either (it should be; at some point I'll get around to it). Otherwise look at some of the references, such as Kitaev or Nielsen & Chuang. The book by Hirvensalo is also good. I know I have evaded your question, but that's the best I can do. Maybe some other wikipedian has a better answer. CSTAR 13:57, 1 Oct 2004 (UTC)
- The definition of BQP assumes that computation is error-free (and decoherence-free), which is not the case in practice. The limit in Shor's 1995 article is for quantum computers which are subject to decoherence. Thus, there is no contradiction: the 1995 paper essentially says that if there were no decoherence, quantum computers could factor large numbers, but in the presence of decoherence, they could not. This is the (pessimistic) view of quantum computation that was common in the mid-1990s. This view is mostly obsolete now. Quantum error correction has been invented (in later work of Shor and colleagues in 1996 and 1997) and it has been shown that arbitrarily large computations can be made fault-tolerant (with "fault" including "decoherence"), as long as the fault rate is sufficiently small. Andris 00:10, Dec 11, 2004 (UTC)
Cost of quantum computations
The big selling point of quantum computation seems to be that quantum computers with enough qubits can run algorithms much faster than classical computers. That's great, but it's not very useful if, for example, the cost of implementing an n-qubit register is exponential in n. Is there some estimate for the asymptotics of this? Is the apparent advantage of quantum computers due only to the fact that we're measuring computational complexity by the wrong yardstick? (I have no idea, but someone surely knows...) --Andrew 08:48, Dec 7, 2004 (UTC)
NP-Complete stuff
"BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false."
A teacher claimed that someone has found a quantum algorithm to solve NP-complete problems in polynomial time. Asked for a size estimate, he said that the algorithm description should approximately fit on a page. He's supposed to be teaching everyone the definition of NP-complete problems next week, so he should know what he's talking about. He seemed rather sure. I'd like to see that algorithm... (Haven't had time to search the internet yet.) It would sure make quantum computers (even) more interesting... Κσυπ Cyp 17:27, 9 Dec 2004 (UTC)
- Regarding the previous quote: Are integer factorization and discrete log (suspected to be) outside BPP? I think this is more interesting than whether they are outside P. -- Jitse Niesen 19:05, 9 Dec 2004 (UTC)
- Yes, they are thought to be outside BPP. Andris 04:32, Dec 10, 2004 (UTC)
- Thanks, I updated the article. -- Jitse Niesen 13:42, 10 Dec 2004 (UTC)
Clustered quantum computer?
Hello,
I don't understand a lot about the technical side of quantum computers. I would like to know if it is possible (at least in theory) to distribute quantum computing. Is it possible to dream about a “quantum grid”, a “quantum Beowulf”, a “quantum@home”?
Thank you
-- David Latapie 07:31, 25 Jan 2005 (UTC)
Can quantum computers solve undecidable problems?
This article says that quantum computers cannot solve any problems that classical computers cannot solve either. However, after a recent edit, the start of Matiyasevich's theorem reads:
- "Matiyasevich's theorem, proven in 1970 by Yuri Matiyasevich, implies that Hilbert's tenth problem is unsolvable (although it has recently been proposed by Tien Kieu [Int.J.Theor.Phys. 42 (2003) 1461-1478] that this result only applies to a deterministic Turing machine, and that the problem could be solved on a quantum computer using a quantum adiabatic algorithm. This is still contraversial.)."
How to reconcile both quotes? Does anybody know how controversial Tien Kieu's work is? -- Jitse Niesen 13:36, 10 Feb 2005 (UTC)
- Probabilistic Turing machines and non-deterministic Turing machines don't decide more problems than plain old Turing machines. Neither does the quantum circuit model (and its equivalents). Unfortunately, I don't know enough about what a quantum adiabatic algorithm is, but it possibly doesn't fall under the quantum circuit model of quantum computation. I suppose I should look at the paper. CSTAR 14:41, 10 Feb 2005 (UTC)
- I just looked at one of his papers. It is highly suspect, making claims about probabilistic computation which under any normal interpretation are false: "Probabilistic computation may be more powerful than Turing Computation provided a suitable probability measure can be defined." It is well known (see Sipser's textbook on computation) that probabilistic Turing machines compute the same functions as normal ones.
- I would take these claims with lots of salt (and maybe that sentence in Matiyasevich's theorem should be removed). CSTAR 18:15, 10 Feb 2005 (UTC)
Thank you; I came to the same conclusion. -- Jitse Niesen 21:47, 10 Feb 2005 (UTC)
Factorization
"For problems with all four properties, it will take an average of n/2 guesses to find the answer using a classical computer. The time for a quantum computer to solve this will be proportional to the square root of n...But it is also easy to defend against. Just double the size of the key for the cipher."
This doesn't sound right to me. If your key length is 2n, a classical computer would take n guesses. A quantum computer would then take time proportional to sqrt(2n), which is asymptotically the same as sqrt(n).
I didn't edit the page because I'm not an expert on quantum computing, and feel it's possible that I don't understand (if, for example, you can't do asymptotic analysis on quantum computing algorithms?). As best as I can figure, though, I would expect you'd need to square the key length?
Any problem with these four properties will take an average of (n+1)/2 guesses to find the answer: fix the guesses you are going to try. Each guess then has probability 1/n of being correct. The expected number of guesses is then (1/n)(1 + 2 + ... + n) = (n+1)/2.
- n here is the size of the search space, which for a brute force attack on a symmetric cipher is equal to 2^L where L is the key length. So the classical algorithm takes O(2^L) and the quantum algorithm takes O(2^(L/2)). Note that this applies to Grover's algorithm, which is not the usual algorithm used for factorisation. It's a slower algorithm for a more general class of problems. -- Tim Starling 19:22, Jun 12, 2005 (UTC)
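A small illustration of the scaling Tim describes (the key lengths below are illustrative):

```python
# Doubling the key length squares the classical work 2^L, while Grover's
# algorithm still needs on the order of 2^(L/2) evaluations; so doubling L
# restores against Grover the margin a key of length L had classically.
for L in (64, 128, 256):
    print(f"L={L}: classical ~2^{L}, Grover ~2^{L // 2}")
```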
Spin Parity Component: Quantum Computers' "transistor" is here
Just recently a new device was described for quantum computers. It has a quantum dot with two electrons within, and an adjacent empty dot; the interaction is only allowed when the electrons have opposing spins, and this enables quantum jumping of the electrons in a stable and readable way. Then measurements will not cause decoherence, and we have in effect a quantum component that brings us out of the vacuum tube era of quantum computing.
Daniel Hazelton Waters
Update: As March was beginning I looked into the quantum computing timeline, and there seemed to be more innovations in the first two months of this year than there were in the last two!!!
Unfortunately I believe the term quantum superstate only relates to the state at the time of interpretation. The actual state is derived from the superspin abilities of the given molecule. The principal state of the molecule is (a) stopped, hence 1 bit or the factor 0. The second state is spinning on its (b) axis, the third state is spinning from (c) pole to pole, and the fourth is spinning on (b) + (c) = (e); therefore your qubit problem is solved.
Fivetide
Can quantum computers ever be practical for factoring?
Can quantum computers ever be practical for factoring? I seriously doubt it. Consider the following:
As you may have heard, the "dimension" of an n qubit register is 2^(n+1) - 2, meaning that it has the same information content as 2^(n+1)-2 conventional bits.
To factor integers that are large enough to be hard on conventional computers right now you would need about 640 "qubits". For example there is an uncollected $20,000 reward for factoring RSA-640 which is a number with 640 bits. On the other hand, RSA-576 has been factored and the $10,000 reward collected.
There is something called the "holographic principle" in quantum physics which states that the maximum information content of a spherical volume of space is achieved only by a black hole of that size. The information content of a black hole in bits is believed to be the surface area of the sphere measured in Planck length (l_p) units divided by 4. Yes, that is area, not volume. That's why it's called the "holographic" principle. The Planck length is tiny, about 1.6x10^-35 meters.
So I decided to calculate the maximum information content of a "reasonable" sized quantum computer. I generously estimate that such a computer might have the same volume as a sphere with a radius of 100 meters. That's a pretty big computer. Could such a computer hold enough quantum information for 640 qubits?
The area of a sphere with radius 100 meters is 4*pi*(100 meters)^2. In Planck units this is 4*pi*(6.25e36 l_p)^2, or about 4.9e74 l_p^2. So the maximum information content of a sphere of radius 100 meters is about 4.9e74 l_p^2 / 4, or 1.2e74 bits. Another way to say this is about 2^246 bits. But this is nowhere near the information content of a 640 qubit register, which is 2^641 - 2 bits. Each additional qubit doubles the information content, so the required information is exponential in the number of qubits, while the maximum information content of space only increases as the square of the radius.
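That arithmetic is easy to check (a sketch; the Planck length value and the physics claims are as stated above):

```python
import math

l_p = 1.6e-35                      # Planck length in metres, as quoted above
r = 100.0                          # sphere radius in metres
area = 4 * math.pi * (r / l_p)**2  # surface area in units of l_p^2
bits = area / 4                    # holographic bound, as described above
print(bits)                        # ~1.2e74
print(math.log2(bits))             # ~246, i.e. about 2^246 bits
```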
Conclusion: using high school math, it is easy to show that Shor's algorithm will never be used to factor integers that can't already be factored on conventional computers, at least not on this planet. The mass of a black hole with radius 100 meters is vastly larger than the mass of the Earth (though not as large as the mass of the Sun). Any smaller amount of mass would have lower information density.
--David Battle 14:22, 5 August 2005 (UTC)
- Re: David Battle's assertion:
- "As you may have heard, the "dimension" of an n qubit register is 2^(n+1) - 2, meaning that it has the same information content as 2^(n+1)-2 conventional bits."
- This is false. That is, dimension and information content are two very different things. For example, the interval of real numbers x such that 0 ≤ x ≤ 1 has dimension 1, yet contains more information than a single "conventional" bit. --CSTAR 16:28, 5 August 2005 (UTC)
- Ok. Can someone give a formula for the information content of an n qubit register then? Also, I don't see how you could say that something had a certain "dimension" unless there were at least two possible values on each "axis". In that case, my estimate for the information content of a quantum register is a lower bound, and doesn't change the validity of my argument.
- --David Battle 16:41, 5 August 2005 (UTC)
- Sir, you have just proven the universe does not exist! (For example, a spin-1/2 lattice system with 640 sites, indeed a very modest quantum mechanical system, cannot exist.)
- Good work! Now we can all retire and wallow in our non-existence!--CSTAR 17:17, 5 August 2005 (UTC)
- Perhaps there is not as much information stored in such a system as is currently believed. Also, I'm not exactly sure what a "spin-1/2 lattice system with 640 sites" is, but are you honestly suggesting such a system contains 2^640 bits of information? 2^(2^640) microstates??
- --David Battle 17:21, 5 August 2005 (UTC)
- These are fairly simple models for ferromagnets etc. Moreover, I never said (nor does the article say) anything about bits of information; I said dimension. That's why I made the clarification above. See topological dimension and Bloch sphere.
- Your argument seems to be based on the assumption that a space of topological dimension n can be used to physically store (from an input device) n bits of information and physically retrieve n bits of information. For large n this is quite likely not true (for the reasons you suggest), but your argument has absolutely no bearing on the physical realizability of Shor's algorithm to factor numbers of the size you consider.--CSTAR 18:18, 5 August 2005 (UTC)
- Ok, now we're getting somewhere. Current SIMULATIONS of quantum computers use 2^(n+1) bits of storage to simulate an n qubit register. This makes it intractable to simulate more than 30-40 qubits. Based on your comments it would appear to me that there should be a way to simulate quantum computers using much less memory. Could you help me calculate how many microstates are possible in an n qubit register? Thanks,
- --David Battle 18:25, 5 August 2005 (UTC)
- "Based on your comments it would appear to me that there should be a way to simulate quantum computers using much less memory."
- Oh really? Where did I suggest that? I was talking specifically about limits of information flow from outside to inside and conversely, not about how to simulate what happens inside.--CSTAR 18:34, 5 August 2005 (UTC)
- So how many microstates would you estimate are possible "inside" an n qubit register? I do not have the math skills to make this estimate, but I would love to know. Really.
- Thanks,
- --David Battle 18:52, 5 August 2005 (UTC)
- I don't know offhand, but surely you can calculate this by some simple lattice model (640 sites) with a reasonable Hamiltonian.--CSTAR 19:07, 5 August 2005 (UTC)
- Well, sadly, I can't. I'm a CS major. I've never studied lattices, and the only kind of Hamiltonian I'm familiar with is Hamiltonian paths in graphs, which I suspect is only tangentially related to what you are talking about. And I never really quite grokked complex analysis. I would say most CS majors don't understand that stuff either. Computer scientists think in terms of state machines. If quantum computing is going to be understood by CS majors, it is going to need to be described in terms of microstates and state transitions. Any system where you put some bits in, perform some "operations" (which is equivalent to putting some more bits in) and then take some bits out can be modeled by a state machine (maybe a non-deterministic state machine if needed). I am just trying to visualize the shape and size of the state machine described by an n qubit register, to put it in terms that I know how to think about.
- Thanks,
- -David
- CSTAR, please see below. --David Battle 16:45, 6 August 2005 (UTC)
Okay, let me try to explain (I studied maths and CS, not physics). A digital computer has a finite number of states (2^n for n bits), but a quantum computer has an infinite number of states. For instance, the state of a 1-qubit computer is described by one complex number, which is two real numbers (the real and imaginary parts of the complex number). Since a real number has an infinite sequence of digits after the decimal point, it can never be stored exactly on a digital computer, but you can store an approximation to it as a floating point number, like a float in C. If you simulate an n-qubit quantum computer, you need 2^(n+1) − 1 real numbers to describe the state, so you can approximate these by 2^(n+1) − 1 floating point variables and hope that this works. This is explained in the section "Bits vs qubits" in the article.
Now, there is an important difference between classical and quantum computers: you cannot determine the state of a quantum computer. You can only do measurements, which give you some information about the state (and change the state in the process). This is why it is not that easy to determine how much information you can get out of an n-qubit computer. There is probably a formula for it, but I don't know enough physics to make a guess.
In short, classical computers and quantum computers are very different beasts, and most of the standard models of theoretical CS like Turing machines or state machines do not apply. Perhaps it's better to think of a quantum computer as an analog computer. -- Jitse Niesen (talk) 21:35, 5 August 2005 (UTC)
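A sketch of the memory cost Jitse describes, assuming one complex double (16 bytes, my assumption) per basis-state amplitude:

```python
# Storing one complex amplitude per basis state of n qubits needs 2^n
# complex floats, which blows up long before n = 640.
for n in (10, 20, 30, 40, 640):
    gib = 2**n * 16 / 2**30  # 16 bytes per complex128, converted to GiB
    print(f"n={n}: 2^{n} amplitudes, ~{gib:.3g} GiB")
# n=30 is ~16 GiB, n=40 is ~16 TiB, and n=640 is beyond astronomical.
```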
I understand that "hypothetically" a qubit contains an "infinite" amount of information. But saying "there's an infinite amount of information but we just can't get to it" sounds like a cop-out to me. Physicists like to talk about microstates (see the article on Entropy), and it seems to me that this concept might form a conceptual bridge between "quantum computer" concepts and state machines. --David Battle 21:58, 5 August 2005 (UTC)
- I realize that you understand more of it than I thought. I don't know enough statistical mechanics to help you further, sorry. -- Jitse Niesen (talk) 22:23, 5 August 2005 (UTC)
- Ok, I found a paper by some guys at Yale who claim to show that the information content of an N qubit register grows as N^2 [2]. If that is the case, I must withdraw my assertion that a 640 qubit register is impossible, because there would only be about 409600 bits of distinguishable information in such a register. However, if that is the case, then such a register should be able to be simulated by maintaining only about 409600 bits of dynamic state information, although the (fixed) state transition table could be *much* larger. I am still trying to work out just how large it would be. Stay tuned.--David Battle 16:45, 6 August 2005 (UTC)
- Ok, some more tidbits. I found an article by Dr. Michael Frank of the University of Florida at Gainesville which asserts that it takes one Planck unit of "action" to toggle one bit. The units of action are energy times time. So, figure that for the typical "unitary operator" one might flip about half of the observable bits. Further, for quantum algorithms to be efficient they need to be limited to a polynomial number of operations (for example, this was one of the constraints Shor's algorithm had to meet). Now, unlike classical algorithms, quantum algorithms can't "branch" based on the current state, but rather must simply apply a series of operators to the system and then make a measurement. So if the observable state in an N qubit register is N^2 bits (see the discussion above), then one might picture each possible unitary operator as a "bit mask" of length N^2, being 1 in the positions to be flipped and zero otherwise. So, for an O(N^3) (where N is the number of input bits) algorithm such as Shor's factoring algorithm, the total information content of the transforms to be made should be O(N^5). Now this says nothing about how to simulate such an algorithm with O(N^5) classical bits in O(N^3) time, but it does give one a feeling for the "size and shape" of the hypothetical state machine involved, which was what I set out to do. The total size of the machine for factoring a 640 bit integer would probably be somewhere north of 1 x 10^14 bits, or about 2^46 bits, or about 8 terabytes, if my math is correct. So no danger of it collapsing under its own weight into a black hole. Comments? Corrections? Raspberries? --David Battle 02:00, 8 August 2005 (UTC)
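Checking the arithmetic of that last estimate (taking the O(N^5) figure above at face value):

```python
N = 640
bits = N**5             # the O(N^5) information estimate above
print(bits)             # 107374182400000, about 1.07e14 bits (~2^46.6)
print(bits / 8 / 1e12)  # ~13.4 terabytes, the same order as the 8 TB quoted
```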
If I were a high school student with the question "What is a quantum computer?", I don't think the introduction of this article would help me one bit, and the rest of the article would go straight over my head. Of course, a quantum computer is a very difficult concept, but surely the introduction could be made simpler. For example, in lay speak:
A quantum computer works by working out the answer to a question for multiple inputs at once. For example, a question might be "Is this the correct password?". The inputs are then every possible password, for example, "aaaa", "aaab", "aaac", ... , "zzzz". A classical computer would have to ask this question once for each possible password. However, in theory, a quantum computer would be able to ask the question for every password at once, and return the correct password as the answer.
Of course, the above description is in practice not true, for reasons that I don't understand to do with decoherence etc., but at least it gives a reader who has never heard of a quantum computer nor studied any quantum mechanics an idea of what all the fuss is about. I'm not going to edit the article myself as I don't trust my own understanding of quantum computers, but it would be really nice to see an easy-to-read-and-understand description of what a quantum computer is in the introduction.
- Your description is wrong. A quantum computer is only exponentially faster than a classical computer for a very small set of problems. It can't, even in theory, check all passwords at once. In fact, a quantum computer probably wouldn't be faster than a classical computer for everyday tasks like playing Unreal Tournament, since the "clock speed" in most quantum computer proposals is far slower than current classical computers. Anyway, about the introduction... I can't think of a better one, I'm always at a loss to explain these things to people who don't know quantum physics. -- Tim Starling 12:14, August 16, 2005 (UTC)
========
I'll not comment on whether the "password" description is right or wrong, but I do agree with the motivation: plain language is needed to attract people to this field. My definition of "people" is broader than high school students; it includes engineers and even some science majors.
Can someone use a very short statement to tell people what quantum computing is all about? Please do everybody a favor and avoid terms such as "decoherence"; they only scare people away (and some of those people could have made a lot of contributions otherwise). I believe that being able to use plain language requires real understanding of the subject, and I am not sure how many people are qualified to do the job. At least it is worth a try. It will not hurt if you do not claim yourself to be correct.
Spontaneous decoherence as a fundamental limitation
Someone well-versed in QM should incorporate this finding in the article:
Quantum computers that store information in so-called quantum bits (or qubits) will be confronted with a fundamental limitation. This is the claim made by Dutch theoretical physicists from the Foundation for Fundamental Research on Matter (FOM) and Leiden University in an article recently published in the journal Physical Review Letters.
A quantum computer can only function if the information exists for long enough to be processed. The so-called coherence of the qubit ensures that the quantum information remains intact. The researchers have now discovered that the coherence spontaneously disappears over the course of time and with this the stored information as well. This could pose a considerable problem for the development of a quantum computer.
Gyan 21:53:40, 2005-08-25 (UTC)
- The paper only states that measurement causes decoherence, which seems obvious to me; after all, that's the whole idea of measurement. The game is to keep your system isolated from the measurement device while it is evolving, then to measure it only once it has done its work. Here's an enlightening quote about their model:
- At time t = t0 we instantaneously include qubit a(b) in the Lieb-Mattis (infinite range) interactions of the spins on the B sublattice of the Lieb-Mattis machine. We then follow the exact time evolution of the coupled N + 2 particle system at t>t0
- They discuss this interesting measurement model and put an upper bound on decoherence time, but I fail to see any threat to the viability of quantum computing. After all, practical quantum computer designs are informed by experimental decoherence times, which are necessarily shorter than any theoretical limit. -- Tim Starling 07:29, August 28, 2005 (UTC)
- Again, a disclaimer that I'm not well-versed, so take it easy, but the PR goes on to state, "Much to their surprise they discovered that the coherence tends to spontaneously disappear, even without external influences.". Isn't a 'measurement' an external influence? - Gyan 23:50:04, 2005-08-28 (UTC)
- Well yes, you'd think so. The press release does beat it up a bit, but I guess that's what press releases are for. It decoheres without influences external to the measurement device, but the measurement device is indeed external to the qubit. -- Tim Starling 00:25, August 29, 2005 (UTC)
- Maybe I should add another word of explanation. A decoherence time of one second doesn't mean the quantum computer has one second to do all its operations and then that's it. It means the operations have to be fast enough such that quantum error correction can be implemented with a reasonable probability of success. One second is a decoherence time which solid state physicists can only dream of, usually you'd expect it to be on the order of microseconds. Gate operations have to be much faster than this for long-running calculations to be possible. -- Tim Starling 00:38, August 29, 2005 (UTC)
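To put rough numbers on that budget (illustrative values only; both figures below are assumptions, not measurements):

    decoherence_time = 1e-6     # seconds; a typical solid-state figure
    gate_time = 1e-8            # assume 10 ns per gate operation
    print(decoherence_time / gate_time)   # ~100 gates before coherence is lost,
                                          # the window in which error correction
                                          # must detect and fix any mistakes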
- Can you explain what the researchers were much surprised by? - Gyan 06:07:59, 2005-09-01 (UTC)
- Let's take this to private email. Tell me if you didn't receive my message, user-to-user email is unreliable. -- Tim Starling 01:21, September 2, 2005 (UTC)
Quantum algorithms
Often, people say that there are only two useful quantum algorithms: one is Grover's, the other is Shor's. I think that is false, though it is true that the main ideas of quantum algorithms can be represented by these two.
But Shor's algorithm has now been generalized to the Abelian hidden subgroup problem (or period-finding problem; see the sketch after this comment),
with some extensions to non-Abelian hidden subgroup problems.
As an example of the latter, we can name Hallgren's algorithm for Pell's equation, which is useful(?) to crack some PKCs.
Also, it is important to note that the key part of Shor's algorithm is so-called "phase estimation", i.e. estimation of an eigenvalue of a given unitary matrix. This part can be applied to counting problems, realizing a square-root speed-up.
Also, Grover's algorithm is applicable to amplifying the success probability of many classical and quantum algorithms, as Brassard, Høyer, and others have pointed out. (This point was already made in the article, but my point is that the range of application is much wider.)
In the end, I'd like to point out that quantum walks now supply a new toolbox for realizing square-root speed-ups of many classical algorithms based on Markov chains.
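To make the period-finding point concrete, here is a small classical NumPy simulation of the Fourier-sampling step at the heart of Shor's algorithm, factoring 15 with base 7 (the register size of 256 and the other specifics are illustrative choices, not anything from the comment above):

    import numpy as np

    N, a, q = 15, 7, 256                   # factor N using base a; q-point input register
    f = np.array([pow(a, int(x), N) for x in range(q)])   # f(x) = a^x mod N, period 4

    # Suppose the output register is measured and reads 1: the input register
    # collapses to an equal superposition over all x with f(x) = 1.
    amps = np.where(f == 1, 1.0, 0.0).astype(complex)
    amps /= np.linalg.norm(amps)

    # The Fourier transform concentrates probability at multiples of q/period.
    probs = np.abs(np.fft.fft(amps)) ** 2 / q
    print(np.nonzero(probs > 0.01)[0])     # [0 64 128 192], so the period is 256/64 = 4

    r = 4                                  # the recovered period
    print(np.gcd(a ** (r // 2) - 1, N), np.gcd(a ** (r // 2) + 1, N))   # 3 and 5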
Cosmic Implication of Quantum Computers
How does the wave function of the universe differ from a quantum computer? Weren't ALL the particles in the observable universe originally entangled? When did the first "decoherence" take place, and what triggered it?
If I understand the theory at all, we are trying to build a quantum computer that can be "programmed" to detect things like prime factors of very large numbers. How do we know we aren't living in a cosmic quantum computer that is "programmed" to detect "observers"?
Table is wrong
The table in the section Bits vs qubits has glaring arithmetical errors: the absolute value of the complex numbers in the second column cannot possibly be equal to the numbers in the third column. To fix this, pick eight angles \theta_1, \ldots, \theta_8 and replace the kth row of the second column by \sqrt{p_k}\, e^{i\theta_k}, where p_k is the probability in the kth row of the third column.
--CSTAR 20:04, 11 November 2005 (UTC)
- Oops MY BAD, take that back... table is correct --CSTAR 20:09, 11 November 2005 (UTC)
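For anyone else who trips over the same thing: the third column of that table holds |amplitude|^2, not |amplitude|, so the rows are consistent. A quick check with made-up amplitudes (random values, not the actual table entries):

    import numpy as np

    rng = np.random.default_rng(0)
    amps = rng.normal(size=8) + 1j * rng.normal(size=8)   # eight complex amplitudes
    amps /= np.linalg.norm(amps)       # normalize the 3-qubit state
    probs = np.abs(amps) ** 2          # third-column values are |a_k|^2
    print(probs.sum())                 # 1.0, as the table requires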
Computational models category
Should this be put into the "computational models" category? - JustinWick 23:31, 16 November 2005 (UTC)
I work in biomolecular NMR and came upon this accidentally. I was interested to read that there is some possibility of using solution-state NMR as a quantum computer. Is this really viable, and is anybody working on it? The solid-state (Kane) computer does seem very promising. How long does a solution system need to remain coherent for even a single quantum calculation to take place? Is coherence achieved via RF pulses or by some other means? In NMR the system remains coherent for less than 10^-3 seconds; is this long enough for a calculation?
Quantum byte!
Here: http://www.i4u.com/article4684.html
Other uses of quantum computers
I feel that the section "The power of Quantum Computers" overlooks some of the important possible applications of a quantum computer. This section seems to focus mainly on the applications of Shor's algorithm to cryptology and on Grover's search algorithm. Only a short mention is made of a quantum computer's use for quantum simulations.
I would add two more important (theorised?) applications: firstly, the possibility of quantum computers generating not pseudo-random but truly random numbers; and secondly, the phenomenon of entanglement and quantum teleportation. -Ray 28th Jan 2006
- Conventional computers are already able to generate truly random numbers. See Hardware random number generator. -- JS, 29 Jan 2006
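For what it's worth, most operating systems already expose an entropy pool of this kind, typically seeded by physical noise sources; e.g. in Python:

    import os
    print(os.urandom(8).hex())   # 64 bits drawn from the OS entropy pool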