My simplest answer is: it is not a proof. However, I decided to write because people usually focus on what they are trying to express instead of looking at the broader implications. The broader implication of the opposite claim is sketched below.
I consider it pure absurdity to even talk about parallel universes in the context of QM. Taken literally, the claim says that every probabilistic outcome is always realized. Does that mean, by any chance, that every probabilistic problem or event should be viewed with the same scrutiny? Why, when playing roulette, do people not invoke parallel universes, saying that if only they were in the right one they would be happy winners?
In the probabilistic interpretation of QM it is OUR CHOICE how to interpret it, not an inherent, objective attribute of the world around us. If we really believe in the physicality of wave-function reduction, why are we not talking about a MegaMillions reduction of reality? Is it only because we are arrogant scientists who speak a special (invented) language so that nobody besides us can understand it? This is just silly. So, I am siding with George on this issue.
No, it does not - if it were that easy to connect the metaphysics to the physics, all the ontological issues in quantum mechanics would have been cleared up years ago.
I think quantum computing proves conclusively that the system is in a superposition of states and that the wavefunction collapses - the antithesis of the many-worlds interpretation.
Not really. I am just trying to make a point with that statement. Quantum computing, like any of the quantum mechanical phenomena we've seen in the past 100 years, is subject to interpretation. I think you can use any of the physical results from the past 100 years to support any metaphysical position you want to take.
The Many-Worlds Interpretation is tempting in this case because it is easy to use it to explain what is "really" going on in quantum computing. But that is not the same as proving that those worlds exist.
Doubtful. It is more likely to be proof of subtle, or not so subtle, intellectual bias. I happen to agree with your suggestion of one-universe entanglement as one possible alternative. I am not ruling out parallel universes, or parallel systems of some sort, but saying they seem far from necessary to explain quantum computing.
On the other hand, quantum computing does tend to point to the highly parallel nature of processes within our universe! It may be useful to question how, for instance, electrons throughout the universe that share identical quantum states are truly identical and not merely similar. In that sense, quantum computing tends to point to tight, non-local sharing within the universe itself.
Deutsch's motivation was very powerful, and allowed him to take the first important steps into quantum computing. Although I believe that many-worlds is the only consistent way to understand quantum physics, I don't think that quantum computing proves it; in fact, I don't think we have any definitive proof.
I disagree with @George Fitzgerald, however, in that I don't think that measurement in quantum computing disproves many worlds. See Max Schlosshauer's lovely review article ("Decoherence, the measurement problem, and interpretations of quantum mechanics", Reviews of Modern Physics 76, 1267 (2004)) for a description of how something like measurement works in a many-worlds setting (he is not the first to say it, but this is what finally convinced me).
Finally, there's quite a nice paper from around 2000 by Andrew Steane: arXiv:quant-ph/0003084 "A quantum computer only needs one universe", which goes into some of these details.
In short, you can do all of quantum computing with the standard decoherence/measurement approaches, so you can't say that quantum computing offers a proof (or disproof).
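The "standard decoherence/measurement approaches" mentioned above reproduce quantum-computing statistics with nothing more than unitary evolution plus Born-rule sampling. A minimal hedged sketch in Python (the gate and sampling setup are illustrative; numpy is the only dependency):

```python
import numpy as np

# A single qubit starts in |0>; a Hadamard gate puts it in an equal
# superposition. Measurement is then modeled as ordinary Born-rule
# sampling -- the observed 50/50 statistics follow with no commitment
# to any particular interpretation.
rng = np.random.default_rng(0)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
psi = H @ np.array([1.0, 0.0])                        # (|0> + |1>)/sqrt(2)

probs = np.abs(psi) ** 2                 # Born probabilities, ~0.5 each
shots = rng.choice([0, 1], size=100_000, p=probs)
print(probs, shots.mean())               # sample mean close to 0.5
```

Many-worlds and collapse pictures agree on everything this code computes, which is exactly why neither can be proved or disproved by it.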
In response to the comments of @Bog: I personally think that entanglement theory leads us precisely to the probabilistic arguments you are making. But I can't prove it, and so I have to disagree with you philosophically and not on a matter of physics. I hope that one day this argument will be settled, but until then you are certainly entitled to your opinion! Nevertheless, in a roulette game, according to many worlds, there would be an (almost) infinite number of universes in which a given player wins (why so many? simply because every possible measurement outcome of every interaction contributes). For the longest time I wouldn't believe this, based on conservation-of-mass/energy-type arguments - but once you start looking at the effects of entanglement you (may) realise that the macroscale processes you worry about are built from precisely the same interactions that we observe at the microscale.
And so we come down to a very simple point. Either measurements occur, in which case we need new physics to explain them (measurements are not explained by canonical quantum mechanics). We may find it, and if so it may be that it lies in a deeper understanding of perhaps quantum gravity, complexity or consciousness, or something else. I doubt this as I don't think that we are missing any major chunks of physics in the formalism of quantum mechanics, but I would be overjoyed to be proven wrong. Or there are no measurements, and there is no way for quantum systems within the universe to effect wavefunction reduction. This would lead inevitably (in my opinion) to many worlds type approaches.
Note that the name "many worlds" is really misleading. We (or at least I) are not talking about distinct universes. We are talking about quantum states that are no longer able to interfere with each other, because the amount of entanglement with other degrees of freedom prohibits the kind of interference that we see in, for example, a double-slit or quantum-erasure type experiment. So there is really no need for matter to be created (as long as entanglement is limitless - and it appears to be).
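The claim that entanglement with other degrees of freedom is what kills interference can be made quantitative. Below is a hedged toy model (the environment here is assumed, for illustration, to be a set of qubits each recording the system state with overlap c): the off-diagonal, interference-carrying element of the system's reduced density matrix falls off as c**n with the number n of entangled environment degrees of freedom, while nothing is created or destroyed.

```python
import numpy as np

def reduced_rho(n, c):
    """Reduced density matrix of a qubit in (|0>+|1>)/sqrt(2) after it
    entangles with n environment qubits, each recording the system
    state imperfectly (overlap <e0|e1> = c)."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([c, np.sqrt(1.0 - c**2)])
    E0 = E1 = np.array([1.0])
    for _ in range(n):                    # build the two branch records
        E0, E1 = np.kron(E0, e0), np.kron(E1, e1)
    psi = (np.kron([1.0, 0.0], E0) + np.kron([0.0, 1.0], E1)) / np.sqrt(2)
    psi = psi.reshape(2, -1)              # system index x environment index
    return psi @ psi.conj().T             # trace out the environment

for n in (0, 2, 8):
    print(n, reduced_rho(n, 0.7)[0, 1])   # off-diagonal = 0.5 * 0.7**n
```

The populations stay at 1/2 throughout; only the ability of the two branches to interfere decays, which is the sense in which the "worlds" separate.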
From my perspective, what we really want our interpretations to give us is a consistent understanding and prediction of what we observe. So we want quantum interference at the micro scale and we don't want to observe interference of classical-scale objects (unless we work _really_ hard). Many worlds provides that. However I agree that it is unsatisfying in that we have these quantum systems that continue to evolve, and there is no way to interact with them. C'est la vie.
I am glad that you admit that most of this small disagreement is a matter of semantics or simple quasi-religious belief. What I was stressing in my answer (with some exaggerated language) is that we do not need to believe in the reality of quantum states in order to get the real answer to the (mysterious) measurement. I wanted to stress that we do not have a clear understanding of much simpler notions, like the dualities and the relation between the limited speed of light c and the limited size of action as expressed by h (the Planck constant). All of classical physics rests on the simple assumption that mathematically definable measurements allow for infinite subdivision (like epsilon nets in measure theory), which is our habit of using calculus in terms of infinitesimals. The quantum universe clearly produces stochastic outcomes because, at a certain size of the quantum cell, we cannot define (or assign a value to) a variable; all we can say is that something is inside or outside the quantum cell. Such a statement leads to a stochastic interpretation without any need to resort to multiple universes. The quantum cells are always defined by a pair of dynamical dimensions (like x and p), so you get entanglement because there is always a possibility that one of the dimensions overlaps with another object that quantum statistics renders indistinguishable. So, in essence, quantum probabilities are the result of ordinary classical stochasticity.
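The "quantum cell" above is spanned by a conjugate pair such as x and p. A hedged numerical illustration of the standard fact underlying that picture (the grid and the width sigma are assumed example values; hbar = 1): for a Gaussian wavepacket, the product of the position spread and the momentum spread (obtained via the Fourier transform) sits at the minimum cell area hbar/2.

```python
import numpy as np

# Gaussian wavepacket on a grid; compute the rms position spread
# directly and the rms momentum spread from the FFT, then check
# that their product reaches the minimum uncertainty hbar/2 = 0.5.
sigma = 0.8                                   # assumed width parameter
x = np.linspace(-40.0, 40.0, 16384)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize in x

dx_rms = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

p = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)  # momentum grid (hbar = 1)
dp = 2.0 * np.pi / (x.size * dx)
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)     # normalize in p
dp_rms = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)

print(dx_rms, dp_rms, dx_rms * dp_rms)          # product ~ 0.5
```

Squeezing sigma shrinks dx_rms but inflates dp_rms in exact compensation, which is one way to read the "cell that cannot be subdivided" intuition in standard quantum mechanics.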
You will have to excuse my ramblings, but it is very difficult to be clear in the absence of a commonly accepted language. Let me try another angle. People have a tendency to believe that space exists separately, in the background. What if space is constantly created and changes along with what happens in it? Entanglement can then be understood in terms of dimensionality reduction, where we lose one dimension with each act of measurement.
Ah yes, the common language - I remember we had that discussion somewhere else, did we not? ;) Anyway, I do like the fact that we are now united across different 'religions'.
That is mostly correct; beliefs always determine what we study or what "exists". The problem of how a photon one kilometer in size (as measured by its wavelength) can still be a quantum object is somewhat mysterious unless you invoke quantum cells. An object of such a length would have a low energy and therefore low momentum, and as such would be much more classical. The higher the frequency (the shorter the wavelength), the more quantum it would become. Is that, by any chance, the intuition we are left with from school? So the real test would be how much entanglement we can squeeze out of semi-classical photons (say, 10000 km).
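To put numbers behind the comparison of photon "sizes" by wavelength, a quick check of the energy per quantum, E = h*c/lambda (CODATA constants; the wavelengths are the ones discussed above plus visible light for reference):

```python
# Energy of a single photon versus wavelength: a 1 km-wavelength photon
# carries roughly 2e-28 J (~1.2e-9 eV), about nine orders of magnitude
# less than a 500 nm visible photon (~4e-19 J, ~2.5 eV) -- which is the
# sense in which the long-wavelength photon looks "less quantum" by the
# energy-per-quantum yardstick used in the discussion.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

for wavelength in (500e-9, 1e3, 1e7):   # 500 nm, 1 km, 10000 km
    E = h * c / wavelength
    print(f"{wavelength:10.1e} m  ->  {E:.3e} J  =  {E / eV:.3e} eV")
```

Whether low energy per quantum actually implies classical behavior is, of course, exactly the point under dispute in the replies that follow.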
I agree that understanding the quantum-to-classical crossover may help us with understanding (or even resolving) some ontological issues in quantum mechanics (such as many worlds vs Copenhagen vs objective realism vs other interpretations). On a related note, the measurement problem also needs addressing. However, it is not clear to me that one can make progress in this direction while retaining all the usually held axioms of quantum mechanics. Until one model/ontology can make quantifiable predictions at odds with another, we can make no progress, and all metaphysical models (no matter how absurd they appear to us) remain candidates.
In addition I do not agree that the length of a photon is sufficient to ensure a classical crossover. Please see my paper "On the correspondence principle: implications from a study of the nonlinear dynamics of a macroscopic quantum device" http://iopscience.iop.org/1367-2630/11/1/013014/. While I am no longer entirely happy with my conclusions, this paper discusses some issues with usual expressions of the correspondence principle and you may find it relevant.
Given your interest in foundations you might also appreciate "Quantum measurement with chaotic apparatus" http://arxiv.org/abs/0905.1867.
I would also, in this context, highly recommend a series of papers by Dieter Zeh (published as 'Physik ohne Realität - Tiefsinn oder Wahnsinn?', roughly 'Physics without Reality: Profundity or Madness?'), unfortunately not yet translated into English.
It is somewhat remarkable in this context that I finished my PhD thesis in 1986 and a third of it was devoted to the same subject as your first paper (unfortunately, it is in Polish). I solved a double-well quantum problem and concluded that it was much more difficult to reach the classical regime in SQUIDs than people (in particular A. Leggett) thought at that time - and, I guess, still think today.
There is no question that the length of a photon cannot by itself determine the classical-quantum transition, but it was a useful yardstick that got your attention. I used it only because we had a heated discussion with Andrew about photons on the other thread. My point was that we do not really have a good model/image of a single photon; my thesis was that a single photon simply cannot be a wave (in the classical sense, with valleys and hills and a well-defined length and frequency). So, in this context, the natural question is whether one can produce well-entangled objects that are hundreds of kilometers in size.
Very interesting - I wish I had had a chance to read your thesis; it sounds like you could have saved me some work! I have always held that Leggett's views on the quantum-to-classical transition (i.e. macro-realism) were too simplistic. I am also of the view that SQUIDs will become one of the macroscopic quantum systems of choice for QIP applications. Given your background you might like our recent preprint (http://arxiv.org/abs/1212.4795).
Can I ask for some clarification on your second paragraph? I do not use the term entangled when dealing with single modes/photons. In this case I would stick to "a macroscopically distinct superposition of states", or Schrödinger cat state. Here the non-classical nature of the field can be understood intuitively through the system's Wigner function, which shows non-classical correlations between the phase-space-separated parts of the superposition. I reserve entangled for quantum correlations between non-separable states in a tensor product space (e.g. Bell states). I would expect the former to be susceptible to environmental decoherence in a different way than the latter (although I am not sure quite what form these differences would take - I have still not had breakfast and am still warming up for the day). In terms of the quantum-to-classical crossover of a field mode, you may also be interested in http://pra.aps.org/abstract/PRA/v79/i3/e032328 (it turned out to be a little harder than I had anticipated for us to recover a convincingly classical field in this model). I do not know realistic decoherence parameters for a single large cavity mode, but I suspect that if one could couple such a mode strongly to a two-level system in a Jaynes-Cummings type setup, then there might be some hard-to-get-rid-of quantum signatures in the field.
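For readers who want to see those phase-space correlations concretely, here is a hedged sketch of the Wigner function of an even cat state (|alpha> + |-alpha>)/N, written in dimensionless quadratures with hbar = 1; the lobe separation x0 (= sqrt(2)*alpha) is an assumed example value. The interference fringe between the two Gaussian lobes goes negative, which is the non-classical signature referred to above.

```python
import numpy as np

def cat_wigner(x, p, x0):
    """Wigner function of the even cat state with lobes at +/- x0:
    two Gaussian lobes plus an oscillating interference fringe,
    normalized so that it integrates to 1 over phase space."""
    lobes = np.exp(-(x - x0)**2 - p**2) + np.exp(-(x + x0)**2 - p**2)
    fringe = 2.0 * np.exp(-x**2 - p**2) * np.cos(2.0 * x0 * p)
    return (lobes + fringe) / (2.0 * np.pi * (1.0 + np.exp(-x0**2)))

x0 = 3.0                                      # assumed lobe separation
print(cat_wigner(3.0, 0.0, x0))               # positive, classical-looking lobe
print(cat_wigner(0.0, np.pi / (2 * x0), x0))  # negative fringe between lobes
```

A single mode can therefore be strongly non-classical without being entangled with anything, consistent with the terminological distinction drawn in the comment above.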
(I hope that others reading this thread will not mind that we have strayed from the original question.)
I also recall holding views on Leggett's work in a similar vein - but he got a Nobel for that work.
My general views have crystallized a bit more, and I posted some remarks on two other threads about it - particularly in "What space is made of?" and "What are the most promising theories of entanglement...?" - that might be of help here. My simple observation was that the language (the mathematics) we use for describing QM phenomena does not fit the phenomena. In a sense, the use of continuous functions (fields or wave functions) is mismatched with quantum (discrete) phenomena. They are discrete not because of a mysterious requirement for quanta of energy, but because our understanding of what space is in the quantum world is incorrect. It is space that gets quantized, not the functions. It is as if, in the "theory of measure", instead of continuing a series of infinite subdivisions, you suddenly stopped and the smallest ruler available was too big to describe what happens in that particular part of space. This quantum cell is somewhat curious, as it does not contain space dimensions alone (x, y, z) but must also contain the dual variables (Px, Py, Pz). In a sense, we do not fully understand the notion of motion as defined by the canonical variables.
So, in essence, I do not believe that the language of fields can give us anything more than we have already extracted from it. We just keep discovering new transformations of one approximation into another without discovering a language adequate for describing the phenomena themselves. This is similar to using macroscopic equations such as Navier-Stokes to understand a liquid at the level of individual molecules or atoms. It is much better to use molecular dynamics for that, despite all its shortcomings. And it is relatively easy to recover (by averaging) the field flows from MD simulations, reproducing the NS equations.
Quantum computing must be realized with this kind of representation in mind (discrete space).
Answering your other suggestion, about SQUIDs being good realizations of qubits: I think that electrical addressability makes them good candidates for such a role, but I personally believe we will discover materials (analogous to semiconductors) that can realize qubits at the atomistic level. Realizing qubits requires mixing different physical domains; in a SQUID, for example, you have to mix electronic states (currents) with magnetic states. I am thinking of something simpler: in one- or two-dimensional systems, one electrical signal would control the computation, while another signal, converted to magnetic or mechanical form (as in a piezoelectric), would control the state of entanglement. In such a system a simple electrical signal could switch off quantum states by eliminating entanglement, allowing for higher-level computation.
Just a quick note while I think about your full message. I recall that Leggett got his Nobel prize for his work on superfluid He (which was well deserved), not for macro-realism. Also, quantum does not necessarily imply discrete, as some observables have a continuous spectrum. But I will need to think a bit before I can respond properly to your comment.
You are obviously correct that Anthony got his prize for superfluidity. However, later activities never hurt. For instance, Jack Szostak got a prize "for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase", while in reality his work on ribozymes and the origin of life attracted much more attention and curiosity. I just love that man's inquisitiveness.
Coming back to your remark about continuous spectra: read carefully what I wrote about the conjugacy and duality of variables. The problem of discreteness and quantization appears only when a definition of motion (more generally, of change, i.e. time) comes into play, which happens in all conjugate/dual systems. The quantum cell is like a piece of jello: both independent variables are continuous unless they somehow gang up and become dual. This, in my opinion, is why the problem is so confusing and why our approximations using continuous functions work so well.
I've conducted the first table top experiments that prove parallel universes from QM are real. The press release with a link to the paper, photos and an embedded video news reel with analysis of the experiment is here - https://www.prlog.org/12613870-new-similar-experiments-dramatically-achieve-rainer-plaga-suggestion-to-prove-parallel-universes.html
Let the quantum computer first work in the anticipated way before we come to any conclusions about MWI...
I think there is a lesson worth billions of dollars waiting here, hinting that one simply cannot harvest a physical phenomenon that is not understood, i.e., single-quantum behavior (because QC involves operations on qubits).
If it works at all, i.e., starts producing output, then it will never yield solutions to the expected problems (except occasionally, with a probability that may be smaller than the ratio of the life of an ant to the life of the Cosmos).