The British astrophysicist A.S. Eddington, interpreting QM, wrote (1928), "It has become doubtful whether it will ever be possible to construct a physical world solely out of the knowable - the guiding principle of our macroscopic theories. ...It seems more likely that we must be content to admit a mixture of the knowable and the unknowable. ...This means a denial of determinism, because the data required for a prediction of the future will include the unknowable elements of the past. I think it was Heisenberg who said, 'The question whether from a complete knowledge of the past we can predict the future, does not arise because a complete knowledge of the past involves a self-contradiction.'"
Does the uncertainty principle imply, then, that particular elements of the world are unknowable--some things knowable, others not, as Eddington has it? More generally, do results in physics tell us something substantial about epistemology--the theory of knowledge? Does epistemology thus have an empirical basis, or empirical conditions it must adequately meet?
Heisenberg's uncertainty principle is sometimes incorrectly understood to be saying more than it actually does. It's true the principle precludes us from simultaneously measuring certain pairs of observable parameters (such as the position/momentum pair) in a quantum system, but the same caveat applies to *any* system that is described by wave equations - not just to quantum mechanics! My point is that indeterminism in quantum systems goes far beyond Heisenberg's uncertainty principle, and is perhaps better justified by the various no-go theorems (Bell's theorem, the Kochen-Specker theorem, the free-will theorem).
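To make the wave-equation point concrete, here is a minimal numerical sketch (my own illustration; the Gaussian packet and all parameters are just assumptions for the demo). Any wave packet obeys the bandwidth theorem sigma_x * sigma_k >= 1/2, with equality for a Gaussian - nothing quantum is involved until you multiply by Planck's constant:

import numpy as np

# Build a normalized Gaussian wave packet on a grid.
N = 2**14
x = np.linspace(-50.0, 50.0, N)
dx = x[1] - x[0]
sigma = 2.0
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Spread in position.
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Spread in wavenumber, via the Fourier transform of the packet.
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2.0 * np.pi / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi)
sigma_k = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * dk)

print(sigma_x * sigma_k)  # ~0.5: the generic wave-packet lower bound

The same arithmetic with p = hbar*k gives Heisenberg's relation; the trade-off is a property of Fourier pairs, not of any measurement apparatus.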
As for the question, uncertainty (in a broad, nontechnical sense) does not invalidate epistemology, but rather makes it necessarily probabilistic: http://plato.stanford.edu/entries/epistemology-bayesian/
For a more concrete example, Solomonoff induction offers a mathematically justified way to set up initial beliefs "from nothing" in a probabilistic universe:
http://www.scholarpedia.org/article/Algorithmic_probability
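(Roughly put - my gloss, not anything from the article: the universal prior there weights each program p that makes a fixed universal machine U output a string beginning with x, M(x) = sum over such p of 2^(-|p|), where |p| is the program's length in bits; so simpler hypotheses automatically receive exponentially more prior weight.)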
Dear Elsts, thanks for your reaction and the links included. Rather than telling us how Heisenberg's uncertainty principle might be incorrectly understood, I believe it would help along the question if you could say more about how it is correctly understood--especially in relation to Eddington and epistemology. I take it that Heisenberg's principle applies to many pairs of measurable quantities, but that does not seem much to the point regarding the Eddington quotation. Certainly, I am aware that the Bell inequalities undercut hidden variable theories, e.g., but that just brings out the broad character of Eddington's claims for an influence of physical results upon epistemology --or so it seems. Given Heisenberg (and Bell), haven't we now justly quit looking for hidden variables? Does physics inform epistemology?
Clearly, there is no avoiding probability and probabilistic results in science. But, no one is saying here, as seems clear, that uncertainty "invalidates" epistemology. (Not Eddington, not Heisenberg, and certainly not me!) On the contrary, the question is whether and how results in physics, and the Heisenberg principle in particular, may influence or condition epistemological scruples or principles.
Interesting idea you mention, of "how to set up initial beliefs from nothing." I must say that it sounds rather Cartesian to me--with the tendency to view epistemology as purely mathematical and a priori, perhaps? Wouldn't that amount to rejecting the prospect of justifying epistemological claims on the basis of results in physics? I wonder what you make of Eddington pointing to "A New Epistemology" (that is the title of the section from which I took the quotation) on the basis of Heisenberg uncertainty (and/or other similar) results. You don't really say.
H.G. Callaway
Dear HG, I would also agree that Heisenberg tells us exactly nothing about epistemology.
Heisenberg's principle only describes, at bottom, fully normal sets of circumstances that do not in the least impair knowledge. For example, the color of something and its surface area are Heisenberg conjugates: it stands to reason that if you shrink a surface area down to a vanishing point, this point's color will become indistinguishable. That does not, however, obviate separate knowledge of either surface area or color under different, equally valid circumstances. Same with position and speed, etc.
There are however things that could be construed as impeding knowledge *in principle* and therefore have relevance to epistemology. Two examples? One is Gödel's incompleteness. Stripped of its bells and whistles, what it means is that no system can ever be known from within that system: to fully apprehend something, you have to position yourself outside that something (be it mathematics, or the Universe itself, etc.) This gives rise to never-ending recurrence relationships at every step of your attempting to acquire knowledge, which becomes a "receding mirage". Interestingly, the second example that springs to mind would be infinities. There is demonstrably an infinity of infinities (if you forgive me, for the time being, this inherently imprecise statement, which in turn you could only further improve by saying 'an infinity of infinities of infinities', etc.) - e.g. the sequence of aleph numbers, and many more besides. You can demonstrate that infinities are for real - and that they can never be apprehended fully ....
Dear Ransford, Thanks for your argument. You take a clear stance here to the effect that Heisenberg tells us nothing about epistemology. I'm much inclined to disagree, though.
I agree with your points, however, about mathematical results, such as those of Godel, "impeding knowledge in principle," as you put it. I think that Eddington is trying to make a somewhat similar point, based on Heisenberg's results. Whether he has quite made the needed point may certainly be doubted. Let me try to expand on what Eddington is getting at in the quotation.
No doubt both position and momentum of a particle, say, an electron, can be measured, or determined by observation. What you cannot do is to measure both the position and the momentum at the same time--within limits which are a function of the Planck constant. The more precisely the one is measured, the more uncertain the other becomes. So, there is an expected "element of knowledge" here which is ruled out in principle by the uncertainty principle--though either position or momentum alone can be measured with great precision.
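(For definiteness, the quantitative form of the limit is Δx · Δp ≥ ħ/2, with ħ = h/2π the reduced Planck constant: it is the product of the two spreads, not either spread alone, that is bounded from below.)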
Some have suggested that this is a matter of a condition similar to that you emphasize above, that to "apprehend something, you have to position yourself outside that something." The first measurement creates a new configuration, which can in turn be observed, but requires attention to the configuration created by the first measurement --so the double measurement of the original configuration is unavailable.
A further interesting implication would seem to be that there can be no precise location in which there is exactly nothing! (There must at least be the quantum fluctuations.) The point may need some greater precision, but I suspect it will stand; and if so, then it seems a very interesting kind of point. If there is no precise location where there is exactly nothing, then, surely, we can't know of such a location.
If the uncertainty principle tells us that there are certain things we cannot know, then this is apparently a genuine constraint upon knowledge, and must be taken into account in some fashion in epistemology--our theory of knowledge. One kind of view, traditionally related, is a positivist view of meaning: what cannot be determined by experiment and observation is non-sense. I think that goes too far, but it illustrates the kind of conclusion which has sometimes been drawn. If you dispute this positivist view, I submit, you are still doing epistemology, and this is a way of putting Eddington's point--though I am not completely happy with what Eddington actually says above.
H.G. Callaway
Well, to begin, we must remember that Heisenberg never wrote anything like "The uncertainty principle says…" or "This is the uncertainty principle", etc. Nowhere in his works (I have studied most, though not all, of them) does he say that.
It was the tradition that created the so-called uncertainty principle as such.
Besides, what Heisenberg does say is that we cannot know at the same time the position and the momentum of a particle. All this happened in the framework of the Copenhagen debate.
Furthermore, as Chris clearly points out, the uncertainty principle does not say anything about epistemology.
In any case, the bottom line is: the uncertainty is not a human, epistemological, or cognitive trait (not to say, even less, a psychological one); it is the very nature of reality, regardless of us ourselves, that matter behaves that way.
In any case, Heisenberg's idea(s) have meant a crucial inflection in the history of mankind and science.
Heisenberg didn't utter any epistemological claim. That does not mean that his uncertainty relation hasn't any epistemological power. The uncertainty relation evolves from a wave model that has experimental adequacy. Heisenberg didn't claim that we cannot get complete knowledge, but that within the uncertainty of wave mechanics the position and momentum of a particle cannot be determined at the same time. So this physical phenomenon behaves as a particle (by its energy) but at the same time as a wave.
In his Physics and Philosophy (1959) Heisenberg philosophized quite a bit on the meaning of quantum mechanics from the point of view of his time, the dominant view among physicists then being the logical positivist/empiricist one. Since that time it has been realized, also within quantum mechanics, that this latter view is problematic as a result of the problem of the theory dependence of observation statements (see, e.g., F. Suppe, The structure of scientific theories (1977)). It turned out that the Heisenberg uncertainty principle, which is at the heart of the quantum mechanical measurement problem, is a paradigmatic example allowing one to study epistemological issues (see, e.g., Willem M. de Muynck, Foundations of quantum mechanics, an empiricist approach (2002); also http://www.phys.tue.nl/ktn/Wim/muynck.htm ). In particular, this approach corroborates that the Heisenberg uncertainty principle should be distinguished from Heisenberg's uncertainty relation, a difference known already since Ballentine's 1970 paper on the statistical interpretation of quantum mechanics, but unfortunately largely ignored by the physics community.
This is an intriguing question. Eddington is well known for criticising the naïve realist position behind 'classical schoolroom physics' and making some headway into a more subtle ontology. However, I suspect he did not get very far and certainly not as far as the enlightenment figures like Berkeley, who took things much further (maybe too far). There seems some confusion about 'the knowable' which he relates to macroscopic physics in the quote. Maybe this is linked to the common practice of calling electrons 'unknowables' at that time in philosophy on the basis that we think we cannot see electrons. But in another sense all we see is electrons. You cannot see an elephant without seeing the electrons that send us the photons. Is Eddington conflating different problems here? I suspect Eddington is right to suggest that our understanding of knowing could do with some tidying up but I am not sure it is so much about what is knowable and what is not knowable but rather about what we mean by knowing and what there is to know. As indicated by others Heisenberg is about what there is to know, not what we 'cannot know'.
Maybe something skated over by Eddington is the difference between knowing tokens and knowing types. If we want to know about all the dynamic tokens in the universe we are dead in the water - no chance. If we want to know the rules for all types, which may be of infinite variety but fully specified by any combination of a finite set of rules then we may be able to do pretty well, and even finish the job. Knowing is mostly about inferring the type of pattern of dynamic disposition that would explain observations. Our brains have evolved to deal with types so that we can predict and survive. Certain values for types in QM are known to 18 decimal places.
In relation to Heisenberg, my understanding is that his principle indicates a simple property of complex waves, as indicated above. There is nothing very new about what he says. It seems new because it is in the context of a new theory that for the first time addresses, in technical terms, the paradoxes raised by having a set of dynamic rules based on continuous variables that in our universe is expressed by discrete dynamic units. The comfortable idea of Democritan atoms finally has to be binned. The problem of how to share out dynamics symmetrically and continuously without being allowed the infinitesimal has to be faced up to. To my understanding, however, the need to face up to this problem was understood by Leibniz, which is why he proposes a 'strange' system of dynamic units that are not entirely deterministic, have no size or shape, and otherwise match up pretty well with modes of excitation in quantum field theory. So in my view Heisenberg was simply pointing out a limit of token dynamic relations that Leibniz had told us about but which most people had ignored because they could get away with it. On the other hand I rather suspect that people like Maxwell and J.J. Thomson would have been aware of these issues, even if they had no particular need to worry about them.
Dear Maldonado et al, We have several good replies to the question here, though, of course, they don't all agree on the answer to this question. I'd like to first look a little closer at something that you say, Maldonado,
"In any case, the bottom-line is: the uncertainty is not a human, epistemological or cognitive (not to say, even less, a psychological trait), but it is the very nature of reality, regardless of we - ourselves- that matter behaves that way. "
Here it struck me that you were thinking of epistemology as the theory of mind, and though epistemology is sometimes taken that way, equating it with theory of mind, there is a normative aspect of epistemology which resists the equation, and instead sees epistemology as, at most, a constraint on theories of mind. Agreeing, then, that Heisenberg's uncertainty principle is a matter of physics and not psychology, say, it does not follow that physical results such as Heisenberg's cannot properly inform or influence epistemology proper --as others on this thread have suggested above.
As Verstraeten puts the matter:
"Heisenberg didn't utter any epistemological claim. That does not mean that his uncertainty relation hasn't any epistemological power. "
Also, I think it of some importance here to see Heisenberg in his relation to Bohr, as has been noted. It may be that Eddington was reading Bohr in 1927-1928, as much as Heisenberg. But though the epistemological implications of the uncertainty principle are mediated through the Copenhagen interpretation --and beyond-- it still seems appropriate to speak of the epistemological implications of Heisenberg's uncertainty principle. I think Muynck's note captures this point in a well-stated manner.
I agree with Edwards where he says, "There seems some confusion about 'the knowable' which he relates to macroscopic physics in the quote." But what I find most disturbing is the contrast Eddington makes between the knowable and the unknowable --he creates an anchor for his mysticism, I suspect. How could the unknowable possibly enter into a theoretical and physical picture of the world? Wouldn't it have to remain outside?
H.G. Callaway
Dear H. G. & friends, Thank you very much for this conversation.
Of course Heisenberg's idea has had serious epistemological implications. I fully agree on this. In science as in philosophy, an interpretation or a reflection in a field can have - as has in fact been the case numerous times - epistemological consequences. This is in fact a sign of the vitality of a given idea.
The great point about quantum theory in general, and Heisenberg's arguments here, is that uncertainty is devoid of any anthropological interpretation. It is nature (or matter, i.e. light) herself that behaves the way she does, regardless of the position of the observer.
We all know the framework of the debate in which Heisenberg took Bohr's position against Einstein, namely the Copenhagen debate. The debate between realism and idealism became in our days the confrontation between internalism and externalism.
The trouble about uncertainty, in the form we owe to Heisenberg, is that the traditional interpretation of reality, truth, and certainty (for instance the Cartesian or Newtonian, or, even farther back, the Platonist and Aristotelian traditions) suffers a wonderful inflection that produced a radical shift in the history of human reason.
Dear Carlos,
I agree with the gist of that but I have to point out that the ontological nature of uncertainty is well understood and stated by Leibniz, who after all did as much as Newton to get physics started. The problem was that nobody could understand how Leibniz's ontological schema could be relevant to real science. Heisenberg merely showed why it was - good progress, but the idea comes from Leibniz. It is a simple logical conclusion from the combination of symmetrical continuous laws of possibility and discrete dynamic units.
Dear Jonathan, I agree with you. Let me please re-phrase your remark, thus:
Leibniz and Newton foresaw nonlinearity. The trouble for them was that they did not have the cultural (I stress this word) apparatus to really cope and work with nonlinear dynamics. This is why both Leibniz and Newton ended up linearizing nonlinearity. This is calculus - integral and differential.
The world had to wait till PCs were developed in order to truly work with nonlinear phenomena. Leibniz and Newton were in this much ahead of their time.
(I leave aside the discussion about Leibniz's and Newton's paternity of calculus - a discussion that we all know well enough.)
Let us not be distracted too much by referring to Leibniz and Newton when discussing quantum mechanics. Leibniz and Newton were engaged in the ontological question of `what is the seat of reality' embodied by the difference between realism and idealism. In the physical practice of quantum mechanics this latter difference does not play any role as long as a realist interpretation can be attributed to theoretical entities. Epistemology enters the stage when questions of knowledge are considered, like: Is the pointer position of a measuring instrument both ontological (in the sense of being a real property of the measuring instrument) and epistemological (in the sense of representing knowledge of the object, provided by the measurement)? In this latter sense the Heisenberg principle as well as the Heisenberg inequality have epistemological meanings. Neither Newton nor Leibniz took the measuring instrument into account. The necessity to do so was only clearly felt within the context of quantum mechanics.
I am not sure that I can agree Willem. The question at hand here is whether aspects of the world are unknowable, as Kant might have claimed. My impression is that Heisenberg is simply formalising something that Leibniz understood: that some of the things we think we want to know are not knowable, not because of some epistemological block but because they aren't there to be known. They are misconceptions about what there is to be known. There are clearly further ramifications to Heisenberg, Bohr and complementarity, but I was wanting to stick to the question.
I am not aware that Leibniz and Newton debated realism and idealism. I agree with Richard Arthur in his 2006 paper with Loptson that Leibniz was not an idealist in the Berkeleyan sense. He thought the world was real, but that reality was composed of purely dynamic units rather than 'particles' and that in some sense this reality was 'in the mind of God', which I think one can handle in an atheistic way without trouble. He did not think reality was in the mind of a man. He was very interested in the dual accounts which we now call the quantum and classical levels, which were understood even then for light (the travels in straight lines account and the shortest distance between two points account). He was clear that the classical trajectory account was something that only applied to aggregates and was not a fundamental account of dynamics. In the fundamental account there would be no position or momentum of a particle because there were no particles. He had no way of knowing what the fundamental account would turn out like (other than for light) but I think he understood why there might seem to be unknowables if one stuck to the aggregate account because he realised that at the fundamental level there was ontological indeterminism.
Dear Edwards, What Eddington actually says above is,
"It has become doubtful whether it will ever be possible to construct a physical world solely out of the knowable - the guiding principle of our macroscopic theories. ...It seems more likely that we must be content to admit a mixture of the knowable and the unknowable. ..."
So, I take it this is a question about physics, and QM in particular. Do we have to admit the "unknowable" in physics? Kant does indeed allow something "unknowable," but not in physics --I believe.
What is attributed to Heisenberg is,
'The question whether from a complete knowledge of the past we can predict the future, does not arise because a complete knowledge of the past involves a self-contradiction.'
I think we can wonder what would be included in a "complete knowledge of the past." This is certainly suggestive of something unknowable, as with Eddington. But the idea that complete knowledge of the past involves a "self-contradiction," in some contrast, may suggest that something expected is not there to know. If so, QM seems to tell us something about knowledge.
It has been argued that Heisenberg, in coming to the uncertainty principle, saw himself as making an argument like that of Einstein. It is not that when light from a star comes past the eclipsed sun, it curves--although there is another, straighter line which it might have taken. That expected line is not there in the curved space-time. (I think we might say, it would take a great deal of energy to put such a line in place.) Likewise, Heisenberg came to the conclusion that the simultaneous location and momentum of a particle, as expected, was not there to be measured.
Our prior scientific suppositions had created expectations about possible knowledge which QM shows incorrect. Yet prior epistemology was surely adjusted to this expectation in some sense --as with Kant's epistemology involving space and time as a priori forms of intuition.
Like de Muynck, I am somewhat doubtful on the relevancy here of your discussions of Newton and Leibniz. Your point seems to turn on some special interpretation of Leibniz. Perhaps you could say more on this?
H.G. Callaway
Dear HG,
There are lots of layers in here. Physical and physics are not directly related in meaning. But even if we ignore 'physical' I think 'physics' must be 'trying to construct the best account of what is really going on'. Time and again it has transpired that this requires looking beyond any 'established framework', as with QM, relativity, Higgs, etc. etc., to questions of ontological significance. Kant's unknowable is outside physics simply because it never existed. If his thing in itself were a reality then it would have been quantised by now. All knowledge is inference. If you can infer an 'unknowable' you know something about it - its existence. It's all downhill from there until you realise 'physics' excludes unknowables a priori (I think).
What worries me about the Heisenberg quote about the impossibility of a complete knowledge of the past is that this is a truism that need not have anything to do with Heisenberg's principle. It is to do with the structure of knowledge in practical terms - and the token/type issue. If Heisenberg really meant that we cannot know the past because we cannot know both the position and momentum of particles then he would be saying the opposite of what you and I think he thought. If there is no fact of the matter about values beneath the limit given by h then there is nothing missing when we want to predict the future, surely?
So I think we are talking about the same question, and I think Leibniz is relevant. We are, I think, talking about the fact that although Heisenberg seems to have suggested that there are unknowables, such as the tenth decimal place on both the value for position and for momentum, he was probably suggesting that what we thought was something to be known was not there to be known, like the straight line around the sun.
The relevance of Leibniz is that he understood that the mechanical account of classical physics posited events to be 'observed' which did not in fact exist at the fundamental level, just like Heisenberg. He said that if you take apart a piece of matter and extract one of its 'particles' you will find it has no size nor shape nor anything of that sort since it is not a particle in the sense of an object but merely a unit of 'entelechy' or 'force' or as Heidegger offered 'drive' (Drang). It would not occupy a 'physical point' and thereby have a position. Rather it would be a 'metaphysical point' or point of view.
The problem is that, in part due to Bertrand Russell's inability to see what Leibniz was driving at, the literature on Leibniz has been dominated by a misconception that this 'metaphysical' aspect has nothing to do with physics. Yet the New System and Specimen Dynamicum, where Leibniz introduces his rationale for monads, show that they are based firmly on the internal elastic properties of matter analysed by people like Hooke, which had been shown to be the basis of collision dynamics. The 'metaphysical' level is for Leibniz the level of the real dynamic unit. A real dynamic unit in 2014 is a quantised mode of excitation of a field - Heisenberg may have been vacillating, but Leibniz's arguments against particles are, as I understand it, now pretty much accepted by physics. What confuses people is that Leibniz says these dynamic units have perception and are 'mind-like', even if most of them in a trivial way. They exist in the mind of God. But that is just 1690-speak for Einstein's 'the universe always knowing how to obey its rules everywhere'. Moreover, Leibniz is really only interested in the 'active' ones. He knows that animals are active so he gives them active units. He also knows that the units *inside* a marble tile are active because they make a ball bounce off it. But he is not impressed that the tile as a whole is active so he does not give the whole tile a dynamic unit. The Nambu-Goldstone theorem suggests that he is wrong on this. Leibniz's interest in the mental as well as physics led to his dual-description theory being called 'psycho-physical parallelism', which is a travesty of what he meant (as pointed out on the Stanford Encyclopedia site on his philosophy of mind). Leibniz scholarship has been in the doldrums for a while but with people like Garber, Phemister and Arthur I think things are moving forward significantly. There are still disagreements but there is a lot of room for synthesis.
I admit that my interpretation of Leibniz is at the edge of the range. However, I do not think it is outside the range. I am not a professional philosopher but have studied Leibniz in detail for ten years. Most of the points made above you will find in the paper at http://www.humanities.mcmaster.ca/~rarthur/papers/LeibnizBodyRealism.pdf by Richard Arthur, who is a key figure in current Leibniz scholarship. I have just this week presented my position at a Leibniz workshop and there was a lot of common ground. My detailed analysis, work in progress, is at http://www.ucl.ac.uk/jonathan-edwards/monadology. I grant that Heisenberg and others gave a new impetus to the issue but I am not sure that crediting this to Heisenberg is more than mythology. J.J. Thomson's first model of the atom, based on vortices, seems to me to be about as unparticle-like as any. I suspect that the people at the cutting edge always understood these issues. It is the popular science commentary culture that assumes reinvented wheels are something new!
Dear HG
let me first quote you:
'...It seems more likely that we must be content to admit a mixture of the knowable and the unknowable. ...This means a denial of determinism, because the data required for a prediction of the future will include the unknowable elements of the past.'
My answer to your question, 'What does Heisenberg's principle tell...', is that Heisenberg's principle says less about epistemology - or the same - as classical physics (CP), because CP also contains elements of the unknown, which are included in the imprecise initial conditions :-)
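A toy illustration of that classical point (my own; the map and the numbers are of course arbitrary): two trajectories of the chaotic logistic map that start within 10^-12 of one another diverge to order one within a few dozen steps, so imprecise initial conditions already make the classical future unpredictable in practice:

# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x).
r = 4.0
x, y = 0.3, 0.3 + 1e-12   # two nearly identical initial conditions
for n in range(1, 61):
    x, y = r * x * (1.0 - x), r * y * (1.0 - y)
    if n % 10 == 0:
        print(n, abs(x - y))  # the gap roughly doubles per step until it is of order one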
Dear Hanckowiak, I take it that your argument depends on a rejection of an "unknowable" element in physics. In any case, if there is, contrary to Eddington, nothing "unknowable" introduced by the uncertainty principle, to hold to this is not to deny the indeterminacy; and this still contrasts with the unknown elements you mention in relation to classical physics. We cannot measure both the position and momentum of a particle, simultaneously, beyond an accuracy which is limited as a function of the Planck constant. A range of results can be predicted, or a statistical average of results, but there will be no predicting individual events dependent on the indeterminate input. This is a rejection of the determinacy of prior Newtonian, and Einsteinian, physics. Eddington does accept the indeterminacy, though (as I've argued above) he misconstrues it in the passage quoted.
The prior epistemology was, I submit, modeled on Newtonian science, with its determinism, and that created expectations of possible knowledge --unknown elements becoming known-- which are disappointed by the Heisenberg uncertainty principle. So, I think this does tell us something about epistemology. This strikes me as quite similar to the epistemological effect of Einstein's work --with its rejection of Newtonian space and time--along with his modifications of the idea of simultaneity. Or, again, Einstein supplanted the Newtonian definition of force, F=ma, to take into account relativistic effects. Even definitions may fall as science advances. Likewise, the uncertainty principle tells us that particular descriptions, say, involving exact location and momentum of a particle, have no application. There is no such thing there to be known or unknown.
I am curious about how time enters into all this, since the Planck constant is defined, partly, by reference to time. The contemporary figure is 6.6260755 x 10^-34 joule-seconds. This is understood as a unit of action. I wonder in particular what this may tell us about time. If we must already understand what time is, to understand the Planck constant, then it would seem that any quantum mechanical theory must employ a concept of time--if the Planck constant is fundamental. Still, I am also aware that there is contemporary resistance to developments of quantum theory (in quantum gravity) which depend on a given or presupposed background space-time. Can the Planck constant be employed without a background space-time?
The Planck constant seems so basic that I can't quite imagine how any quantum theory could start out without it. Maybe I am wrong about this, though. I'd like to hear from those who may know a bit more about the topic. The role of time in Planck's constant suggests that time is basic to quantum mechanics. Even the uncertainty relation is defined by reference to Planck's constant. So, it appears that Planck's constant, too, tells us something about possible knowledge --or the plausible development of theory?
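As a back-of-envelope illustration of how the smallness of the constant screens the indeterminacy from everyday view (my own numbers, chosen only for scale):

# Minimum momentum (and velocity) spread from dx * dp >= hbar/2.
hbar = 1.0545718e-34      # reduced Planck constant, in joule-seconds

def min_dp(dx):
    return hbar / (2.0 * dx)

m_e = 9.109e-31           # electron mass, kg
dv_electron = min_dp(1e-10) / m_e    # electron confined to ~1 angstrom
print(dv_electron)                   # ~5.8e5 m/s: enormous

m_ball = 0.1              # a 100 g ball
dv_ball = min_dp(1e-6) / m_ball      # ball located to within a micron
print(dv_ball)                       # ~5.3e-28 m/s: utterly negligible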
H.G. Callaway
The answer to this question may depend upon what the status of epistemology is with respect of brain function. For example, if it is found that the brain is not an 'epistemological engine', does this make a difference?
My own research implicates a solitonic tunneling quantum coherent matter wave (BEC) as the neural correlate of consciousness. The formation of the soliton depends upon invariance in the boundary conditions (environment) to provide the necessary 'fixed points' (or points of ambiguity) whereby tunneling can take place. So, cognitive states depend upon the Heisenberg uncertainty principle for their very existence! In this respect, the Fractal Catalytic Model suggests that the brain is not an epistemological engine - it is more accurate to describe it as an 'uncertainty engine'.
That uncertainty lies at the heart of cognitive states would seem to require an analysis of what precisely we mean by the terms 'certainty' and 'knowledge'!
Dear Davia, Thanks for your note which suggests an interesting turn. Your suggestion of a link here seems to turn on the idea of epistemology as theory of mind, or as contributing to a theory of mind. I am not exactly sure what is in question in saying, or denying that the brain is an "epistemological engine," or an "uncertainty engine." I think it might help if you said more, or suggested some related readings. If the brain is an "epistemological engine," wouldn't this be consistent with QM uncertainties in the material and functioning of the brain?
In general terms, though, I am inclined to emphasize the normative features of epistemology, as contrasted with any purely descriptive account of how the mind or brain works. This will involve rules or conditions that we ought to follow or should observe, and not merely an account of what actually happens. If QM is generally true, that is, is a valid account of physical reality generally--which I assume it is--then, of course, we would expect that quantum mechanical uncertainties, or indeterminacy, must be instantiated all around us, and in the brain, too. You suggest how this might enter into an account of consciousness. Still, I think you need to say more to make this plausible and to further relate it to the on-going theme and sub-themes.
I would also ask how your talk of a "neural correlate of consciousness" relates to content and intentionality, or the "aboutness" of consciousness --and cognitive thought or processing in particular. How would talk of "tunneling" help here, even given something "fixed," as you say, as boundary conditions--in perception, perhaps?
Interesting suggestion, and many questions!
H.G. Callaway
Lots to say - but first things first.
To clarify a point. The reason that I think that brain functioning matters with regards epistemology stems from the question of what the aim of epistemology could be if there were NO minds! If a truth state is only meaningful from a conscious point of view then perhaps minds and epistemology are inextricably entwined. Perhaps you feel that this is not so.
If there is a necessary relationship between epistemology and minds then perhaps what minds are is also important.
Dear Davia, I am sure that there are brains and that there are minds. I take it that brain function is some constraint upon the functioning of minds. This is to reject traditional dualism. Still, I think that conducting epistemology as if there were no minds is not as interesting as conducting it on the supposition that there are. This is to reject any reductive materialism. (A pluralist, as it turns out, is neither a dualist nor a monist.) I realize that others see things differently, and I do not intend to be dogmatic here.
I think it important to emphasize normative aspects of epistemology and/or methods of the sciences, and I do not see how this could easily be accommodated in a purely neurophysiological or physicalist language or account of related matters. We speak of minds when addressing content--perceptions, thoughts, beliefs, etc. Sometimes, of course, it is better to abstract from particularities of belief and address theories instead, but normative elements still come into play, as do the human actors who theorize.
That should give you some perspective on my own approach. I take it, though, that the head question here, about uncertainty and epistemology is open to various other perspectives as well. So, again, what does the uncertainty principle tell us about epistemology?
H.G. Callaway
Here follow a few short quotations which, as I hope, may help this question along:
"As they are currently formulated, general relativity and quantum mechanics cannot both be right. The two theories underlying the tremendous progress of physics during the last hundred years...are mutually incompatible."
--Brian Greene, The Elegant Universe, p. 1.
"...the gently curving geometrical form of space emerging from general relativity is at loggerheads with the frantic, roiling microscopic behavior of the universe implied by quantum mechanics. ..this conflict is rightly called the central problem of modern physics.
--Greene, Elegant Universe, p. 5.
If we assume here that QM does tell us of the lack of smooth geometry at the sub-microscopic scale of the Planck constant, then it seems to follow that the continuous geometry of GR must be a special case of something more general, which cannot itself be generally described in terms of the smooth geometry of GR. The conflict quoted here is, I submit, the basis of contemporary research on quantum gravity. The conflict suggests that the evidence supporting QM is, in some fashion, evidence against GR, "as it is currently formulated."
H.G. Callaway
Dear Quintana, Many thanks for your engaging contribution to this thread. I believe a related discussion of Hume on causality is of interest here, and you mention several other possible directions of development. It would seem that Hume's skeptical view of causality at least makes room for QM indeterminacy.
There is a brief discussion of Godel's proof included in the Introduction I wrote for my edition of Wm. James, A Pluralistic Universe. So, the following link may be of some interest. I would emphasize that the heart of my edition is in the criticism of James, scattered through the annotations to his text, though I support James' pluralism and his rejection of the "block universe."
https://www.researchgate.net/publication/262792709_INTRODUCTION_THE_MEANING_OF_PLURALISM_My_interpretation_of_William_James%27_pluralism?ev=prf_pub
Readers may recall that James was an admirer of Hume and J.S. Mill, too, though also a critic. I can appreciate James' admiration, though I tend to think both James and Hume excessively nominalist in outlook. Hume on causality has had a long hold on epistemology in the English-speaking world and more generally. We might ask, "How does QM indeterminacy look in the perspective of Hume on causality?"
H.G. Callaway
Here follows a short quotation from the physicist Max Born, writing in his Nobel Prize lecture of 1954, and defending Heisenberg's interpretation of QM. The entire lecture is available online at the following address:
http://www.nobelprize.org/nobel_prizes/physics/laureates/1954/born-lecture.html
Quote Born:
How does it come about then, that great scientists such as Einstein, Schrodinger, and De Broglie are nevertheless dissatisfied with the situation? Of course, all these objections are leveled not against the correctness of the formulae, but against their interpretation. Two closely knitted points of view are to be distinguished: the question of determinism and the question of reality. (pp. 263-264).
--end quotation.
Though I have only quoted the opening, I would urge interested readers to read through Born's argument, and the paper (12pp.) completely. Born represents Heisenberg's argument for indeterminacy as following Einstein's example in its form, and I think the Heisenberg-Born argument compelling. At the same time, I suspect that Born over-generalizes in the direction of a positivist notion of meaning.
There are times, given the array of evidence and the success of theory in prediction, that we definitely should do away with unobservables (usually posits of earlier theory), and this can have a quite revolutionary effect --as in Einstein's elimination of the Maxwellian aether and absolute simultaneity. Born and Heisenberg both see the argument for rejecting the full determinism of classical physics in similar terms. Yet, on the other hand, I want to suggest that such an elimination of unobservables be counted as an hypothesis. Elimination of unobservables can certainly count toward the relative simplicity of such an hypothesis. But, if we are dealing with an hypothesis, alternative hypotheses will usually need to be considered. Regarding indeterminacy, this came in the form of "hidden variable" theories. Thus the relevancy of Bell and Aspect to this question and thread.
H.G. Callaway
Dear Quintana et al, I'm sure there is no hurry with this question. It deserves some thought and time. Born's argument is worth some attention, and it seems to represent the orthodox view of the matter.
I am not myself aware of any significant contemporary support for hidden variables theories. In consequence, I'd be interested to hear of such support, if the point can be explained more fully. I don't have much doubt about formal limits to knowledge, though that is not the focus here. Of greater interest is QM indeterminacy and its relation to causality. Perspectives may very well depend upon focus regarding the question of hidden variables. Someone especially interested in such things may take to a defense, but in my own view, there is not currently a great deal of interest in such views. The aim here is to keep to the mainstream so far as this proves viable, and in the absence of substantial arguments or presentations, I suspect that not much is to be learned along the lines of hidden variables. What would be of interest, I suspect, is a critical discussion of the matter. This takes in entanglement --a topic of fairly broad general interest, in my impression.
H.G. Callaway
Callaway's cautious position seems quite appropriate to me, at least for the time being, since at this time we do not have any experimental way to test whether hidden variables behave deterministically or not. Here de Broglie's analogy with thermodynamics might be illuminating (note that, mathematically, the Schroedinger equation is a diffusion equation with a well-defined diffusion coefficient): quantum mechanics as a theory describing phenomena that, like heat, have a subdynamics not described by that theory. Deviations from quantum mechanical data are to be expected only if measurements are executed faster than the diffusion time of the quantum mechanical description. Such measurements are not yet feasible. For this reason it is pointless to speculate at this moment about questions of determinism of the subquantum dynamics. It has to await faster measurement techniques than the attosecond techniques available to date.
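(Spelling the analogy out in symbols, as I read it: the free-particle Schroedinger equation i ħ ∂ψ/∂t = -(ħ²/2m) ∇²ψ can be rewritten as ∂ψ/∂t = D ∇²ψ with the imaginary "diffusion coefficient" D = iħ/2m; replacing t by an imaginary time turns it into the ordinary heat equation.)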
Remaining within the domain of application of quantum mechanics, the possibility of strict correlations in EPR-Bell measurements constitutes some indication of determinism, unfortunately traded off by Einstein and Bell for nonlocality. In this question a confusion of ontology and epistemology plays an important role, implemented within philosophy of science by the idea of scientific realism. This philosophy ignores the essential role of the measuring instrument in physical theory, which in the modern theories of physics (i.e. relativity theory and quantum mechanics) plays an essential double role: a measurement result is both an ontological property of the measuring instrument (viz. its pointer position) and a representation of knowledge about the object. It was Einstein's mistake in his 1935 EPR article to identify the two by considering the measurement result (i.e. the pointer position) as an `element of physical reality' (intended to be an ontological property of the microscopic object). In view of the thermodynamic analogy referred to above this identification is unnecessary and even harmful: it would ignore the possibility of a difference between macrostates and microstates utilized in statistical mechanics.
I think that the question whether unknowable entities exist is unanswerable. Any answer will need to be accompanied by a stipulation of a measurement procedure valid within the domain of application of the theory the question is formulated in. In my view the present discussion is induced by the Copenhagen idea of completeness of quantum mechanics (to the effect that hidden variables theories would be impossible), an idea based on the logical positivist/empiricist ideas of the 1930s which nowadays are considered obsolete (e.g. F. Suppe, The structure of scientific theories: symposium, 1969, Urbana, Ill.; outgrowth with a critical introduction and an afterword by Frederick Suppe, University of Illinois Press, 1977).
W.M. de Muynck
Dear Muynck & Quintana,
Thanks for the comments, links and kind words. It strikes me that the Wiseman paper might be a good jumping off place for discussion of the Bell inequalities, quantum entanglement and the connected literature. He has quite a few references in the on-line piece that may also prove useful.
Take a look at his opening, kindly supplied to us at:
http://www.nature.com/news/physics-bell-s-theorem-still-reverberates-1.15435
I quote from the article,
As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that "determine the results of individual measurements." The second option is that reality is "non-local" meaning that "the setting of one measuring device can influence the reading of another, however remote."
--end quotation
To start with, I should say that I was a bit dissatisfied with this opening statement of the options. The key word for this is "influence," and this strikes me as not such a good way to state the experimental facts of correlation of measurements in entangled pairs of particles. The facts are that there are correlations of measurements. As the author later stipulates, there is no way to communicate, at a speed faster than light, by means of such correlations. To use the word "influence" at the very start, then invites Einstein's objection that such "influence" must involve, or would have to involve "spooky action at a distance."
Later, the author writes of "causation" via entanglement, in spite of denying the possibility of communication. But my sense of the insistence that there is no communicating via entangled pairs, is that this would require a causal "influence" to propagate faster than the speed of light. It is not that I want to beg the question here, but the opening suggests question begging in favor of an "influence" of measurement of one entangled particle upon the measurement of the other. What is known to be "non-local" is not any influence, but instead the factual correlation.
This opening passage would perhaps not have bothered me so much, except that it was directly followed by a further passage which I also thought somewhat doubtful. I quote from the article:
Most physicists are localists: they recognize the two options but choose the first, because hidden variables are by definition empirically inaccessible.
--end quotation
Here I object to the idea that hidden variables are empirically inaccessible "by definition." This strikes me as much too strong. I suppose that proposals regarding hidden variables are proposals to look for variables not yet detected, and thus proposals within physics. The proposal that there are hidden variables determining the outcomes of QM measurements which are empirically inaccessible "by definition," in contrast, strikes me as the worst sort of metaphysics. Why should this idea be hung on anyone --at the very start of the discussion?
I think much more could be said about this article, but I will stop here at the beginning of the piece to see what further interest there may be.
I suppose that the correlation of entangled pairs is the scientific fact here. This is consistent with the general character of QM predictions, I believe. The randomness of QM predictions is always randomness within limits set by the theory, and the measurements are never completely random. So, even if randomness is basic and irreducible, we might still count the correlations as simply factual and understand them as a limitation imposed by the formalism.
None of this is inconsistent with seeing the randomness as evidence that quantum mechanical phenomena involve something sub-causal, or with the more general idea that causality is an emergent phenomenon. To this point, of course, this is just an idea I've floated in the discussion. But consider the idea that GR is set as a limit of a theory of quantum gravity, and GR is fully deterministic. It stands to reason that if QM tells us that randomness is simply factual, and that measurements in QM are not fully determined by the formalism, then however one might generate GR out of QG, this will involve the emergence of the deterministic (fully causal) character of GR as a special case of partly random QG phenomena.
I see no reason to --start out-- from the idea that measurement of one of a pair of entangled particles "influences" measurement of the other. That might be a reasonable conclusion, perhaps. But how would it be supported?
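For concreteness about the correlations at issue, here is the standard CHSH arithmetic (a sketch of textbook numbers, not anything from Wiseman's paper; the angles are the usual optimal choices):

import numpy as np

# QM prediction for the spin singlet: E(a, b) = -cos(a - b) for settings a, b.
# Any local hidden-variable account obeys the CHSH bound |S| <= 2.
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2          # the two settings on one side
b1, b2 = np.pi / 4, -np.pi / 4   # the two settings on the other

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: the factual correlations exceed the bound

The code states only the correlations; it is silent, as I think the physics is, on whether one measurement "influences" the other.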
I would much appreciate suggestions for an alternative reading of the article. Some of it strikes me as difficult to navigate, particularly the usage of "non-local."
H.G. Callaway
BTW: There is a fuller version of Wiseman's ideas available:
http://arxiv.org/pdf/1402.0351.pdf
Have a look, if you believe that the related discussion of the Bell inequalities and entanglement will benefit the question here.
H.G. Callaway
I found that Wiseman is not on RG, so I sent him a note, including my critical remarks and letting him know that participants here would welcome his replies.
Howard Wiseman is a physicist at Griffith University in Brisbane, Queensland, Australia.
I have some further thoughts on what Wiseman has to say, but I will wait upon replies a bit, from participants on RG or from Brisbane.
H.G. Callaway
Hi H.G.
Imagine a classical experiment with two adjoining identical balls whose spatial orientation is random. At one point, the balls move away from each other under the impulse force passing through their centers. On the basis of the laws of classical physics, by measuring the momentum of one of the balls we know the momentum of the other. Does it not follow from this that the concept of non-locality is over-used in the case of quantum physics?
I think so, and this is based on the belief that quantum physics differs from the classical mainly in using a more general concept of probability; see R.F. Streater, arXiv:math-ph/0002049 v1, 27 Feb 2000.
J.H
Dear All,
I heard from Wiseman, who declined the invitation to participate here. My impression is that his paper at arXiv is part of a larger journal issue to be devoted to the topic, so I imagine that those interested in the present question may prefer to wait on the new publications, rather than taking any position now. (I find problems in the longer paper as well, but suppose that it is intended to evoke debate.)
Meanwhile, some readers may enjoy the following MIT video from their Introduction to QM. The focus is on superposition, though entanglement is in the offing, too. Following the short discussion of the course mechanics, you will get a good idea of content in the first 30 Min. or so.
https://www.youtube.com/watch?v=lZ3bPUKo5zc&list=TLQASbb1V7I-gOzwrBeczM84yhxjPIw9J2
I think this helps make it clear that all the typical, counter-intuitive difficulties of QM, including its rejection of determinism are implications of the theory, as confirmed by experimental results. The randomness is really there, according to QM, as Einstein and many others recognized--almost from the start.
Cheers,
H.G. Callaway
Jerzy, Ray Streater is relatively well-known as a spokesperson for a community of people who do not believe in any shade or stripe of wave function realism (we're not even talking of wave function monism here.)
It is then already a matter of interpretation.
Sane & sound people will rather convincingly argue that a Schrödinger equation legitimately can apply to several variables, and that there is ample experimental evidence for that - which is however basically what Streater disputes if we follow his line of reasoning.
What I'm trying to say is that this line of argument is very far from being cut and dried.
Once again I agree with Callaway. Better to leave Wiseman's paper on `The Two Bell's Theorems of John Bell' out of this discussion, because it takes too restricted a view of the problem of quantum mechanical nonlocality. Hopefully this will be corrected in the extension referred to on page 4 of that paper.
Wiseman's restricted view can be seen from the following quote from the third page of his paper:
``I am using these symbols both for the values and for the events yielding them''.
It is evident from this that Wiseman here perpetuates Einstein's mistake of identifying measurement results of quantum mechanical measurements (i.e. pointer positions of measuring instruments) with `elements of physical reality' (i.e. objective properties of the microscopic object).
The impossibility of such an identification is a consequence of the Kochen-Specker theorem, contextuality of quantum mechanical measurement results being a way to avoid the problems raised by that theorem.
Contextuality (in the sense of nonlocal influences exerted in an EPR-Bell experiment by the far measuring instrument on the measurement results obtained in the other arm of an interferometer) is important in Bell's treatment. However, Bell's treatment does not take into account the possibility that the nearby measuring instrument may have an influence too (which is local).
Concentration on the nonlocal part of the influence of measurement has had a distorting influence on the discussion of the meaning of quantum mechanics.
Thus, it is ignored in Wiseman's paper that Fine's paper (referred to by him) proves that violation of the Bell inequality (BI) is a consequence of the nonexistence of a quadrivariate probability distribution of the measurement results, independently of whether interactions are local or nonlocal. Indeed, violation of BI is a consequence of mutual disturbance of measurement results of incompatible observables jointly measured in one and the same arm of the interferometer, rather than a consequence of nonlocal interactions between different arms.
A discussion of the Heisenberg principle certainly has an important epistemological side, viz. it can illustrate the theory-ladenness of observation/measurement, which has been discussed extensively within the philosophy of science during the last century. Contrary to Bohr's ideas, quantum mechanical measuring instruments are not classical, but should have a part that is sensitive to the microscopic information they are supposed to extract from the microscopic object. Hence, the problem is a problem of the quantum mechanical interaction of object and measuring instrument. Of course, it is justified to try to discuss the same measurement in terms of hidden variables, which would either have to yield the same outcomes as quantum mechanics, or deviate from quantum mechanics for measurements outside the latter theory's domain of application.
W.M. de Muynck
Hi Chris!
I do not agree with your opinion that "a Schrödinger equation cannot legitimately be applied to several variables if we follow Streater's line of reasoning".
In his work of 2000, Streater only notes that physicists generalised the concept of probability so that it can be applied to atoms and molecules. This generalization, in my opinion, lies in the assumption that the probability P has the structure P = A·A*. This trivial requirement, together with natural rules concerning the amplitudes A, allows the connection of Laplace's principle of equal ignorance with the description of dynamical physical systems, which in turn leads to the Schrödinger equations.
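To illustrate the structure P = A·A* with a toy calculation (my own, purely illustrative values): because amplitudes rather than probabilities add over alternatives, an interference term appears that has no classical counterpart.

```python
import numpy as np

# Two alternative paths with complex amplitudes (toy values).
A1 = 1 / np.sqrt(2)
A2 = np.exp(1j * np.pi / 3) / np.sqrt(2)

p_classical = abs(A1)**2 + abs(A2)**2  # probabilities add: 1.0
p_quantum = abs(A1 + A2)**2            # amplitudes add first: 1.5
print(p_classical, p_quantum)          # the difference is the interference term
```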
Cheers
Dear Hanckowiak, I'm not sure this will help in your exchanges with Ransford, but it seems to the point here to remark that everyone agrees that the Schrodinger equation is deterministic. The randomness and indeterminacy are found in the supporting evidence. That is why so many thought, initially, that Schrodinger would save physics from admitting Heisenberg's uncertainty principle (and his "matrix mechanics"), the mathematics of which was much less familiar at the time --say 1925-1927. Even Einstein thought there was a difference at first. But Schrodinger himself proved the equivalence of his own wave mechanics to Heisenberg's matrix mechanics, and when Heisenberg introduced the uncertainty principle, that sealed the case against a full determinism, of the sort familiar from Newtonian physics, Laplace --and from GR.
H.G. Callaway
Jerzy,
I find a number of inconsistencies in Streater's work, so it's perhaps best to let him speak and not derive in his stead logical consequences from what he otherwise says.
For instance, he scoffs at the notion of a wave function applying to the whole universe, which is a contradiction if you consider that the universe began from a pointlike singularity AND that the Schrödinger equation accommodates several variables.
All the observed phenomena since the beginning of the universe are readily explainable through a combination of decoherence/coherence events in time, and through a "hierarchy" of wave functions encompassing coherence bonds of various strengths.
Streater's approach, though, is to deride this hierarchy and gainsay that it exists at all, which immediately throws up questions such as (a) where is the coherence cut-off in time, and why would that cut-off exist to begin with, and (b) if a collection of atoms and molecules can be bound up within a Schrödinger equation, then why not the set of ALL molecules and particles that exist (to wit, the Universe itself)?
Indeed, if you reverse-engineer from the mathematical assumption that the notion of the wave function Ψ(all particles in the universe) is ridiculous (which is what he says), then you soon unavoidably end up at the notion that Ψ(a,b) is ridiculous as well.
Dear Callaway and Chris!
I think we all agree that QM (quantum mechanics) is not deterministic in the sense of Newton's theory, etc. It should be added that it is also not deterministic in the sense of the theory of hidden variables (see the consequences of Bell's theorem). The wave function satisfying the Schrödinger equation has a probabilistic interpretation, but it is not a probability of certain events, only a probability amplitude. In other words, probability in QM has a structure.
This has important implications, causing differences between classical statistical physics and quantum physics.
When it comes to Streater, who scoffs at the wave function describing the universe, this may derive from the fact that there is only one universe, and to check the consequences of a statistical theory we would need many universes; unfortunately, not everyone is Hugh Everett:-)
Cheers
Dear Ramon,
Nobody should be surprised about an analogy between classical waves and quantum mechanics, since Schroedinger was inspired by precisely this analogy when developing an alternative to Heisenberg's matrix mechanics. Therefore the analogy of oil drops and elementary particles might have been expected, in particular by those who have put question marks behind the silly Copenhagen idea that `unobservability of position' is the same as `not having a position' (note that Bohr did not make this identification himself; he may have been aware that `present unobservability' need not imply `unobservability for all observation procedures to be developed in the future').
It is easy to become enthusiastic about a paper corroborating one's own ideas, and to jump to conclusions. The paper you draw attention to bears witness to such, in my view unwarranted, enthusiasm by comparing the oil drop experiment with Tonomura's interference experiment, depicted in one of its pictures. There is a large difference between the two experiments, to the effect that the oil drop experiment considers an individual drop, whereas Tonomura's experiment registers the cumulative effect of many individual particles building up the interference pattern (as represented by the five small pictures shown underneath the main part showing the cumulative pattern).
Rather than a similarity, the paper illustrates a profound difference between the two experiments: the oil drop experiment yields interference for an individual oil drop, whereas in Tonomura's experiment the interference pattern appears only for an ensemble (represented only by the fifth small picture to the far right). Actually, I think that the result of Tonomura's experiment is a more convincing argument for the existence of a subquantum reality than the vague analogy with oil drops boosted by the paper you refer to.
Willem de Muynck
Many thanks for the recent contributions. I much appreciate the discussion, and I am sure I have learned a good deal through the rich interaction.
My conviction is that hidden variable theories are, by now, old hat. I've thought so for a very long time, and my impression, from a great deal of reading, is that most theoretical physicists agree on the point. The more recent confirmation, as in the work of Aspect, is basically icing on the cake. Further testing of entanglement will no doubt be carried out and refined, but basically, entanglement is a kind of superposition at a (light-speed relevant) distance. It is certainly an amazing phenomenon, but the prospect that it will be found to involve some sort of causal relation, of use to hidden variable theories, is pretty doubtful.
The only doubt I can summon up is connected with the idea of superluminal travel, as in some recent proposals concerning travel by means of contraction of spatial distances (more or less as in Star Trek, it seems). But that has also seemed pretty doubtful to me, as simply requiring so much energy. However, there is a related idea involved in the theory of early cosmic expansion. I have yet to read any proposals on how or whether such prospects might be brought into relation with quantum indeterminacy. In any case, it seems clear that these ideas would take us pretty far beyond Eddington and his views of 1928.
Any thoughts?
H.G. Callaway
Perhaps the followers of this thread will find the following article of interest:
T. Bilban, Epistemic and ontic interpretation of quantum mechanics:
Quantum information theory and Husserl's phenomenology,
Time and Matter 2010 Conf., Budva, Montenegro:
http://tam.ung.si/2010/proceedings/120-bilban.pdf
Bilban observes that the transition from quantum to classical is based on the logical postulate that to describe something it is necessary to be outside the described set. This postulate operationalistically explains the cut between quantum and classical in the process of measurement and is thus (more or less) identical to Heisenberg's consideration of this problem, known as the "Heisenberg cut": "There arises the necessity to draw a clear dividing line in the description of atomic processes, between the measuring apparatus of the observer which is described in classical concepts, and the object under observation, whose behaviour is represented by a wave function" (p. 169).
Dear Peters, Many thanks for your suggestion. I believe the topic is worthy of some attention.
In my impression, the contemporary emphasis on "decoherence" in quantum mechanics has superseded the older Copenhagen stress on the distinction between the QM system and the observer. That is to say that interaction with an observer or measuring apparatus is only one among the interactions which may produce decoherence, or collapse of the wave function. This in turn happily puts the topic on a more objective basis. This idea is the more recent take on the "measurement problem."
Perhaps you and Ransford will be in a position to elaborate, but as it seems to me, starting with decoherence, there will be less stress on the realist postulate or on the contrast between QM and classical concepts. In any case, I suppose that the classical concepts must re-enter somewhere along the lines of prospective theoretical developments, since we want quantum theory, taken as universally valid, to converge on classical theory at an appropriate limit.
H.G. Callaway
Dear all, I'd like to point out a new paper, by Hal M. Haggard and Carlo Rovelli, which has just been made available. The short title is "Black hole fireworks."
https://www.researchgate.net/publication/263663012_Black_hole_fireworks_quantum-gravity_effects_outside_the_horizon_spark_black_to_white_hole_tunneling
(That's the RG address.) I think this paper interesting for the present question in so far as it concerns the relationship between GR and QM--and, of course, in relation to the theme of "black holes." As I understand it, this is a work in mathematical physics, related to, but differing from, Hawking's and Bekenstein's work on black holes and black hole radiation. I wonder if we might get some comments on the paper which would be helpful for present purposes. One basic idea is the "tunneling" of a black hole into a "white hole," and there are some related considerations of causality.
In substantial agreement with a point I take from Hawking, they write:
...certainly classical GR fails to describe Nature at small radii, because nothing prevents quantum mechanics from affecting the high curvature zone, and because classical GR becomes ill-defined at r = 0 anyway.
---end quote
Have a look.
H.G. Callaway
In response to Callaway's quote given above, I would say that it is good to read that, evidently, GR is no longer considered a (hopefully) universally valid theory, but is supposed to be applicable only to a restricted physical domain. Similar considerations about quantum mechanics may have led 't Hooft to turn to hidden-variables (sub-quantum) theories to deal with problems met at the Planck length. The importance of a theory's domain of applicability has particularly been stressed by the structuralist view of physical theories discussed in F. Suppe, The Structure of Scientific Theories (1977). Even though this view, like any other, cannot be proven to be universally valid, it is, from a pragmatic point of view, worth taking seriously.
W.M. de Muynck
Dear De Muynck, That GR is no longer considered universally valid seems to me most basically an implication of taking QM quite seriously. ("Material implication," if you will.) I am aware that people had their doubts about the universality of GR beginning in the 1920s--including Arthur Eddington. Eddington thought even the early quantum theory, with its "atoms" of action, didn't fit together with the continuity and determinism of GR. This doubt intensified with the advent of matrix mechanics and the uncertainty principle.
Trying to bridge the tension between GR and QM is a chief interest of people working on quantum gravity, and in consequence, I think we owe much of the credit to those particular investigations and theorists--among whom I would certainly include Hawking, Penrose (and many others!) and also the people most responsible for the development of loop quantum gravity. Of these, I am familiar with something of the work of Rovelli--always a very clear writer and thinker. There are others who should be mentioned at some point, too.
Some of my philosophical colleagues have been great admirers of Suppe and structuralism--as associated with his work. "Structuralism," of course, has many different meanings, and I doubt that anyone accepts all the varieties. (Where I come from there is a men's clothing store called "Structure," and if you want to look like the establishment, perhaps that's where you go!) Certainly, I do not think that the early structuralism of Eddington stands up very well. A key issue, more generally, I suspect, is whether one can make out any general and viable distinction between "structure" and "content." If that distinction fails or varies, then we've got a very large "fudge factor." An alternative may be to simply look to acceptability of theory and the variations in live development of alternatives--taking the variability of approach seriously. If classical determinism could go, what might not be next?
H.G. Callaway
Dear Callaway,
I do not think that quantum mechanics is a reason to deny GR universal validity. I do not see any reason to take quantum mechanics more seriously than GR in this respect. Quantum mechanics, too, may have a restricted domain of application. Actually, standard quantum mechanics (i.e. the quantum mechanics of textbooks and of most research papers) has a restricted domain (indeed, Haroche and Wineland earned their 2012 Nobel Prize for extending that domain), and, although generalized QM (in which observables are described by positive operator-valued measures) has a larger domain, we have no reason to believe that it is universally valid. In particular, it is not outrageous to think that a theory developed to describe microscopic phenomena may not be applicable in the macroscopic (let alone cosmic) domain. Due to the `theory-dependence of observation statements' it is quite reasonable that observation procedures which are very different in the domains of two theories will require the theories to be very different too.
W.M. de Muynck
Dear De Muynck, You make an interesting argument, but I am not convinced. One way of looking at the matter is that the domains of GR and QM overlap. Perhaps your thought is that the domains of application are typically different, or that they can be made different by stipulation--or by some formal artifice. But given GR, we get the prediction of singularities in intense gravitational fields. Any mass above a certain density will collapse into a black hole with a singularity at its center. This point has been evident, at least, in the quotations from Hawking and from Haggard and Rovelli. Formally, this is a matter of a point of infinite density--which is taken as a clue that Einstein's equations break down when projected into such extreme conditions of density. (Literally, they predict something known to be physically impossible.) Somewhere before reaching the mathematical extreme of a point of infinite density, we get to the scale of the usual application of QM, with its implications of a minimal Planck length. The argument of Haggard and Rovelli concerning a "quantum bounce" comes in at this connection.
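To give a number to "above a certain density," here is a small sketch (a standard formula; the values are illustrative) of the Schwarzschild radius r_s = 2GM/c^2, the size below which a mass M lies inside its own horizon:

```python
# Schwarzschild radius r_s = 2GM/c^2, in SI units.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # one solar mass, kg

r_s = 2 * G * M_sun / c**2
print(r_s)  # ~2950 m: compress the sun below ~3 km and it becomes a black hole
```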
I think other arguments could be made concerning domains of application, but it seems to me that if we artificially restrict the application of QM to the (sub-) microscopic, then certain problems are being ruled out of consideration artificially. One example is the problem presented by the Gedanken experiment of Schrodinger's cat.
The microscopic and the macroscopic are, obviously, connected, since relevant measurements of the microscopic show us this. I take it that the "theory dependence of observation" should primarily be understood as informing us about how theory may facilitate observation, and related limitations are not properly understood as prohibitions. For example, the Newtonians could make the same observations and measurements of the bending of the path of starlight --which confirmed GR.
H.G. Callaway
Dear Callaway,
I would say that the singularities of certain theoretical descriptions are sufficient reason to doubt the applicability of the theory, at least to that part of reality for which the singularities are predicted. Every physical theory will produce deviations from actual observation when the boundary of its domain of application is approached. Schroedinger's cat is even far beyond the boundary of quantum mechanics, since the latter theory does not tell you anything about the behaviour of an individual cat. Maybe a cat can be used as the pointer of a measuring instrument in a quantum mechanical experiment. But within the quantum mechanical theory we do not have any possibility to connect the measurement result (i.e. pointer position `cat dead' or `cat alive') with a quantum mechanical property the atom had prior to the measurement. Quantum mechanics can yield only statistical information about an ensemble.
W.M. de Muynck
Dear De Muynck, What you say about singularities and the connection to "applicability" strikes me as somewhat ex post facto reasoning. First we have the Einstein field equations of GR, and they predict singularities. But singularities are problematic, so one decides (after the fact) to restrict applicability. In consequence, we only find out what is to count as the "boundary of its domain of application" after the difficulty is discovered. That sounds a lot like a fudge factor designed to save the theory from possible counter-example--in the present context. Of course, if you said, simply, that it is safe to use GR as long as we avoid the singularities, then there would be no objection. What is your point? The theory still has the problem even if we only use it where its application is unproblematic.
But consider the corresponding argument to save Newtonian physics from Einstein's counter-examples. Are we to say that there was really no problem at all with Newtonian physics, in spite of Einstein's quantitative prediction of the bending of the paths of starlight in the vicinity of a massive object like the sun (observed during an eclipse, of course), or that there was really no problem with the failure of Newtonian physics to predict or explain the precession of the perihelion of Mercury--because the Newtonian theory does not apply at these extremes and limits? Was Einstein's GR confirmed as against Newtonian gravitation or not? If you artificially and ex post facto restrict the applicability of Newtonian theory, then, apparently, GR is not a superior account of the related phenomena. But that is just false.
By the same token, if the singularities are genuine problems of GR, then though we can still use the theory, within certain restrictions, there is a genuine problem of the theory, and it makes sense to look for a superior account of the related phenomena. If you dispute this point, then I think you have a problem with Rovelli & Co.
Your reply concerning Schrodinger's cat invites a detailed rehearsal of Schrodinger's argument. It strikes me, though, that your conception of "domain of application" is pretty conventional--apparently related to the problems of common or usual application, rather than logical. So, I am inclined to reiterate: if GR implies the singularities, and the singularities are physically impossible, then that is a genuine problem of the theory, not an invitation to ex post facto juggling. After all, we can still use Newtonian physics to shoot men to the moon, though it is not a true theory.
H.G. Callaway
Dear Callaway,
Theoretical physics is not afraid of singularities. Actually, for calculational purposes they are very useful--like, for instance, Green's functions. However, Green's functions are not supposed to be fundamental. They are mathematical tools, their physical applicability being restricted to observation procedures which do not have sufficient resolution to distinguish a real object from a point-like one. The Einstein equation of GRT is no exception in this respect.
In GRT there are other places where singularities occur. For instance, when describing an object freely falling into a black hole: as seen from the earth, the object never crosses the Schwarzschild radius, yet as seen from the object itself such a crossing does occur. This difference can be described mathematically by a singularity occurring in the Schwarzschild coordinate frame of the earthly observer, a singularity which does not occur in the Kruskal-Szekeres coordinates of the freely falling one.
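In standard textbook notation (nothing specific to this exchange), the point is visible directly in the Schwarzschild line element:

```latex
ds^2 = -\left(1 - \frac{r_s}{r}\right)c^2\,dt^2
       + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
       + r^2\,d\Omega^2,
\qquad r_s = \frac{2GM}{c^2}.
```

The coefficient of dr^2 blows up at r = r_s, yet the curvature invariants remain finite there; in Kruskal-Szekeres coordinates the metric is regular at r = r_s, and only r = 0 remains genuinely singular.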
It seems to me that these singularities are no reason to abandon GRT. This theory is perfectly applicable if the way measurements are carried out is duly taken into account. In both cases the singularity can be understood as a result of a failing observation procedure, rather than as a property of a black hole, the latter choice being made in a realist interpretation of the mathematical formalism of GRT (which is the usual textbook interpretation). The empiricist interpretation of (the mathematical formalism of) physical theories makes the former choice. As regards Schroedinger's cat, it could be questioned whether the usual realist textbook interpretation of quantum mechanics is the most appropriate one. An empiricist ensemble interpretation of (the mathematical formalism of) quantum mechanics (nowadays experimentally corroborated) at least evades this problem.
It seems to me that you do not like the empiricist interpretation because it is `ex post facto'. You may be right about that qualification: physical methodology takes into account experimental facts. Note, however, that physical methodology does not abide by Popper's falsificationism. Classical mechanics, although inapplicable within the atomic domain, has never been abandoned. It is still applied within its own domain of application (i.e. macroscopic physics), even though it has been falsified in the Popperian sense time and again. Singularities are just extreme examples of a much wider issue, viz. the general necessity to take into account the influence of measurement on the obtained measurement results when comparing theory and experiment.
Probably you are troubled by the apparent circularity that is involved in this. I have been convinced, among others by the work of Nicholas Rescher (e.g. Methodological Pragmatism: A Systems-Theoretic Approach to the Theory of Knowledge), however, that this circularity is non-vicious. I think that Suppe's (and many others') structuralism (perhaps without its emphasis on models) is, indeed, sufficiently pragmatic to fit the methodology that experimental physicists employ, better than the caricature Popper has authored.
Willem M. de Muynck
Dear Muynck, Thanks for your further thoughts on the question. At first reading, I have the curious impression that you have not actually said anything which disagrees with my prior notes, though all the rhetoric points toward your claiming to do so. You bring in some matters not discussed before in this thread, such as Popper and falsificationism, but this seems neither here nor there.
If you aim to defend your conception or understanding of structuralism, that's fine by me. I'm just wondering what this amounts to on your view. In spite of your mention of very impressive sources, this seems to me quite unclear.
I believe it is the singularities predicted at the centers of black holes which are most to the point, and not the question of whether mathematical singularities are sometimes harmless enough. I find it very strange that you should write that singularities "are no reason to abandon GRT." This suggests that you did not understand my corresponding argument: I wrote that we could confidently shoot a man to the moon using Newtonian theory, and so, in a similar way, we could confidently shoot a rocket to Mercury, carefully taking Einstein's (not Newton's) advice about where it will be when the rocket might arrive. In spite of your suggestions to the contrary, there is no dispute here about making good use of existing, established, or even historically valuable theory wherever and whenever problems do not arise. But as Einstein showed, as against Newtonian theory, there are situations where problems do arise.
Among these, I reiterate, is the interpretation of GR as it predicts points of infinite density within black holes. If this is not a problem, as you seem to contend, then your version of "structuralism" seems to make no room for theories of quantum gravity which start from the supposition that it is a problem. You provide no answers on this, and you do not address the question. Thus, it strikes me (IMHO) that you have put the methodological cart before the scientific horse. Your methodology seems to tag along compliantly after the established scientific facts, which is fine with me, but then suddenly asserts its superiority to contrary directions of research in a rather (too strongly) a priori fashion. Is that what we are to understand as the structuralism you recommend?
H.G. Callaway
Dear Callaway,
I have appreciated the Socratic way you were leading our discussion. However, at the same time we were drifting away from the subject of the thread, and lost in the process all the other participants. It is not impossible that the Socratic methodology does not work in our modern times. Since it seems to me that in this way our discussion does not meet the didactic standard I would like it to have, I shall no longer participate in it. Instead, as a final contribution, I will give a brief account of the way I think physics can be a useful discipline.
The methodology I think best suited for the purposes of experimental physics can be characterised as a pragmatic empiricist one. It starts from the mathematical formalism of a theory--like, for instance, standard quantum mechanics--which might be available as a mathematical formalism developed by mathematicians, possibly for quite different reasons, or by theoretical physicists with mathematical capabilities. It is considered a `mathematical formalism in search of a physical domain of application'. Which experimental processes are described by it? I am not interested in the truth of the theory, but just in its applicability. Only if the domain is empty is the theory completely useless. Of course, the larger its domain, the more useful it is.
Singularities are supposed not to belong to the domain of any physical theory, because we are not able to measure infinite quantities. I have no quarrel with Rovelli as to his dealing with theories having such singularities, as long as he treats these as auxiliary mathematical quantities, as we do when using Green's functions. If he considers them to correspond to real entities, I think he is just playing mathematical games.
More generally, the theoretical concepts of physical theories should not be taken too seriously: they are often based on theoretical ideas (like singularities, rigidity, or curved spaces) that have no counterpart in experimental physics. For this reason I think, at least for the quantum mechanical formalism, that a realist interpretation of its theoretical entities is not always justified. I require from the formalism that it describe quantities corresponding to `what we see by means of our measuring instruments', not to `what there is'. This is what I call an empiricist interpretation. This does not imply that there would not exist other quantities than the ones we see; it only means that these latter quantities need not be described by the theory in question (incompleteness of quantum mechanics in the wider sense). Note that the empiricist interpretation restores a certain notion of truth (viz. the truth of measurement results of measurements that have actually been realized). As to standard quantum mechanics: Einstein's elements of physical reality do not belong to that category.
I restrict myself to this short review. A more extended account can be found on my website: http://www.phys.tue.nl/ktn/Wim/muynck.htm as well as in my publications, which can be found there. I think that the pragmatic empiricism involved in the structuralist methodology is pretty close to the way experimental physicists deal with their discipline. I am also aware that this is not the methodology of many theoretical physicists, who often indulge in the exciting intricacies of the products of their minds without paying too much attention to the possibilities and impossibilities of experiment. An empiricist interpretation is probably the weakest possible interpretation of the mathematical formalism of a physical theory, which, indeed, is pretty far from Hawking's idea of being able to look into the mind of God. The goal of the pragmatic empiricist methodology is a modest one, viz. to take just a small step beyond the realist interpretation by taking into account the influence of the measurement procedure (next to the influence of the object), this influence being largely ignored by the realist interpretations we have inherited from the period of classical physics.
I agree with you that this methodology lacks the high flight of imagination that is characteristic of the work in the quantum theory of gravity you referred to. But in my view a big advantage is that the method has already been successful in leading to an extension of the domain of quantum mechanics to experiments that could not be described by the standard formalism. Rovelli will have to wait many years--probably even in vain--before his singularities can be studied by experiments testing physics at the Planck length.
Dear Willem, at least one question: instead of pragmatic empiricism, what about the claim that a physical theory is just a construction that makes room for some empirical quantities? Thank you for your enlightening summary.
Dear Guido,
It seems to me that what I refer to as pragmatic empiricism is not very different from your `construction that makes room for some empirical quantities'. I completely agree with your characterization, because a realist (rather than empiricist) interpretation of the mathematical formalisms of our modern physical theories is at the basis of most paradoxes of these theories.
My reference to pragmatism is methodological, and meant to convey that, indeed, not all empirical data need to be described by one and the same theory. Both empiricism and pragmatism are meant to evade seeing modern physical theories as describing complete models of a real world in the same way this is usually intended in classical physics. Even in classical physics such a realist interpretation is only possible in a superficial way, ignoring all phenomena that point to a microscopic constitution of macroscopic objects.
Dear De Muynck,
I am myself quite satisfied with the development of the discussion on this question. The number of participants, or the number of participants at a given time, matters much less than the quality of the contributions. Thank you for your many fine contributions.
IMHO, we have made good progress. This is not to say that everyone agrees on the chief themes; I would be rather disappointed if that were to happen, since it would suggest that the question is of no great interest. It is natural that people disagree on a question such as the one we have here, and it is natural, too, that people should stop and think about it before making any further contributions.
I suspect we disagree about the status of theoretical entities, quantities, etc.
you wrote:
For this reason I think, at least for the quantum mechanical formalism, that a realist
interpretation of its theoretical entities is not always justified. I require from the formalism that it describe quantities corresponding to `what we see by means of our measuring instruments', not to `what there is'. This is what I call an empiricist interpretation.
--end quotation
Generally, I think it is conceded that theoretical entities are incapable of full definition in terms of observation or observational states or characters. But we do not treat theoretical entities as less than real for that reason alone. So it is, too, with the probability waves of the Schrodinger equation. Certainly realist doubts are sometimes expressed about this idea. One can say that it captures the evidence, though we do not quite understand how this is. But there is less room for realist scruples about the random character of the supporting evidence. Or, so it seems to me.
Sometimes, as it seems, on the very edge of inquiry, a quite positivistic picture of what is going on seems quite appealing--we do not yet know precisely what we are dealing with, and in that kind of situation, we may want to attend chiefly to the evidence and correlations, leaving the conceptual summing up and consolidation for later. More realist pictures become appealing once we get to a stage of inquiry where summing up is more plausible and more viable. So, I say: empiricism, yes! Pragmatism? Maybe!
Sometimes, you'll recall, "pragmatism," has become almost a dirty word, or a word of accusation at least. See my review of Haack for a bit on this. She uses the term "vulgar pragmatism," which basically means something like "anything goes," or "whatever it takes" (to get what one wants). So, my point is that there is quite definitely, "better and worse" in the pragmatic tradition, and attention to that is sometimes definitely required.
Your own contributions, in my judgment, have been of very high quality, and I want to emphasize that point! Leaving the field with questions open is no dishonor. Some questions may well be open for quite some time, and we all have other things to do.
H.G. Callaway
Dear Willem, can your methodological approach contribute to explaining Loschmidt's paradox?
Dear Guido,
I never studied Loschmidt's paradox in depth, but at face value I see a pragmatic solution to it. It is based on the assumption that Newtonian physics, as applied to a system of only a few particles interacting with each other, can be used to deal with systems of 10^23 particles which, moreover, are confined in a finite container in which collisions with the walls have an influence on the dynamics. Stated differently, I would say that Loschmidt applied Newton's theory outside that theory's domain of application.
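The reversibility that generates the paradox can be displayed in a toy sketch (my own, with free particles only, so no container walls): run the Newtonian dynamics forward, reverse all velocities, and the system retraces its path exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.random(10)        # initial positions of ten free particles
v = rng.normal(size=10)    # initial velocities
dt = 1e-3

x = x0.copy()
for _ in range(1000):      # evolve forward in time
    x += v * dt
v = -v                     # Loschmidt's velocity reversal
for _ in range(1000):      # evolve 'forward' again
    x += v * dt

print(np.allclose(x, x0))  # True: the initial state is recovered exactly
```

With 10^23 interacting particles and container walls, the reversal is of course unrealizable in practice, which is just the point about the theory's domain of application.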
Dear De Muynck & Verstraeten,
Loschmidt's paradox is certainly of interest for purposes of the present question.
Looking around just a bit, I found the following article from 2009:
I quote here the abstract:
A quantum solution to the arrow-of-time dilemma
Lorenzo Maccone
(Submitted on 4 Feb 2008 (v1), last revised 25 Aug 2009 (this version, v3))
The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.
--end quotation
You can find the article at the following address:
http://arxiv.org/pdf/0802.0438v3.pdf
Have a look.
It strikes me that this is a quite theoretical piece, but certainly interesting. I would only emphasize here, in the first place, that we don't want "pragmatic methodology" blocking the road of inquiry--to use Peirce's phrase.
H.G. Callaway
Dear Callaway and Verstraeten,
I completely agree with Callaway's remark that the road of inquiry should not be blocked by a pragmatic methodology. But I do not think that there is any danger of this occurring if, at the same time, empiricism is duly taken into account. Indeed, every theory can be used as a starting point, and be developed as far as possible. However, it is sensible to ask every now and then whether your theory has a nonempty domain of application.
Dear De Muynck,
I completely agree with your brief statement directly above. In particular, I agree that "it is sensible to ask every now and then whether your theory has a nonempty domain of application."
The remaining question, put in your terms, regards the physical significance of the "failure of application" of GR to extreme gravitational curvatures, as inside a black hole. We agree that there are no points of infinite density.
You seem to doubt that this has any physical significance, even for theoretical developments such as quantum gravity. Most people in theoretical physics seem to disagree with you about that.
H.G. Callaway
Dear all,
Those who have been following this thread may find the following video, on "Einstein and Eddington" of interest:
https://www.youtube.com/watch?v=BG2sDVjL1wg
It just recently became available on line, so far as I know; unfortunately, I am not sure in which countries it can be viewed. The video originated from the BBC.
I suspect the story is a bit romanticized, in this account, but it covers the period up to Eddington's eclipse observations of 1919, and their acceptance --resulting in great and lasting fame for Einstein of course.
H.G. Callaway
See Charles Sanders Peirce's "How to Make Our Ideas Clear." The only certain thing is uncertainty.
Dear H.G., a thermodynamic definition of entropy doesn't reduce entropy to a gain or lack of information, but to, roughly, energy that cannot be transformed to produce mechanical work. Does this mean that there are different, non-equivalent definitions of entropy according to the methodology applied?
Dear Verstraeten,
I think that the definition of entropy is a very interesting question. In the 19th-century origin of discussions of entropy, the central questions concerned heat engines and the availability of energy to do work. Engineering and steam engines were an original focus of interest. Afterward, the topic became more theoretical.
The relationship of entropy to information is (or is chiefly) a more recent or contemporary development, in my understanding of the history. Certainly, we can think of information as something available to do work. Consider "knowledge is power." Yet contemporary discussions of information are quite abstract, and information cannot be equated with knowledge. There is much more to the matter.
Developments of this topic are now (or were recently) hotly debated, though I think there is some general agreement on the relationship between entropy and information. All of this comes up, for instance, in the literature concerned with the entropy of black holes--concerning which Eddington, of course, long remained a skeptic, as against the work of Chandrasekhar, for instance.
H.G. Callaway
Dear Professor Callaway, thank you for your comments. The relation between thermodynamic entropy and the Shannon entropy is produced by Boltzmann's constant k_B. If not for this, there would be problems for the equivalence of the two definitions of entropy. However, what is the deep nature of k_B? Is it a fundamental physical constant or a methodological constant?
Dear Verstraeten,
Many thanks for your question, which is a good addition here; and I just recently came across some comments on another thread which broached closely related questions. Note also Quintana's lengthy and suggestive note just above.
Since the questions and suggestions are complex, I would suggest going at this in a step-wise fashion. First of all, then, what is entropy? I have found both more scientific and less scientific definitions and accounts of the term, and of course, as we've already discussed, briefly, the idea has a history. Some of the history is reflected in the ordinary dictionary definition (Webster's):
Entropy: (1875, from a Greek word for change or turn),
1. A measure of the unavailable energy in a closed thermodynamic system that is also usually considered to be a measure of the system's disorder... broadly: the degree of disorder or uncertainty in a system.
comments:
Etymologically, "thermodynamic" concerns both heat and motion, as in a steam engine. So a fundamental question was to specify the general conditions under which heat energy can be exchanged for energy of motion. That the system is "closed" means that it is not exchanging energy with the outside. So, if our steam engine is a closed system, then we see that the ability to do work, or produce motion, depends on the contrasting temperatures of the firebox and the rest of the system. Once the entire system reaches thermal equilibrium, no more work can be done; it reaches a kind of "heat death."
2. The degradation of the matter and energy in the universe to an ultimate state of inert uniformity.
Comment:
This is the famously, threatening "heat death" of the universe.
3. Chaos, disorganization, randomness.
Comment:
Here we make some initial connection to information, since we can think of "randomness" as the static or noise in a signal which competes with any information transmitted by it.
There is an important historical distinction between thermodynamics, in its original engineering development, on the one hand, and "statistical thermodynamics," which is closely connected with the name of Boltzmann, on the other. At first, and in some quarters long afterward, the laws of thermodynamics were treated as fundamental laws of physics--and on that approach, the second law was taken as categorical, or without exception. But in accordance with the statistical conception of thermodynamics, exceptions to the 2nd law (which says that the entropy of a closed system never decreases) are physically possible, although extremely unlikely.
This is because the number of disordered states of any plausible system taken as an example is extremely high, and the number of ordered states extremely low. Thus, if a system starts out ordered, it is extremely likely to be more disordered later in its development. (Order requires maintenance.)
But to take the usual sort of example --of a glass of water falling off a table and shattering, it is physically possible for all the shards to collect themselves into a glass and jump back up on the edge of the table. This is not the sort of thing that we expect to observe, of course, and in consequence the normal increase of entropy has frequently been invoked as indicating the direction (or arrow) of time. While Newton's laws work just as well backward or forward in time, and this gives some sense to the possibility of (unlikely) exceptions to the second law, the second law seems to point to something different.
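The numbers behind "extremely unlikely" are easy to exhibit with a toy model (my own choice of 100 two-state degrees of freedom):

```python
from math import comb

N = 100
ordered = comb(N, 0)         # all 100 'heads': exactly one microstate
disordered = comb(N, N // 2) # 50 heads, 50 tails: ~1.0e29 microstates
print(disordered / ordered)  # ~1e29: why decreases of entropy are never seen
```

For 10^23 molecules instead of 100 coins, the ratio is unimaginably larger, which is why the statistical second law behaves, for all practical purposes, like a categorical one.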
Now to get closer to the conception of entropy connected with information, I think we have to think some about the concept of randomness. Confusingly, I find that the "heat death" idea is often described in terms of approaching uniformity. Some will, no doubt think of uniformity as a kind of order. But in contrast to that notion, it is better to connect the "heat death" idea (or thermal equilibrium) with uniform randomness. If a signal is uniformly random, then, one might say, it is all noise or static and there is no information in it. This suggests, in turn, that as entropy increases, information decreases. Information seems to be a kind of ordering of a system. But what exactly is randomness?
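Shannon's measure makes this precise (the formula is standard; the example values are mine): H = -sum of p_i log2 p_i is maximal for a uniformly random source and zero for a perfectly predictable one.

```python
import numpy as np

def shannon_entropy(p):
    """H = -sum(p_i * log2(p_i)), in bits; terms with p_i = 0 contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy([0.25] * 4))            # 2.0 bits: uniform noise, no structure
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))  # 0.0 bits: fully ordered, no surprise
```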
In an important thought experiment connecting entropy and black holes, the idea arose that if we have some very disordered system, then we could get rid of it by throwing it into a black hole--I believe this originated with Bekenstein and was later taken up by Hawking. The problem is that entropy seems to decrease, since what goes into a black hole is unrecoverable, and the entropy of the system outside the black hole has been decreased by so eliminating the disordered system. To preserve the 2nd law, the idea arose, in reply, that if the disorder in the system outside the black hole decreases, then the entropy of the black hole must increase correspondingly. Thus black holes must have entropy. The mathematical physics of all this demonstrated that the entropy of a black hole is proportional to the area of its event horizon (and so grows with its mass).
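The proportionality in question is the Bekenstein-Hawking formula (a standard result, quoted here for reference):

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^3 A}{4 G \hbar},
\qquad A = 4\pi r_s^2, \quad r_s = \frac{2GM}{c^2},
```

so the entropy grows as the square of the mass: whatever falls in increases the horizon area by at least enough to compensate for the entropy lost from the exterior.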
But what of the information that might be associated with what we throw into the black hole? Whatever is thrown in must have some specific material constitution, state, and energy, and it would seem that the details of all this are eventually lost. Anything thrown in gets strung out and decomposed into elementary particles and eventually simply energy, and according to Hawking what comes out again is a kind of uniform, low-grade radiation, the "Hawking radiation." Has the information encoded in the specific character of what was thrown in been lost? Is it somehow recoverable? Or, in other terms, is the process involved in throwing the disordered material into the black hole reversible (as with processes covered by Newton's laws)--in principle?
These questions seem to me to have something to do with the randomness of QM, and with whether we focus exclusively on the uniform evolution of the wave function, or instead look to the randomness involved in measurements and in the collapse of the wave function. But I should add, that I am not doing physics here, but merely trying to describe, by reference to the related debates of the theoretical physicists, some of what is involved in discussions of entropy and information.
H.G. Callaway
Philadelphia, PA
December 5, 2014
Dear Nagata,
I don't see that the wave/particle duality of QM divides in quite the way that you suggest. Regarding particles, in particular, the uncertainty principle tells us that we cannot simultaneously determine an exact position and momentum beyond limits which are a function of Planck's constant. There are also various other properties of particles, similarly linked, so as to prohibit or defeat simultaneous determinations.
In consequence, if epistemology is related to the properties of particles, on your account, then it would seem it is equally relevant to the wave conception--according to which the "waves" in question are, in any case, standardly interpreted as a matter of probabilities.
As came up earlier in this thread, I believe, the uncertainty principle appears to tell us what we cannot know--or better, perhaps, what questions we should stop asking. Regarding the wave conception, probabilities can be calculated, but not the details of the outcomes of measurements. So, prima facie, the uncertainty principle does tell us both something about the world--there is a fundamental randomness in it-- and something about knowledge (or epistemology); about what we can reasonably hope to come to know.
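For a concrete instance of those limits, here is a sketch (a toy Gaussian wave packet; the grid and width are arbitrary choices of mine) checking numerically that the position and momentum spreads of such a packet saturate the bound Δx·Δp = ħ/2:

```python
import numpy as np

hbar = 1.0
sigma = 0.7                                   # arbitrary packet width
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

prob = np.abs(psi)**2
delta_x = np.sqrt(np.sum(prob * x**2) * dx)   # <x> = 0 by symmetry

k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)  # wavenumber grid (p = hbar*k)
pk = np.abs(np.fft.fft(psi))**2
pk /= pk.sum()                                # discrete momentum distribution
delta_p = hbar * np.sqrt(np.sum(pk * k**2))   # <k> = 0 by symmetry

print(delta_x * delta_p, hbar / 2)            # both ~0.5: the bound is saturated
```

A non-Gaussian packet gives a strictly larger product, so the bound is not an artifact of the example.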
H.G. Callaway
Philadelphia, PA
Dear Nagata,
My best judgement is that QM and the uncertainty principle are basic and general, according to our best scientific accounts of the matters. Part of what is involved in the discussions on this thread is simply to state and understand the accepted theory as clearly as may be without a great deal of mathematics and physical jargon. (That is not to say that it could never be overturned.) Of course, we expect new applications as well, and possible elaborations. Plausible alternatives should also be open to discussion.
Bold scientific assumptions are all to the good, at least in particular cases or regarding particular sorts of problem configurations. They are much better, however, if one can supply some plausible means of testing the assumptions. Theory will sometimes proceed without clear means of testing or without suggesting plausible experimentation or observational evidence. But in such cases, theory is, then, more speculative.
H.G. Callaway
Philadelphia, PA
Dear Nagata,
What you say in reply directly above made me think of a quotation from Newton's Principia, from the start of Book 3. It's the fourth of the four rules of reasoning which he provides to the reader:
In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions.
This rule should be followed so that arguments based on induction may not be nullified by hypotheses.
---end quotation
Here Newton is doing his own philosophy of science, more or less, and though we might question elements of what he says, the clear intent is that something established by experiment should not be put into doubt on the basis of the mere availability of contrary speculative hypothesis. As I read you when you say, "more speculations in novel theoretical schemes must be kept so long as we could test the assumptions," this does not disagree (on a sympathetic reading) with what I take from Newton. But consider your usage of the word "could" in the phrase "so long as we could test the assumptions." How strong or weak must this "could" be --to retain the agreement with Newton?
A very strong reading of your "could" might suggest that an experimental test is ready at hand. A very weak reading may suggest alternatively, that we might happen upon a means of experimental testing at some point or other. (Perhaps we merely hope for a test.) At some point in the weakening, I think, we cross the boundary which Newton meant to mark--leastwise if better established results and theory are thereby put into question.
Hypothesis or "assumption" of theory, may amount to objectionable speculation at some point, when the means of testing becomes very remote and the concern with hypothesis is very deep and elaborate.
I wonder what you may think of the quotation from Newton, and his argument against "hypothesis."
H.G. Callaway
Philadelphia, PA
Dear Gruner,
Thanks for your suggestion. I see no reason to dispute the idea that Quantum theory is "a theory about probabilities of possible future measurements." It is also a theory encompassing past actual measurements as evidence. Perhaps you'd like to say more.
I believe that the question has been answered in the previous discussion. If you are putting forth another idea, please make it explicit.
H.G. Callaway
It tells us there are limits beyond which we cannot go, but also a wide region between limits where we can learn and grow.
Philadelphia, PA
Dear Decker,
Interesting that you should show up with a reply to this old question just now.
I think that if you review the prior answers, you will find grounds to reject the idea that something unknowable is implied by the uncertainty principle--though it has sometimes been so interpreted. It is rather a matter of there not being something to know, where this had before been supposed--in determinist approaches to nature and physics.
H.G. Callaway
Dear H.G. :-) I think it is exactly the opposite: the non-deterministic quantum approach to physics differs from the deterministic and non-deterministic classical approach to physics in that it is quantum!
Philadelphia, PA
Dear Hanckowiak & readers,
It strikes me that your point is rather unclear. "Quantum" is typically a noun, though you use it as an adjective.
The American Heritage, Science Dictionary, says:
Quantum, pl. quanta: A discrete, indivisible manifestation of a physical property, such as a force or angular momentum. Some quanta take the form of elementary particles. ...
---End quotation
You wrote:
[the] quantum approach to physics differs from the deterministic and non-deterministic classical approach to physics in that it is quantum!
No doubt, the "quantum approach to physics...is quantum." Who could disagree with this? But, apparently you attach some special meaning to the word quantum which you think of as clarifying the question on this thread.
I don't see that you have done anything of the sort. Your comment leaves behind something of a bald assertion: "No, you're wrong!"--and a mystery as to why you think so and what else you mean to claim.
Perhaps you can clarify?
H.G. Callaway
Dear Callaway! Thank you very much for your attention.
My understanding of the word 'quantum' (I think the one accepted by most physicists) comes from the fact that QM contains not only the lack of a precise description of the system, but this lack is given in a special way, which is expressed by the use of probability amplitudes instead of the probabilities themselves.
Jerzy Hanckowiak
Philadelphia, PA
Dear Hanckowiak & readers,
Thanks for your clarification on the meaning you attach to "quantum."
Now, as it seems to me, you may still owe readers an explanation of why the meaning you attach, that "QM contains not only the lack of a precise description of the system, but this lack is given in a special way, which is expressed by the use of probability amplitudes instead of the probabilities themselves," leads you to claim that, as you put it, the
quantum approach to physics differs from the deterministic and non-deterministic classical approach to physics.
---End quotation
Of particular interest, I would think, is the connection you appear to see between the use of "probability amplitudes" and the rejection of indeterminacy. I suspect we will find that the answers turn attention back to versions of the "measurement problem."
What is your argument against quantum indeterminacy? As I see the matter, this is virtually the same as the uncertainty principle.
H.G. Callaway
Dear H.G. Callaway
The classical description of a system, even in the case of incomplete knowledge of the initial or boundary conditions, is characterized by the fact that it can be freely improved, to any degree, by refining those conditions. This is something we cannot do in the case of a quantum description, whose mathematical expression is the Heisenberg uncertainty principle. In the classical description, we deal with Newton's principle, which speaks of the independence of the system's dynamics from the initial conditions. In the quantum description, in my opinion, Newton's rule is not applicable, or it is weakened, which is expressed in the fact that the first time derivative of the wave function (the probability amplitude) is proportional to the Hamiltonian operator acting on the wave function.
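In symbols, the rule described here is just the time-dependent Schrödinger equation in its standard form:

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi,
\qquad \psi(t) = e^{-i\hat{H}t/\hbar}\,\psi(0),
```

where the amplitude itself evolves deterministically from ψ(0); the indeterminacy appears only when measurement outcomes are drawn from |ψ|².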
Philadelphia, PA
Dear Hanckowiak,
It seems that you describe quantum indeterminacy. Right?
I don't think the Newtonian concept of determinacy is quite relevant. In consequence, I don't see where you think we disagree.
H.G. Callaway
Dear H.G. Callaway
Right!
I believe that my objections came from the fact that I thought that in one of your statements one could find the claim that indeterminacy in QM is similar to the indeterminacy of classical physics with random variables. Unfortunately, I cannot find this statement, and I suspect that I invented it, for which I apologize in advance. My frustration is the stronger because in a statement of 't Hooft on RG I found this sentence:
"It is therefore remarkable that very simple models do exist in which quantum mechanics can be interpreted exactly as if it \emph{does} describe a single, definite world, a world in which the only uncertainties are due to our inability to determine the exact initial conditions. By studying these models, one (re-)discovers loopholes that put the usual objections against `hidden variable theories' in doubt."
Jerzy Hanckowiak
Dear Hanckowiak and Callaway,
According to Bohr, quantum mechanics does not describe an observer-independent world; it just represents (in a certain mathematical language) our knowledge obtained by carrying out certain measurements.
I think Bohr was right on this issue. He often compared quantum mechanics with thermodynamics, which is about the temperatures we read off from the measurement scales of our thermometers.
Philadelphia, PA
Dear de Muynck & readers,
I think that the term "observer independent" is somewhat loaded with too much carry-over from the early Copenhagen interpretation. After all, what would the substitute be, if we think of observations and measurements as involving a sub-class of physical interactions --in the style of decoherence theories?
The comparison with thermodynamics and the measurement of temperatures will still hold up--as I understand the matter. There is a somewhat similar "measurement problem."
H.G. Callaway
Dear Christian,
It has been well known since Ballentine's 1970 paper (Rev. Mod. Phys. 42, 358, 1970) that Heisenberg mistakenly considered his inequality as expressing his uncertainty principle, the former referring to preparation (i.e. the wave function), the latter (in the first place) to measurement (i.e. the observables).
Dear Admirers of QM!
Let me quote a part of an article from Cosmos Magazine of 6 March 2018 discussing the interesting work of K. Batygin:
“Batygin then started refining the model, realising that he could portray any astrophysical system as a centre surrounded by ever more numerous, but ever thinner, wires until, inevitably, the wires blended into a single plane.
“Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum,” he says. “When I did this, astonishingly, the Schrödinger equation emerged in my calculations.”
This was a surprise, because the equation was thought to be only applicable to phenomena occurring on a quantum scale. It is used to describe one of the most bizarre aspects of quantum mechanics – the way in which subatomic particles behave simultaneously like particles and waves, a condition known as “wave-particle duality”.
Hm!
Jerzy Hanckowiak
Philadelphia, PA
Dear Baumgarten & readers,
So, we are back to the Fourier transformation. But it is surely no mere "mathematical fact" that this finds application in QM. The interesting mathematics is instead selected to express the ascertained facts of the matter, i.e., the Heisenberg uncertainty or indeterminacy. Asking "why FT?" instead of "Why HUP?" seems to put the mathematical cart before the physical horse.
Properly interpreted, as I have long argued, what the Heisenberg principle tells us, taken at face value, is that there is nothing to be discovered about the relationship between the probabilities of particular measurements and the actual measurements made: there is a purely random element. It is more a matter of quantum indeterminacy than uncertainty. This is exactly why QM proved so troublesome to the deterministic tradition in physics--and troublesome to Einstein in particular. (Recall, "God does not play dice with the universe.") Interestingly, Schrodinger (along with many others) shared the same doubts. But the long drawn-out efforts to squeeze and cut QM into the Procrustean bed of determinism basically go nowhere.
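The Fourier trade-off behind this can be displayed in a few lines (my own toy construction): narrowing a Gaussian in position space necessarily broadens its transform, with the product of the two widths pinned at a constant.

```python
import numpy as np

x = np.linspace(-50, 50, 8001)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

for sigma in (0.5, 1.0, 2.0):
    f = np.exp(-x**2 / (2 * sigma**2))      # Gaussian of spatial width sigma
    F = np.abs(np.fft.fft(f))**2
    F /= F.sum()                            # normalized spectral distribution
    width_k = np.sqrt((F * k**2).sum())     # spread of the transform
    print(sigma, width_k, sigma * width_k)  # the product is constant (~0.707)
```

This is the purely mathematical half of the story; the physical content enters with Planck's constant, which fixes the scale at which the trade-off bites.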
The point could be discussed, again, and at length, I am sure; but I think we will eventually reach the same conclusion.
Have a look at the following video, which I think is pretty good--though at times a bit dense.
https://www.youtube.com/watch?v=izqaWyZsEtY
"Understanding the Uncertainty Principle with Quantum Fourier Series | Space Time" The chief theme runs for about 11.5 Min.
H.G. Callaway
Philadelphia, PA
Dear Baumgarten & readers,
What? No defense of your positions, taken so recently?
H.G. Callaway