While scientific cosmology rarely appears in the work of Karl Popper, it is nevertheless a subject that interested him. The question here is whether the falsifiability criterion can be applied to cosmological theories.
For instance, there are certain ideas in cosmology which have never been refuted, yet the same methods are used over and over despite their lack of observational support, for example the multiverse idea (often used in string theory) and the Wheeler-DeWitt equation (often used in quantum cosmology).
So do you think that Popperian falsifiability can be applied to cosmology as a science too? Your comments are welcome.
I believe in this criterion. Useful theories must be falsifiable. Another, more practical criterion is whether a theory works or not. If it works, it generates new ideas, new insights, possibly testable predictions, etc. The time scale on which testing becomes possible can of course be long.
Here are some articles about tests of cosmic theories. Falsifiability is a must.
http://phys.org/news/2010-10-holometer-universe-hologram.html
http://journals.aps.org/prd/abstract/10.1103/PhysRevD.85.083516
http://www.sciencedaily.com/releases/2010/09/100901091938.htm
http://phys.org/news88786651.html
http://www.popsci.com/science/article/2010-09/researchers-figure-out-how-test-untestable-theory-everything
Regards,
Joachim
Dear Prof. Pitkanen and Prof. Pimiskern, thank you for your answers. One of the problems with your more practical criterion, i.e. to see whether a theory works or not, is that rarely is there a theory that works for all the observational data in question. There are theories which are beautiful and consistent but cannot explain any data, and there are theories which are empirical but able to explain much data. So perhaps it takes time to find a theory which works for 100% of the observations. Therefore, if we follow falsifiability, then perhaps there will be partial refutation (50% refuted or more) of certain theories. What is your opinion? Thanks
Victor,
There are three main concepts that I consider essential for proper science: the scientific method, falsifiability and Occam's razor. It is important to consider all of these in the following scenarios:
1. The scientific method provides a framework to produce viable theories. One must construct a hypothesis, test it through experimentation or observation, and then either communicate the results or revise. However, this framework allows a theory to be continuously revised through non-classical means in order to fit observations. This introduces complexity that often comes without any understanding of the underlying physics.
2. If a theory is allowed to be revised without end, can it be considered falsifiable? For example, say a cosmological model fails to fit distance modulus versus redshift. To fix this problem, several non-classical variables are included in the equations. After somewhat resolving this issue, it is found that angular diameter distances and the volume element are in disagreement with the model. So now theorists must propose a local hole that is in itself incompatible with the theory. Then it is found that the CMB is at odds with predictions, which requires a plethora of other non-classical additions. Galaxy counts don't fit predictions either, which requires the inclusion of "disappearing galaxies" and several other ad-hoc proposals. At this point, every basic observation requires some ad-hoc assumption to fit a particular cosmological model. Is this real science?
3. We see that many of the recent cosmological proposals in scenario 2 are indeed ad-hoc and lack falsifiability in some regard. Nonetheless, this is where Occam’s razor comes in. With the problems discussed in scenario 2, it is clear that one can get almost any theory to match observations by introducing enough non-classical variables and assumptions. The more difficult but proper task is finding the simplest theory that matches all observations. Furthermore, theories will have a mixture of falsifiable and unfalsifiable aspects depending upon overall uncertainty and underlying assumptions. It is therefore important to focus on aspects that can be falsified or properly compared between models.
In conclusion, there are two ways a theory can fail to be falsifiable. One can make a theory unfalsifiable by not looking for simpler alternatives and continuously introducing ad-hoc components to make a hypothesis agree with observations. Then there is unfalsifiability in the sense of pseudo-science, i.e. the theory makes no predictions and therefore cannot be tested (either verified or falsified).
Dear Michael, thank you for your answer. Long time no see.
Btw, to Michael and Prof. Pitkanen and Prof. Pimiskern, I have just completed a new book which is to appear soon. The book discusses some philosophical questions like this in the context of cosmology and science in general. If you think you would like to write a short review of this book, please let me know and I will send the book as a PDF to your email. Thanks.
It's wrong that Popper has not answered the Duhem-Quine thesis. As far as I remember, it is answered in Conjectures and Refutations.
The answer is, of course, not a complete refutation. Popper accepts that what is tested in a single experiment is never a single hypothesis, but a whole set of hypotheses and theories. So, a single falsification will not refute a single hypothesis, but leaves a lot of freedom as to which of the involved theories has to be modified.
But Popper also looked at this from the other point - there are always a lot of different observations, with different designs, different measurement devices, there are also a lot of ways to test separately some parts. In particular, measurement devices can and will be checked independently by measuring a lot of things. We can do this with devices of the same construction, and obtain in this way reasonable bounds for their accuracy. And even if, in principle, each of these tests depends on a lot of other assumptions, these other assumptions are usually not very problematic and quite different from those used in the critical experiments.
So, Popper's answer is something like a "yes, in principle, but irrelevant in reality".
Clifford, a strange position. So, if you claim that 1+1=2 and aliens control our world, and I do not refute 1+1=2, I have failed to answer?
Dear Ilja, Clifford, Michael and others, thank you for your answers. And what about the adage that many scientists use: any spectacular claim should be supported by spectacular proof/data? So it seems to me that once a way of thought becomes a paradigm accepted by many scientists, it becomes harder and harder to refute or falsify it, partly because of that adage. That is what Imre Lakatos called a "research programme". So, a research programme cannot be falsified because many more people are working on it.
Victor,
No problem, I've been busy reformatting the preprint for my latest article and haven't had much time lately. I think this is a good question with regard to modern cosmology, as even inflation has been called into question in terms of falsifiability. For example, the theory of inflation is very tunable and will essentially fit any universe that has a cosmic background radiation. Thus it is testable solely on the premise that the cosmic background radiation exists, yet unfalsifiable because the observed properties of the CMB could deviate from theory and still be accommodated.
Relative to the Duhem-Quine thesis, this is basically what I had mentioned in regard to constant revision of a failed theory. The argument says that any hypothesis can be saved by revising auxiliary hypotheses (and thus be made unfalsifiable). However, this is not always the case, and the Duhem-Quine thesis is fundamentally flawed for a single reason: it assumes that there are no facts. I'll provide a concrete example with respect to my recently published article on cosmology.
The big bang theory states that everything originated from a singularity, which erupted into "space-time", inducing metric expansion. Now, there are many variations of this, including dark energy and dark matter; however, my central focus is on the hypothesis of metric expansion, because the entire big bang theory fails without it. The hypothesis of metric expansion makes a number of predictions such as redshift versus luminosity distance, redshift versus angular size and redshift versus volume element. You can see from my recently published article that the big bang theory fails a large number of well-constrained tests with respect to the latter two (these problems have persisted since the 1960s). In fact, LCDM is conclusively ruled out in its current form.
Now, angular size and the volume element are functions of the rate of expansion, which is constrained by SNIa. Thus there is absolutely no wiggle room for the big bang theory in these regards. Metric expansion makes very precise predictions due to geometry. This is also why over-reliance on confirmation rather than rigorous attempts at refutation is known as pseudoscience, because it only takes one firm observation or fact to rule out a hypothesis. In fact, false theories usually begin to fall apart when one considers all observations together and their influence on one another.
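To make these three quantities concrete, here is a minimal numerical sketch of luminosity distance, angular diameter distance and the comoving volume element for a flat FLRW model; the parameter values (H0 = 70, Omega_m = 0.3) are purely illustrative assumptions, not fits to data:

import numpy as np
from scipy.integrate import quad

C = 299792.458     # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s/Mpc (illustrative)
OM, OL = 0.3, 0.7  # matter and Lambda densities, flat universe (illustrative)

def E(z):
    # dimensionless expansion rate for flat LCDM
    return np.sqrt(OM * (1.0 + z)**3 + OL)

def comoving_distance(z):
    # D_C = (c/H0) * integral_0^z dz'/E(z'), in Mpc
    return (C / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def luminosity_distance(z):
    # D_L = (1+z) * D_C in a spatially flat universe; fixes the SNIa Hubble diagram
    return (1.0 + z) * comoving_distance(z)

def angular_diameter_distance(z):
    # D_A = D_C / (1+z); this is what fixes predicted angular sizes
    return comoving_distance(z) / (1.0 + z)

def comoving_volume_element(z):
    # dV_C/dz/dOmega = (c/H0) * D_C^2 / E(z), in Mpc^3 per steradian; fixes number counts
    dc = comoving_distance(z)
    return (C / H0) * dc**2 / E(z)

for z in (0.3, 0.5, 1.0):
    print(z, luminosity_distance(z), angular_diameter_distance(z), comoving_volume_element(z))

The point of the sketch is simply that once D_C(z) is pinned down by the SNIa Hubble diagram, the angular diameter distance and the volume element follow with essentially no freedom.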
Thank you Michael, for your answer. Yes, there is a chance for your global gravitational potential. I have asked Dr. Volodymyr Krasnoholovets, and he agrees that such a global gravitational potential is possible. Best wishes
Thanks, Charles, for your answer. Yes, I think that Popperian falsifiability is somewhat limited. It is a good tool in science, but it is not enough. Perhaps we can include Bayesian acceptance too? Best wishes
Charles, it is clearly wrong that one can derive physical laws by deductive reasoning, except if one starts with nontrivial and logically even stronger physical axioms. To obtain, simply by definition, such nontrivial things as Lorentz invariance is nonsense.
The hypothesis of a finite maximum speed is clearly not sufficient. You also need some sort of relativity or equivalence principle; without this, a Lorentz ether, with a detectable rest frame but nonetheless a maximum speed of information transfer, would be possible.
Then, Hume was of course right to point out the problem of induction. But he did not have a solution. One may not like Popper's solution (I like it), but it at least gives a reasonable scientific method, which those who hope for deduction from some first principle have been unable to deliver.
Of course, Popper is also not without faults, his main fault (if we forget about political theory) was the rejection of Bayesian probability theory.
Charles and Ilja,
You both have some good points, although there are complications regarding what qualifies as falsifiable. For example, the equivalence principle can be tested through various means in both the local and distant universe. Prior to having the capability to run such strict tests, the equivalence principle would have been a hypothesis based upon parsimony or Occam's razor. There were several proposals for testing the equivalence principle, and the principle has held up over time. Thus at one point the theory was falsifiable, but then passed various experiments and observations.
This is rather significant because there is usually a finite number of tests available for an individual hypothesis. In other words, one can start with a theory that is falsifiable, which later ends up passing all of the tests. At that point, would one consider the hypothesis to be a fact and thus immune to falsifiability?
In a perfect scientific world, deciding upon the superior theory would require the following:
1. Various theories and possibilities are examined with respect to current observations.
2. Each theory's performance is compared, with any failed prediction rejecting that particular model.
3. Once the theories are sorted in terms of best performance and agreement with all observations, one must find the simplest model. For example, say that theory 1 initially had serious issues and required the introduction of several non-classical aspects for agreement with observations. Theory 2, however, had already resolved these issues without any non-classical aspects or additional variables. At this stage one must apply Occam's razor and choose the simplest theory as the correct theory.
It is important to note that the scientific method, falsifiability and parsimony are incomplete without one another. We need the first to provide structure to hypotheses and ensure they develop. We need falsifiability to test hypotheses, which is part of the scientific method. We need parsimony when several competing hypotheses are available and become unfalsifiable (have passed all imaginable tests).
@Charles. You wrote: "modern statistical analyses are already bayesian." What I meant before is not just using Bayesian methods as statistical tools, but using Bayesian epistemology to improve Popperian falsifiability. I submit the view that Bayesian epistemology could be a better and more rigorous approach in science, see for instance Stephan Hartmann and Jan Sprenger (2010): http://stephanhartmann.org/HartmannSprenger_BayesEpis.pdf. What is your opinion? Thanks
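To make concrete what I mean by Bayesian epistemology beyond mere statistical tools, here is a toy sketch of how the posterior odds of two hypotheses are updated by data; all the numbers are invented purely for illustration:

# Toy Bayesian comparison of two hypotheses H1 and H2 given data D.
# Every number below is invented for illustration only.

prior_H1 = 0.5          # prior probability of H1
prior_H2 = 0.5          # prior probability of H2
p_D_given_H1 = 0.08     # likelihood of the observed data under H1
p_D_given_H2 = 0.01     # likelihood of the observed data under H2

bayes_factor = p_D_given_H1 / p_D_given_H2           # how strongly D favours H1 over H2
posterior_odds = (prior_H1 / prior_H2) * bayes_factor # posterior odds = prior odds * Bayes factor
posterior_H1 = posterior_odds / (1.0 + posterior_odds)

print("Bayes factor:", bayes_factor)
print("posterior probability of H1:", posterior_H1)

On this view a theory is not refuted outright; it loses posterior probability relative to its rivals, which is perhaps closer to the "partial refutation" I mentioned earlier.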
Charles, no, Hume's principle of uniformity does not contain principles of relativity. These may be special instances of Hume, but a general principle of uniformity is much more general and compatible with completely non-relativistic theories. So, there is no chance of deriving relativity principles from Hume. By the way, some abstract principle of uniformity is a triviality which gives nothing: human ability to formulate theories is so restricted in comparison with the universe that every human theory about the universe has to be extremely uniform.
Moreover, this principle solves nothing. Whether nature is uniform or not does not change the fact that no number of non-observations of black swans can prove that no black swans exist. And it is hopeless: either your principle is strong enough to derive from observations that no black swans exist, in which case you can derive falsities from observations, or it is not, in which case the problem of induction is not solved.
By the way, principles do not have to be falsifiable, only scientific theories have to be.
Victor Christianto,
You say "The problem now is whether falsifiability criterion can be used for cosmology theories."
I think that for a theory to be A SCIENTIFIC THEORY, it has to be based on scientific methods that obey Karl Popper's falsifiability criterion. If a theory is not falsifiable then it is not scientific; such a theory may still have great value as a philosophical approach to nature.
As far as cosmology is concerned, its birth as a scientific theory occurred when the general theory of relativity, a falsifiable one, had three successful experimental confirmations in the 1920s: the deflection of light in the gravitational field of the sun, the precession of the perihelion of Mercury and the gravitational redshift. These were the first experimental tests of the predictions of general relativity, a falsifiable theory.
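For reference, the standard textbook numbers behind two of these tests are

\[
\delta\phi \;=\; \frac{4 G M_\odot}{c^{2} R_\odot} \;\approx\; 1.75''
\quad\text{(light grazing the solar limb)},
\qquad
z_{\mathrm{grav}} \;\approx\; \frac{G M}{c^{2} r} \quad\text{(weak-field gravitational redshift)},
\]

and it is exactly such definite numbers, rather than the mere existence of an effect, that make the predictions falsifiable.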
Michael, first, what is falsifiable remains falsifiable. We can never be certain that even those experiments which have corroborated the theory will, if repeated, give the same results. Moreover, there will be different tests, with different (better) devices.
Then, your suggested method fails miserably because it ignores Popper's criterion of empirical content. So, the theory "God's decisions are unexplainable" wins. Even if falsifiability is meant to be part of your method and was merely left implicit, Popper's criterion is something more powerful: it compares the degree of falsifiability of different theories, and prefers the theory which makes more falsifiable predictions.
Charles, I agree, it makes no sense to discuss with somebody who thinks that Hume has not only posed the problem of induction but really solved it himself.
Antonio, of course cosmology was scientific even before, starting (if one ignores the ancient Greeks and Egyptians) at least with Kepler.
And, then, I would like to argue that unfalsifiable theories may be starting points for the development of falsifiable theories. The same holds for different interpretations: Their differences may be unobservable. But they define different starting points for development of future theories.
The point is that different interpretations may have different weak points. A weak point, as a problem for this interpretation, may be solved by modification of the theory itself, leading to different predictions. In another interpretation, this 'weak point' is not weak at all, thus, nobody tries to solve this problem.
Two examples:
In the de Broglie-Bohm interpretation, near the zeros of the wave function we have a rather harmless infinity in the velocity (like an idealized hurricane). It would require, if taken seriously, a regularization (like the eye of the hurricane). This regularization would lead to a different prediction: zeros of the wave function would not be exact zeros of probability in the regularized theory.
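To make the first example explicit, the standard guidance equation (written here for a single particle) is

\[
\mathbf{v} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right) \;=\; \frac{\nabla S}{m},
\qquad \psi = R\, e^{iS/\hbar},
\]

so the velocity can diverge where the wave function goes to zero unless its gradient vanishes there as well; a regularization which keeps the velocity finite would make the probability at such nodes small but not exactly zero, which is the modified prediction I mean.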
In my ether interpretation, it appears that there should exist a global time-like coordinate. The equations do not guarantee that this remains so (even the modified equations of my theory). So, without the ether interpretation, some closed causal loops could be created. The ether interpretation disagrees: it appears that in this case, before the causal loop appears, the ether density becomes negative. The ether interpretation suggests a clear and simple modification: the equations are valid, as condensed matter equations, only for densities greater than zero. If the density becomes zero, the condensed matter approximation is no longer applicable. One needs another, more fundamental theory (atomic ether?) for these situations.
In the above cases, it is the interpretation which points out at which places one has to look for something critical, which could falsify the known theory: near the zeros of the wave function, and near situations where closed causal loops may be created.
Ilja,
For example, the deflection of light due to a gravitational field was once falsifiable, but is no longer because it became a fact via direct confirmation. The same thing can be said of redshift due to relative motion or gravitational potential, i.e. it is a fact. I would agree that Einstein's field equations are still falsifiable if gravitational waves or event horizons could be ruled out. However, it isn't true that every theory can remain falsifiable, and at some point one will run out of tests for any hypothesis. The examples I provided are relatively simple. In the case of a cosmological-scale gravitational potential or LCDM, there is a substantial number of tests, but still a finite one. Therefore, the correct cosmological model will either pass all tests or be excluded in its current form. After revision and proper fit to all observations, all models can be compared on the grounds of the simplest theory (Occam's razor).
I do not see how my method fails Popper's criterion of empirical content. From "The empirical content of a statement increases with its degree of falsifiability: the more a statement forbids, the more it says about the world of experience", what I take away is that Popper refers to the number of predictions a theory can make. However, once a hypothesis is proven it can no longer be falsified, yet the theory retains its predictions. So what I'm saying is that everything is explainable in a very physical manner, based upon the simplest theory in agreement with all observations. If Popper was instead referring to all theories retaining their ability to be falsified by later observations, then I would say that this is false based upon my two examples.
Ilja Schmelzer.
You say: "Antonio, of course cosmology was scientific even before, starting (if one ignores the old Greeks and Egypts) at least with Kepler."
No. Kepler dealt with the solar system: Kepler's three laws relate to the solar planets, only the solar system. He was a very good observer, and he did a very good, indeed incredible, job with numerical calculations. The observable universe may have about 10^22 star systems similar to our own. Kepler did a very good job indeed (as an astronomer), and his work was taken up by Newton to develop his laws, especially the law of gravitation. We are here talking COSMOLOGY. No scientific theory of the universe was available before the Einstein cosmological equations, which were developed very well by many scientists, like Friedmann.
I think the concept "theory" is vague; when an idea is thoughtfully proposed, what is actually associated with it is the world view of its author. In modern times world views have accumulated a great deal of inertia; they carry considerable mass accumulated from a vast number of individual experiences with the world that have produced philosophy, science and culture. One does not find an idea in modern times paralleling, for instance, "the earth is flat" that is falsifiable. I do not believe that the world view expressed by Einstein exactly parallels aspects of relativity theory that he himself was not so certain of. He did predict that the path of light is bent by mass, but it is not excluded that straight lines do not exist in nature and that all light follows a curved path; if so, it would prove impossible to clearly falsify Einstein's prediction. With the ratio (the relative amount of bending) as an added aspect, only support is potentially acquired; the needed control situation, a total vacuum able to transmit light, may not exist, and light might not conduct itself without gravity. In parallel, negative selection processes in evolution may not exist, with natural emergence accomplished singularly through the avoidance, at all organizing levels, of closing, dying, space. The real test is upon world views, which are unlikely to be falsifiable. The notion of falsifiability does seem to create the sense of a safe shore for footing, but essentially I think the challenge is upon the imagination in the evolution of world views and logically coherent philosophies.
If interested I have linked "Accounting for the World: Symmetry and the creative cognition."
Michael, Popper already pointed out that some statements, including scientific ones, may be unfalsifiable but verifiable, namely existence claims.
Your example, the existence of some (i.e. non-zero) deflection/redshift/time dilation, is, if not exactly an existence statement, in essence something similar: there exists some redshift/deflection/time dilation. But, more essentially, it is something completely different from what GR really predicts, because GR predicts very specific numbers, which can be computed (even if only approximately). So the redshift/deflection/time dilation always has to have the same value as the one predicted by GR. And these particular numerical predictions remain falsifiable.
Predicting in this sense more numbers means more predictive power, more empirical content. Of course, you should first recognize that "this is nonzero" has less empirical content than predicting a particular value for the same thing.
to add: CORRECTION "in parallel, negative selection processes in evolution may not exist, witnessed natural emergence accomplished singularly from the avoidance, at all organizing levels, of closing, dying, space."
should read "natural emergence accomplished singularly from the pursuit, at all levels of organization, of open spaces rather than from the avoidance of closing, dying, space."
@Ilja Re: "But, more essentially, it is something completely different from what GR really predicts - because GR predicts some very special numbers, which can be computed (even if only approximately). So redshift/deflection/time dilation always has to have the same value as the one predicted by GR. And these particular predictions of numbers remain falsifiable."
Is it not possible that the interpretation of redshift violates the uncertainty principle (knowing position and velocity simultaneously)? Rather than complying with empirical data which seem to add up coherently from all angles, is it not possible that the redshift (delta velocity) refers not to the relative position of Earth's perspective to the object in question but to another object that might parallel Earth's perspective in relation to the 'red-shifting' object?
In support of this claim:
1) there is suspiciously nothing in the night sky at a suitable position that gives us a better perspective on our bearing with respect to the rest.
2) in particular, is the Earth/sun part of a twin pair?
This leaves us in an open situation, to either insist on strict uncertainty and the simpler interpretation, or remain perennially blind to the possibility that measurement and theory refer to a duplicate situation. On the positive side, once explored, there may be benefits to observing uncertainty always around the corner rather than straight on. Perhaps, of occurrences, this may be all that might concern us, in its simplest meaning and implications for our inquiry. Einstein expressed in quoted remarks that nature may not 'feel' all the time but is not malicious (does not leave us in total blindness). Perhaps we are being malicious with ourselves?
Marvin, of course GR violates the uncertainty principle, because it is a classical theory, and all classical theories make definite deterministic predictions, thus, violate the uncertainty principle, which is a quantum principle, thus, holds only in quantum theories.
Ilja,
It depends on how Popper felt about these issues (does anyone have references to his work?). If he agreed with me that at some point a theory can be fully verified and thus unfalsifiable (but still acceptable), then I think he has the correct approach. For example, if one were to build a hypothesis from solved existence statements, then there are no auxiliary hypotheses relative to the Duhem-Quine thesis. Essentially what this means is that there is a finite number of tests that can be conducted on a finite set of variables. However, modern cosmology and astrophysics have many auxiliary hypotheses, allowing an unlimited number of variables via revision. Obviously, the more auxiliary hypotheses a theory has, the more unfalsifiable it becomes. This occurs through several mechanisms: (1) the theory becomes more tunable and can therefore fit more scenarios -> false positives; (2) the auxiliary hypotheses cannot be tested on fundamental grounds and are therefore unfalsifiable. Occam's razor naturally supports the theory that requires no (or fewer) auxiliary hypotheses for this reason; i.e. once there are competing theories, the simplest theory is the correct one.
Testing something like the difference between various models of general relativity is more difficult due to the required precision. Each model would be part of the set of possibilities that could fit the existence statement(s) through experimental results. In the case of gravitational potential and redshift, these constrain some aspects of curvature and the space-time metric through existence statements that have been solved. Gravitational waves and event horizons, however, have not been directly confirmed yet, which will provide even better constraints on viable theories of general relativity. If gravitational waves, for example, are not detected in the next couple of years, there will be serious consequences for mainstream general relativity and Einstein's field equations.
to add: There is probably some truth to either view, i.e. that some things are more likely to occur (the classical interpretation/falsifiability) and the uncertainty/quantum-mechanical view in which almost anything goes, a broader range of possibility (it is not impossible that the earth/sun and its proposed twin might change places, which would account for the observed change in magnetic poles). From a combined consideration of the two possibilities suggested for the interpretation of the redshift, it might be perceived that, from the habit of applying coherent-appearing, tested explanations, it is not impossible that we are actively directing ourselves into a firing-range/logistical quagmire, with real ballistics from all angles and proportions. It appears odd and questionable to me that the human situation, currently more and more explosive and predicted to be headed towards (a brief) World War III, does not arise from an external situation existing around a corner, a nightmare being realized as an effect of an eccentric perspective involving impulsive/compulsive fixation on the acquisition of proof; it seems that from one day to the next we find observations that are different from the previous ones. As in all these situations (human aggressive activities, human impulsive/compulsive requirements for proof, cornering leading to confrontation), losses can be predicted to be greater, and survivors fewer, the longer they endure.
@Michael I think Einstein was led to propose the general theory from observation suggesting the existence of a gravitationless state through which light penetrated without bending. He raised questions, expressed some doubt about its validity, i.e. whether it should be reconsidered after observation.
Popper, in addition to his proposed verification method by 'verisimilitude' or truthful appearance, introduced the topic of ideology as it applies to science. I think it was he who referred to scientific laboratories as ideological units seeking to advance their own positions. In philosophical circles it is pretty much agreed that there is a directing association of the political upon scientific endeavors, what questions are pursued and funded. Science is very political in the sense that, by referring consistently away from the self, like politicians, ideas and pursuits necessarily refer back to the self.
Einstein's relativity can be very confusing as it refers away from 'straightness' to curves, and comes close to putting a definition on path as it has both scientific and historical meaning, though Einstein could never capture the human mind with it; when it came to discussion of an expanding universe that obviously also contains the human mind, he dissented, with obstructing factors, as he had no tangible math to express his ideas.
@Arno. Thanks for taking up Cantor's approach. I read elsewhere that the Menger sponge has a fractal dimension of 2.73, which is exactly the same as the CMBTR temperature. Does it have a correlation to a new type of fractal cosmology?
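For the record, the dimension in question works out as (standard Hausdorff dimension of the Menger sponge):

\[
D \;=\; \frac{\log 20}{\log 3} \;\approx\; 2.7268,
\]

to be compared with the measured CMB temperature of about 2.725 K (the numerical closeness is of course between quantities with different units).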
@Marvin. You wrote about science as ideology; that reminds me of a book by Jurgen Habermas that I read 22 years ago: Science and Technology as Ideology. Furthermore, in philosophy there is a branch called the sociology of science, see for instance Massimiano Bucchi's book with that title. Perhaps you would like to add references on this subject? Thanks
Sorry for the typo, I mean CMBR temperature. CMB stands for cosmic microwave background. Best wishes
@Michael CORRECTION I have mixed Popper up with Kuhn (the two are reported to have been in debate over the issue of verifiability, though at a distance, without direct confrontation). It is Kuhn who is the proponent of "verisimilitude" and makes the analogy of science and ideology. I think that within this struggle, 'falsifiability' currently raises a lot of issues as insight has grown on the problems.
It is very common, natural, to construct the universe in a manner that is separate from but accommodates life and human life. Physicists, both Eastern and Western, sometimes view the world as a mathematical construct in which man is a minor and dispensable component. However, to do so requires an extrapolation that exceeds the requirement of direct witness from the first perspective. The theory of relativity has great appeal this way, with the twin paradox in which an association between the effective age of identical twins and speed of travel (near light speed) is conjectured. Philosophically I have two major objections to relativity: 1) light speed cannot be attained, thus the theory exceeds facets inherent in the real world; 2) mass, taken as both concept and measured amount, moves with the light ray, yet a concept cannot have a location, cannot move from one place to another where it, in the case of mass, operates on other mass to produce gravity. Light, I take it from the word itself, means "without weight", yet it is given a slight weight under most but not all conditions. I do not think that, as in Occam's razor, this is the simplest and most irreducible view.
Sociologically, in this frame, God becomes a mathematician and the world he created a formula that holds men unaccountable for their actions, i.e. the laws of the universe place no constrictions upon them; behavior can come from cosmologies in which physical laws can be broken. The laws we have assembled do not look to me suitable to be the real laws of nature. The problem of the "moving concept" (mass) I think captures the whole dilemma. Cosmology has two aspects to consider: 1) everywhere and 2) somewhere, e.g. what is universal or locationless and what has location. Feyerabend, a physicist, made an analogy between political anarchy and the statistical behavior of gas in a volume, in reference to the lack of freedom of expression some physicists are given for their ideas. I think his notions evolved from the current cosmological model employed, in which it can be visualized that it is specific concepts, like gas atoms or molecules, that are statistically distributed in space in an anarchy or free state. In contrast, it is the first perspective that is distributed randomly in space, in which the concepts possessed by it can suffer oppression, not have freedom, depending on the particulars of the space they are contained to. In this sense the real fact exists that human activity can define the characteristics of occupied spaces; if so, implied are absolutes that descriptively capture freedom, lack of oppression, of the concept for all spaces. Cosmologies, as they must evolve from the mind and must be unwitnessable if they also contain it, are inherently unprovable by fact taken from measurement. Einstein's space (better, the space-time that follows) is almost a terrible tyranny from this perspective, in which the creating power of God as a mathematician, mysteriously governing different locations differently, can be likened to humanly conceived political states that govern jurisdictions to exert force on opposition. The tiny mass given to light might prove an irresistible gravity in one setting and not another, depending on the density and positioning of mass. Entailed is the possibility that the space-defining activities of men, of which alternative possibilities exist, lean directly, voluntarily, with a force other than universal, toward a particular that is specific to the containing space of Earth.
Michael, of course Popper does not follow your position; his Logic of Scientific Discovery is very clear about it.
And there was, by the way, a theory which was fully supported by all observational evidence, thus "fully verified" if such a notion makes sense, but turned out to be false. This example is Newtonian theory. Just to clarify how strong the support was: it had not only given Kepler's laws and unified gravitational attraction on Earth's surface with the movement of the planets. There were computations of the influence of other planets on the movement of the planets. And these predictions had overwhelming success in predicting all the perihelion shifts of all the planets. (Except for Mercury, an exception which could have been explained with the dark matter of that time, a planet named "Vulcan", very near to the Sun.)
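For the record, the anomalous part which Newtonian perturbation theory could not account for is the relativistic perihelion advance per orbit,

\[
\Delta\phi \;=\; \frac{6\pi G M_\odot}{c^{2}\, a\,(1 - e^{2})},
\]

which, summed over Mercury's roughly 415 orbits per century, gives about 43'' per century, the residual that Le Verrier's "Vulcan" was invented to explain.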
That it is possible to save theories with auxiliary hypotheses is something Popper has recognized very well - but Popper's empirical content is a countermeasure against this. It was also recognized by Popper that his criterion of empirical content can be used to derive a variant of Ockham's razor. Thus, from Popper's point of view his own criterion is more fundamental, and Ockham's razor only derived.
Then, there are no "various models" of GR. GR is a single theory. (The only "variant" is the cosmological constant, but even here the simplest variant is simply the special value Lambda=0.)
Here is a post that I found on Kuhn vs. Popper ( http://www.wheatandtares.org/2665/kuhn-vs-popper-kuhns-challenge-to-popper/ ). So it appears that Popper believed that verification does not exist, i.e. that only falsification was possible.
"Therefore the real difference between Popper and Kuhn (in Kuhn’s mind anyhow) is that Popper disbelieves in 'verification' and Kuhn points out that 'falsification' only takes place once a crisis is reached and a revolution is underway. At this point we are always comparing theories and explanations. Therefore 'falsification' through observation is identical to 'acceptance through verification.' Popper is thus wrong on one point. Verification does, in a certain sense, exist."
Mainstream cosmology is without a doubt in crisis mode; so under Kuhn, falsifiability can and should be applied. Nonetheless, there are flaws in most of these perspectives. Popper is wrong because at some point a theory can be unfalsifiable or proven (verified). The Duhem-Quine thesis is wrong because one can construct a theory from solved existence statements, i.e. one that has no auxiliary hypotheses. Kuhn is wrong because one cannot over-rely on confirmation (verification) instead of refutation (falsification). I think the only individual who is 100% correct in his view is Ockham. Although I dislike referencing or quoting from wiki, I think this sums it up well.
"For each accepted explanation of a phenomenon, there is always an infinite number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypothesis to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are better testable and falsifiable." http://en.wikipedia.org/wiki/Occam's_razor
In regard to Newtonian mechanics, general relativity is meant to reduce to the Newtonian limit under weak-field conditions (for example, a single particle). However, the old framework lacked time dilation and the additional deflection of geodesics due to mass. Although Newton's theory was simpler, Einstein's field equations fit more observations. However, there are many theories of general relativity. My own framework uses classical physics and the equivalence principle to produce per-particle solutions. For example, I simply make each particle have a 1/r gravitational potential relative to the space-time metric induced by all other particles in existence, i.e. any experiment will be independent of reference frame. This is relatively simple because the 1/r potential for each particle places constraints on the space-time metric and curvature. Oddly enough, event horizons and gravitational waves are no longer present in such a model. Furthermore, the weak-field limit is equivalent to Einstein's field equations. Therefore, GR is not a single theory; it has experimentally confirmed existence statements and various alternatives in terms of field equations and predictions.
About Kuhn vs. Popper: Popper's work has the title "the logic of scientific discovery". It handles the logical problems - in comparison with the available alternatives, namely positivism, in a completely satisfactory way.
Kuhn considers history of science - and how scientists behave. Here, the very problem is a different one. It is in fact not the logical one - is the existing theory true or false. Here, it is usually quite obvious that it is only an approximation, thus, false. The problem is what to do until a better theory has been found - with the straightforward answer that, of course, one continues to use the best available one, even if one knows it is false, at best an approximation.
And the other central question is where to look for a better theory. Here, it is natural that there are times where the available methods for problem solving seem sufficient to solve the existing problems - all one has to do is to apply them appropriately - and other times where it becomes obvious that minor modifications are not sufficient. The whole question is completely out of Popper's considerations. Popper leaves it to the scientists where to look for new theories - what is covered by his method is what to do once several such theories have been proposed.
There is also another question - the problem which Duhem and Quine have put into a nonsensical extreme - that experiments do not falsify statements or theories taken alone, but in fact groups of theories. Only taking many theories together one can really derive observable predictions. Thus, a falsification falsifies the whole group of theories taken together. It does not tell which member of this group is the wrong one. A solvable problem, because different experiments will falsify different groups of theories, thus, if one carefully looks, one can exclude the innocent theories step by step from the suspects. But it follows that the logical possibility - a single experiment falsifies a single theory which predicts, alone, an outcome different from what is observed - is not the typical case.
@Ilja and Arno. I think Kuhn's idea of scientific change looks like self-organized criticality (SOC). What I mean is:
First, there is crisis (critical condition)
Second, there is breakthrough or revolution (change)
Third, there is resolution or consensus (back to normal)
Alas, I have not yet found any reference treating scientific change as SOC. What do you think?
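Just to illustrate what I mean by SOC, here is a toy Bak-Tang-Wiesenfeld sandpile sketch; the mapping to scientific change is of course only an analogy:

import random

# Toy 2D Bak-Tang-Wiesenfeld sandpile: grains are added one at a time; any site
# holding 4 or more grains topples, sending one grain to each neighbour
# (grains falling off the edges are lost). Avalanches of widely varying sizes
# appear without tuning any parameter, the signature of self-organized criticality.

N = 20
grid = [[0] * N for _ in range(N)]

def topple():
    # Relax the grid; return the avalanche size (number of topplings).
    size = 0
    unstable = [(i, j) for i in range(N) for j in range(N) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size

avalanche_sizes = []
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1                      # slow driving: one new grain (a new anomaly?)
    avalanche_sizes.append(topple())     # fast relaxation: a possible avalanche (a revolution?)

print("largest avalanche:", max(avalanche_sizes))
print("avalanches larger than 50 topplings:", sum(s > 50 for s in avalanche_sizes))

Long quiet periods punctuated by occasional large avalanches would then correspond to normal science punctuated by revolutions, in Kuhn's language.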
I see the Popper-Kuhn controversy as the existence of absolutes and unified theory versus a sort of follow-the-nose feeling one's way around. From a sociological perspective, Kuhn centers on the individual and perception, whereas Popper views truth as more objective than subjective, e.g. "I have an idea....?" "But false," says Popper, "the object is square, not round." The world, though, does not present to witness situations from which grand theory is testable for falseness.
A very positive approach seems to be evolving among researchers in the design of experiments, so that grand theory is not the topic of the test. This way data, which may be broader in scope, are acquired for the purpose of practical application; experimental conclusions are not based on extracted analysis burdened with complex induction and definition of non-tangibles, resulting in more thin air than substance in hand. I think something seeming true to perception is better than thin air supported by complex logic. If the only guideline of nature is possibility within a given setting, and it does not involve succeeding progressions or temporal/physical connections, prediction always has a margin of doubt; exceptions are always found.
It is also interesting to note that a trend is occurring in philosophy that sidesteps psychological aspects of cognition and purports that acquired beliefs are at the root despite wording and expression, accounted for almost strictly from the specifics of an individual's life experience; also for visual representation, e.g. "what is seen is what is present". The everywhere-world sought by science is reduced to incoherency, replaced with the world of individuals that is qualified with respect to the senses, "familiarity as a prerequisite for engagement". From this perspective even sensory hallucinations can be argued away as superfluous to the argument. I think this is a very important advance in philosophy.
to add: the discussion of belief "acquired beliefs are ...AT THE ROOT DESPITE WORDING AND EXPRESSION..."
needs amendment: "belief" does not need to be categorized and classified. It is meant for discussion in terms of its intrinsic value alone, is contained to a person's experience and has no other meaning; at the root of all action is space that is open, volume that does not have an edge or place where it stops, hence the world is open to interpretation, belief.
Ilja,
Yes, I suppose if we view the discussion in terms of how things actually progress there are many additional aspects. I take a rather different approach compared to most in regard to forming hypotheses. Instead of taking a single observation and trying to propose a hypothesis or framework that will support it, I take ALL available observations into consideration. Then I begin to brute-force the problem by thinking of every possibility that would support such observations. The best way to do this is by starting with confirmed existence statements, and only when the initial ensemble of classical explanations fails does one go to non-classical frameworks. In regard to where to look for a better theory, this would be it.
I will give an example from my own research in cosmology. I built the framework of my model on the grounds of a cosmological-scale gravitational potential, where the local deflection of geodesics into it provides the illusion of accelerated metric expansion; i.e. distant objects are accelerating back into the potential with respect to local observers (local redshift is actually more complicated than one might initially assume on this basis). Here I have a single hypothesis that is built upon facts, i.e. it requires the experimentally confirmed existence statements of geodesic deflection, gravitational/Doppler redshift and gravitational potential. Therefore, there exist no auxiliary hypotheses relative to the primary one, which is the existence of a global gravitational potential.
From this I can derive redshift versus distance modulus, redshift versus angular diameter distance, redshift versus volume element, redshift versus time-dependence, large-scale B-mode polarization in the cosmic background radiation and several other predictions, including local redshift anisotropy. Of course, for other predictions auxiliary hypotheses are required. Nonetheless, I have proven that one can have a theory with no auxiliary hypotheses that not only makes at least five precise and independent predictions, but in fact substantially outperforms competing models that require a plethora of auxiliary hypotheses. Each prediction can further be applied to multiple tests, which you will find in my recently published article "Evidence of a Global Gravitational Potential".
Arno,
In realistic cases, I would agree that one form of verification can be trumped by a stronger or more precise form later on. However, the 2nd form of verification will not necessarily contradict the first. A community may also choose to ignore observations that do not support their hypothesis, which is why one should actively seek to falsify their own model.
Well, some 20 years back, I burnt my fingers on the theory of falsifiability. I think I have some right to interfere in the discussion. The Popperian philosophy, after all, is 'Good Philosophy, Bad Science'. Progress in science has always followed a nonlinear trajectory. It is evident from the developments in relativity and quantum mechanics in the 20th century.
A good hypothesis requires bold imagination guided by intuition. For example, Einstein's equations for general relativity and the mass-energy equation, Planck's quantum hypothesis, de Broglie's hypothesis, etc.
Nevertheless, I remain an ardent admirer of 'Unended Quest', the autobiography of Karl Raimund Popper.
Pradosh, about 'Good Philosophy, Bad Science':
One should not mingle the logic of science (which was what Popper was about) with good ideas how to do science. Popper's point was how to evaluate theories proposed by scientists. How they find these theories was a question out of the logic of science. Theories are hypotheses, guesses, thus, from this part they don't need any justification. (They are not justified as results of derivations as in empiricism.)
But, of course, for real scientists the question of how to find good new theories is a really important one. And there are, of course, interesting observations about this (sometimes following the actual paradigm and developing science in small steps is a good idea, sometimes this method has exhausted itself and one needs some big steps, a revolution, a paradigm change). That's Kuhn. Lakatos also thought about this, resulting in his ideas about research programmes. So, these "critics" of Popper have argued about something completely different.
Roughly, Popper explained the rules of the game named science (which was an important progress, given that empiricism had ideas which were completely off) to philosophers of science (scientists have known them intuitively). Kuhn and Lakatos thought about what makes players better.
For example: a particular ether theory has been falsified. Popper explains what this means. This particular theory is wrong, and has to be replaced by another one. He remains completely silent about how to find the improved theory, even where to look for it. A modified ether theory? Possibly even a minor modification can save the game. Or does one have to reject the ether idea completely? The first would be inside the ether paradigm, the second a paradigm change. Both are allowed by the rules of science. It is even allowed to ask one's grandmother what she thinks about this.
Even if we make only a minor modification to save the ether, all that is required by Popper's logic is done: the old theory is wrong, falsified; a new, better one has been proposed, not yet falsified. The rough idea about an ether remains, but it is not the rough idea which should be falsifiable, but particular theories.
In this sense, there is not really a controversy Popper-Kuhn.
@Marvin, Pradosh, Ilja, Michael: thank you for all your answers. Best wishes
The idea to translate "Die Logik der Forschung" as "The Logic of Scientific Discovery" was an unfortunate one, because it is not the discovery itself which is considered (that is outside the consideration) but the evaluation of such discoveries.
That you continue personal attacks against Popper without giving any substantiation can only discredit yourself. There are many points where I disagree with Popper, political as well as philosophical, but I would never discredit myself by name-calling without justification.
Charles, nice that you return to argumentation.
Hume's "principle of uniformity" does not save the day, because even with this principle no amount of observation gives rise to "inference or conclusion". So, "science" in the understanding of Hume, with some sort of derivation of theories from observation, which would be invalidated by "any suspicion", is impossible anyway.
"Observation of principles" is nonsense. One can invent principles, and observe that, it seems, they are not violated. That's all.
Sorry, no, there is no internationally agreed way to "establish spacetime coordinates". What is internationally established are units for distance and clock time measurement, definitions of what is 1 m and 1 s. Which is something completely different. And it has nothing to do with Popper or me, because neither Popper nor I have any objection against these definitions.
The remaining part of your answer is completely incomprehensible for me, because I have not made any name-calling or claims about the intellectual incompetence of scientists.
Of course, the thesis that it is logically impossible to derive anything from nothing, or from pure observations (observations are always interpreted, and theory-laden, thus presuppose theories instead of coming from reality alone), is a quite different thing from accusations of "intellectual incompetence of scientists". (Reading such answers drives me in the direction of such conclusions about you, because of your inability to distinguish such clearly different things, but this would be another question.)
I think that Popper's view has been used in cosmology, although there is not a direct reference to him. If we review cosmology from ancient times we will observe a step-by-step movement from anthropocentric ideas to ideas less and less related to humans: from the world that was flat, where we wondered what would happen if we could reach its end, we have arrived at the multiple realizations of a universe, at local universes. This was done over thousands of years...
Charles, your "since relativity follows from the definitions, that is also not falsifiable" is worth memorizing and quoting whenever one wants to discredit you. A really funny quote.
It raises the question if it makes sense to continue a discussion with you or to stop this as hopeless, sorry.
Just for other readers: of course, "falsifiable" does not mean that a theory is fated to be falsified. Popper would have had no problem at all with attempts to criticize his methodology, but, by the way, he never claimed that scientific methodology is an empirical science; thus, the concept of empirical falsification is not applicable to it at all, and one needs the wider concept of critical rationalism to cover methodology too.
There are flaws in falsificationism just as there are flaws in logical positivism. As to the original question, yes Popperian falsifiability can be applied to cosmology depending on the circumstance. Here are some simple cases from my own research.
1. The faint blue galaxy problem (see http://adsabs.harvard.edu/full/1988MNRAS.235..827B ). There are 3-5 times more galaxies at 20 < bj < 21.5 than predicted by LCDM or the big bang theory. Through stacking spectra of the Broadhurst et al. sample, it has been demonstrated that the excess cannot be due to luminosity or color evolution. Further studies by Lotz et al. and others have demonstrated that mergers are insignificant in these regards. Thus there is no explanation for the 0.3z - 0.5z excess in big bang cosmology, i.e. the hypothesis of an expanding space-time metric has been falsified in its current form and needs heavy revision (a schematic sketch of how such counts are predicted is given after this list).
2. Inflation and its predictions are so tunable that they would fit almost any universe with a cosmic background radiation. As Popper would have argued, such a theory is useless (pseudoscience) because it cannot be falsified nor properly differentiated from scientific theories (those that are not highly tunable due to the inclusion of ad-hoc components).
3. The angular scale problem. All discrete objects follow predictions of a static metric, i.e. the 90% light diameter of galaxies, the extent of gas in clusters, the separation between brightest cluster galaxies, the length of double radio lobes, etc. However, to save the big bang theory the community decided to discard all these results and rely on observations that are strongly model dependent (and also affected by a plethora of anomalies/problems), i.e. BAO and the SZ effect.
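As promised in point 1, here is a schematic sketch of how expected counts per redshift bin follow directly from the volume element for a non-evolving population; the comoving density value is a placeholder for illustration, not a measurement:

import numpy as np
from scipy.integrate import quad

C = 299792.458       # km/s
H0 = 70.0            # km/s/Mpc (illustrative)
OM, OL = 0.3, 0.7    # flat LCDM parameters (illustrative)
N_COMOVING = 0.01    # assumed constant comoving galaxy density, Mpc^-3 (placeholder)

def E(z):
    return np.sqrt(OM * (1.0 + z)**3 + OL)

def comoving_distance(z):
    return (C / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def dN_dz_per_sr(z):
    # counts per unit redshift per steradian for a non-evolving population:
    # dN/dz/dOmega = n_comoving * dV_C/dz/dOmega
    dc = comoving_distance(z)
    return N_COMOVING * (C / H0) * dc**2 / E(z)

SR_PER_DEG2 = (np.pi / 180.0)**2
for z_lo, z_hi in [(0.1, 0.3), (0.3, 0.5), (0.5, 0.7)]:
    n_bin = quad(dN_dz_per_sr, z_lo, z_hi)[0] * SR_PER_DEG2
    print(f"{z_lo}-{z_hi}: {n_bin:.0f} galaxies per square degree")

A several-fold excess over this kind of prediction, with evolution and mergers constrained independently, is the sort of discrepancy I am calling a falsification of the expansion hypothesis in its current form.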
Thus if one ignores falsificationism in such a situation, one is left with pseudoscience, i.e. an over-reliance on confirmation rather than serious attempts at refutation. Unfortunately, some think that their own beliefs or theories dictate how nature works rather than the other way around. True science requires falsificationism, logical positivism and Occam's razor.
About http://science.martinsewell.com/falsification.html
1. Falsification is hypothetical too - which is unproblematic once the search for certainty has been given up anyway. Discussed in LdF.
2. To talk about the falsifiability of statements is not meaningful; only theories as a whole can be falsified. Also discussed in LdF. As a consequence, not statements but whole theories are to be classified as falsifiable or not.
Usually even different theories are necessary to derive a particular prediction. If this happens, falsifications have to be evaluated very carefully (using, for example, other experiments to test only a few of the theories, or other measurement devices to avoid measurement errors) before making conclusions which of the involved theories is wrong. Logically unproblematic - even if it is problematic to identify the faulty part, there has to be one, and further localization is possible.
Given that the interpretation of particular experiments is problematic, and falsification open to criticism following Popper himself, particular historical cases of incorrectly interpreted observations are unproblematic for the logic behind this.
Ladyman: Of course, general theories should be falsifiable, existential statements verifiable. See LdF. Scientific principles are used in scientific theories, which have to be falsifiable, not the particular principles (same as above about falsifiability of statements). Scientific methodology is itself not science. But, based on critical rationalism (Poppers later philosophy) it can be criticized too.
O Hear: Repeatability is usually part of the theory itself, once it is a general theory. There may be implicit assumptions (background knowledge), but this poses no problem, because the background knowledge (even if taken as given in a particular experiment) is open to criticism and rejection too. And the quest for absolute certainty has been given up anyway.
There remains a single point: probability. Again, once a falsification can itself be questioned, and the quest is not for absolute certainty, a "falsification with high probability" is unproblematic.
But I agree that Popper made a big error in his considerations about probability by rejecting Bayesianism and developing his own propensity interpretation (which is an improvement on the frequency interpretation, but essentially only a minor correction).
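For readers unfamiliar with the distinction, here is a minimal sketch (my own illustration, not from the thread) contrasting a frequency estimate with a Bayesian update for a simple repeated experiment; the Beta(1,1) prior and the counts are arbitrary assumptions.

```python
# Toy contrast between a frequency estimate and Bayesian updating
# for the probability p of a binary outcome (e.g. detector fires / doesn't).

k, n = 7, 10                  # assumed: 7 "successes" observed in 10 trials

# Frequency interpretation: p is estimated by the observed relative frequency.
p_freq = k / n

# Bayesian updating: start from a uniform Beta(1, 1) prior over p and
# update with the binomial likelihood; the posterior is Beta(1 + k, 1 + n - k).
alpha, beta = 1 + k, 1 + (n - k)
p_bayes_mean = alpha / (alpha + beta)

print(f"frequency estimate      : {p_freq:.3f}")
print(f"Bayesian posterior mean : {p_bayes_mean:.3f}")
# With few trials the two differ noticeably; as n grows they converge,
# which is why the dispute is largely about interpretation, not arithmetic.
```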
The bad news is that most of the "criticism" is simply unprofessional: since the "problem" is already handled, and adequately handled, in the original work (Logik der Forschung), it seems these critics have not even read the original literature. The same goes for the quote "He asserted that if a statement is to be scientific rather than metaphysical it must be falsifiable", which is not an assertion made by Popper but a misinterpretation.
Charles, your "relativity postulate" in this form does not allow one to derive anything interesting. Almost every theory (starting with Newtonian theory) fulfills some variant of it. In almost every theory the laws do not change with time. So, almost every theory fulfills your "principles"; thus one cannot derive anything interesting from them.
There exists the possibility that physical laws change by themselves; we cannot exclude such a scenario. It even makes sense: for example, if we accept big-bang cosmology, the laws at the beginning were different from those now.
Falsification is not hypothetical, it just has limitations. For example, consider galaxies in cosmology; there are only a finite number of variables in terms of evolution and number counts. Initially the faint blue galaxy problem was blamed on evolution, which was later falsified through more precise observations, i.e. stacking spectra of a larger sample. With recent constraints on major galaxy mergers, such galaxies on average undergo a single merger between 0z – 1.5z. At 0.3z – 0.5z, where the problem exists, perhaps 20% of galaxies are undergoing major mergers versus the local 5%; even if every such pair eventually merged, the counts would change by only of order 10-20%, nowhere near the 3x-5x discrepancy.
Furthermore, angular scales and number counts in big bang cosmology are constrained by several factors. These all boil down to SNIa observations (standard candles) via luminosity distance versus redshift. This is only one of three basic properties in cosmological models, i.e. μ versus z (distance modulus), the angular diameter distance-redshift relation and the volume element. Predictions for the latter two are constrained by SNIa and the rate of space-time expansion in big bang cosmology. Thus to save the theory, there would need to be more ad-hoc additions that explain the large discrepancies at both low and high redshift. Some proposals have been a local hole that Earth is centered on (> 0.1z in radius), arbitrary normalization of luminosity functions, "disappearing galaxies", etc. However, these problems will only become worse as telescope resolution increases, and at some point the model will likely be abandoned. If a theory as a whole relies on a specific statement/prediction that is ruled out by multiple observations, would you not say it has been falsified in its current form?
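The point that all three quantities follow from one expansion history can be made concrete with a short sketch. This is a generic flat LCDM illustration (assumed parameters H0 = 70 km/s/Mpc, Ωm = 0.3), not a reproduction of any specific analysis in the thread: the distance modulus, the angular diameter distance and the comoving volume element are all derived from the same comoving-distance integral.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458   # km/s
H0 = 70.0        # km/s/Mpc (assumed)
OM, OL = 0.3, 0.7

def E(z):
    return np.sqrt(OM * (1 + z) ** 3 + OL)

def D_C(z):
    """Comoving distance [Mpc]; the single integral everything else uses."""
    return (C / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def observables(z):
    dc = D_C(z)
    d_lum = (1 + z) * dc                 # luminosity distance
    mu = 5 * np.log10(d_lum) + 25        # distance modulus (d_lum in Mpc)
    d_ang = dc / (1 + z)                 # angular diameter distance
    dV_dz = (C / H0) * dc ** 2 / E(z)    # comoving volume element per steradian
    return mu, d_ang, dV_dz

for z in (0.3, 0.5, 1.0):
    mu, d_ang, dV_dz = observables(z)
    print(f"z={z:3.1f}  mu={mu:6.2f}  D_A={d_ang:7.1f} Mpc  dV/dz={dV_dz:.2e} Mpc^3/sr")
```

Because the latter two quantities follow from the same E(z) that fits the SNIa Hubble diagram, they act as cross-checks rather than free knobs, which is the sense in which these predictions are falsifiable.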
It's not about being on one side or the other, but in the center. For certain cases falsification is not applicable (such as existence statements); other times it should be sought after. Ignoring all the problems with a theory and relying on highly tunable, model-dependent aspects cannot be good either. Thus a key indicator of pseudoscience is over-reliance on confirmation rather than rigorous attempts at refutation. There are many cases, however, where falsification is and has been applicable to cosmology, e.g. think about how dark energy and dark matter came about. Although these two examples may turn out to be non-existent in nature, the original big bang theory was falsified several times (but not to the current extent). Is the big bang theory repairable beyond unfalsifiable, ad hoc additions? I don't think so.
@Michael, Ilja, Charles, Demetrsi: thank you for your answers. Best wishes
“If there is evidence to verify/confirm a theory then why is that theory not capable also of falsification? That seems a non sequitur. And here it is unclear what is meant by ‘true science’ nor how this remark assists”
Because it is possible to make any framework or theory work with enough ad hoc or unfalsifiable (tunable) additions. True science is applying the scientific method together with Occam's razor, i.e. a mix of falsificationism, logical positivism and the simplest theory. For example, theories that have greater falsifiability are generally less tunable, based upon concrete science (proven foundations) and allow many direct experiments (few or no unknowns/uncertainties). Thus these types of theories are more favorable when they provide equivalent or greater theoretical accuracy. Positivism is a central piece of the scientific method, where the task is to verify hypotheses through the empirical sciences, i.e. experimentation. Finally, Occam's razor really boils down to accepting the simplest theory that is in agreement with all or most observations. It is directly correlated with falsifiability due to the previously mentioned qualities, e.g. less tunable -> more falsifiable.
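One standard way to operationalize this trade-off between fit quality and tunability is an information criterion. The sketch below is only a generic illustration (synthetic data, a hypothetical linear "simple model" versus a 5-parameter polynomial); it is not meant to represent any particular cosmological fit, but it shows how extra free parameters are penalized even when they improve the raw fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data drawn from a simple linear law plus noise (illustrative only).
x = np.linspace(0, 10, 40)
sigma = 1.0
y = 2.0 + 0.5 * x + rng.normal(0, sigma, x.size)

def chi2_and_bic(degree):
    """Fit a polynomial of given degree and return chi^2 and BIC."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    chi2 = np.sum((resid / sigma) ** 2)
    k = degree + 1                       # number of free parameters
    bic = chi2 + k * np.log(x.size)      # Gaussian-error form of BIC
    return chi2, bic

for deg in (1, 5):
    chi2, bic = chi2_and_bic(deg)
    print(f"degree {deg}: chi2 = {chi2:6.1f}, parameters = {deg + 1}, BIC = {bic:6.1f}")
# The degree-5 model always has a lower chi^2, but its BIC is typically
# worse: the added flexibility is not justified by the data.
```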
Clifford,
I think you are misinterpreting the point behind what I'm saying; I'll give you a historical example. It was initially hypothesized that the CMB would exist under a big bang scenario. This is an existence statement, which was falsifiable prior to direct observations. As observations got better over time, however, there were many theoretical additions and new anomalies. These additions do not solve all of the problems and in the end make the theory unfalsifiable (see http://www.nature.com/news/big-bang-blunder-bursts-the-multiverse-bubble-1.15346 ). It became unfalsifiable because the theory is fine-tunable in the sense that it reduces to a simple existence statement (i.e. that the CMB exists), which had already been proven. Your second example, however, is a falsified existence statement, which is not the same as a viable theory. On the other hand, you could have a statement such as "Santa will come down this chimney at some point" and spend your entire life waiting for verification.
I think the CMB example demonstrates exactly what I was saying about the correlation between simplicity (Occam's razor) and falsifiability. Initially the theory or hypothesis was simple and based upon (more or less) solid foundations. As time progressed, the theory became more complex, ad hoc and unfalsifiable. Thus falsifiability has its limitations, and one should proceed to Occam's razor to differentiate between ad hoc and genuine theories of equivalent theoretical success. There are always changes in science, and at some point a mainstream theory may become obsolete. In the meantime, "true science" is when there are attempts at falsifying a theory rather than just making more ad hoc hypotheses to fix a plethora of anomalies (or perhaps seeking alternatives).
And there is something between atoms in space; it's called vacuum energy density. Another way to look at it is QFT, i.e. starting with the electromagnetic field. Imagine space itself behaving like a spring-mass system at the Planck scale; then you have something that is physically real and deterministic.
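As a purely illustrative toy for the spring-mass analogy (my own sketch, with arbitrary units and no claim about Planck-scale physics), a 1D chain of masses coupled by springs already supports propagating waves, which is the sense in which a discretized "medium" can behave like a field:

```python
import numpy as np

# 1D chain of N masses coupled by identical springs (arbitrary units).
N, k, m = 200, 1.0, 1.0
dt, steps = 0.05, 400

x = np.zeros(N)            # displacements
v = np.zeros(N)            # velocities
x[N // 2] = 1.0            # initial localized "kick" in the middle

for _ in range(steps):
    # Force on each mass from its two neighbours (fixed ends).
    left = np.roll(x, 1); left[0] = 0.0
    right = np.roll(x, -1); right[-1] = 0.0
    a = (k / m) * (left - 2 * x + right)
    # Simple semi-implicit (symplectic) Euler step.
    v += a * dt
    x += v * dt

# The initial disturbance splits into two wave packets travelling outward,
# i.e. the discrete lattice reproduces the wave propagation of a continuous field.
print("largest-displacement sites:", np.argsort(np.abs(x))[-2:])
```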
@All contributors: thank you so much for all your answers.
@Dr. Charles Francis: it is very interesting that you do not agree with Hawking. Would you mind explaining a bit which of Hawking's arguments you do not agree with? Or is it the whole of the Hawking-Mlodinow book? Thanks
Clifford,
That is simply not true of cosmology or astrophysics, where the concordance model has continuously had ad hoc amendments making most aspects unfalsifiable. Can dark energy, dark matter or inflation be falsified (the three main pillars of big bang cosmology)? No, because even with μ versus z predictions LCDM has the capability to fit any universe due to the abundance of tunable, non-classical parameters. Geology and meteorology, on the other hand, are based upon fundamental laws of physics that can be reproduced and directly verified. When meteorologists are unable to explain a weather pattern, do you think they sit there and blame it on dark energy/matter? No, they produce models based upon concrete science (facts) and verify their hypotheses. What I mean by facts is that we have a solid understanding of particle physics at these energy scales, i.e. reproducible by direct experimentation.
There are enough observations in cosmology to begin drastically limiting the number of viable models; the problem is that the mainstream is not attempting to do so because it rules out the current big bang framework. So it isn't about being able to falsify the big bang theory as a whole, but instead the lack of such attempts by its community. If you're smart enough, you will find a way to falsify a complex theory based upon several observations. Is it always possible to falsify a hypothesis or theory? No, but it is the majority of the time IF they are scientific.
This is precisely why I went from working on a UFT/TOE to cosmology, because there is a lot more available in terms of observational data. For example, galaxies, clusters and other discrete objects can be constrained quite well. We know that SNIa offer standard candles, and observations also indicate that fundamental constants must be relatively constant over cosmological time-scales. We know that the number of galaxies can only vary through collisions or luminosity evolution. What we don't know is the exact nature of the CMB or, for example, any of the ad hoc additions to big bang cosmology. We don't know the exact origin of redshift, i.e. is it metric expansion or perhaps gravitational/Doppler redshift?
If you reject falsification and logical positivism, then what exactly is left?
The modern concordance model is called Lambda cold dark matter, and the dynamics of the space-time metric are dependent upon these factors. Without them the theory wouldn't work at all. Galaxies and clusters, for example, wouldn't even be able to form without dark matter in big bang simulations. Thus I fail to see how your argument is valid post-1980.
There are better explanations for redshift beyond metric expansion, e.g. a global gravitational potential (see http://adsabs.harvard.edu/abs/2014AstRv...9c...4P ). In fact, that article goes to show just how falsifiable the big bang theory is when all observations are taken into consideration (rigorous attempt at refutation).
Clifford,
It is not that unique, but perhaps less common in the last few decades. I aim to avoid pseudoscience, although there’s nothing wrong with putting forth a hypothesis as long as it’s not being portrayed as fact prior to direct verification (agreement with all observables).
In regards to particle physics, the central goal is to determine the outcome of all particle experiments and formulate them into mathematical equations. At these energy scales, the physics has been verified to a very high degree of precision. Whether the standard model can achieve a "theory of everything" is irrelevant, as it only needs to reproduce various measurable quantities to serve its purpose. Thus with the fully understood particle physics behind weather and climate, the only limitations are grid resolution, initial conditions and time stepping in numerical models. However, the underlying assumptions in these models are experimentally verified, i.e. pressure, temperature, gas laws, quantum mechanics, etc. Of course there is a limit to how far ahead something can accurately be predicted, but this constantly increases with better hardware (resolution) and better implementation of physics.
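To illustrate how grid resolution and time stepping are coupled in such numerical models, here is a minimal sketch of 1D linear advection with a first-order upwind scheme (illustrative values only; real weather models are far more elaborate). The time step must respect the CFL condition u·dt/dx ≤ 1, so refining the grid forces a smaller time step.

```python
import numpy as np

# 1D linear advection: dq/dt + u * dq/dx = 0, first-order upwind scheme.
u = 10.0           # advection speed [m/s] (assumed)
L = 1000.0         # domain length [m]
nx = 200           # grid resolution
dx = L / nx
cfl = 0.8          # Courant number; must be <= 1 for stability
dt = cfl * dx / u  # time step is tied to the grid spacing

# Initial condition: a Gaussian bump (stands in for some advected quantity).
x = np.arange(nx) * dx
q = np.exp(-((x - 200.0) / 50.0) ** 2)

t_end = 50.0
steps = int(t_end / dt)
for _ in range(steps):
    # Upwind difference (u > 0), periodic boundary for simplicity.
    q = q - cfl * (q - np.roll(q, 1))

# The bump should have moved by roughly u * t_end = 500 m (plus some
# numerical diffusion from the first-order scheme).
print("peak now near x =", x[np.argmax(q)], "m")
```

Halving dx while keeping the Courant number fixed halves dt as well, which is the resolution/time-stepping trade-off mentioned above.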
Plate tectonics is more complicated, but we fully understand the physics at these energy scales. It also suffers from poor spatial resolution of input data, but there are many methods to extract information that are becoming increasingly advanced. If further dialogue is unnecessary, then why do you continue to defend your view?