See Book Mathematics Without Accidents
In software development, everything is digital. We argue that software development and quantum mechanics are as difficult as they are because people have developed the "right" intuitions for mental processes to be consistent with classical (Newtonian) mechanics, using "continuity" because it "sounds" familiar -- but it is not. Nothing is continuous in Nature or in the mind.
The Curry-Howard correspondence indicates that there is also no continuity in analysis, in mathematics.
The book will be ready in 6-12 months; meanwhile, you can read it free with Kindle Unlimited at Amazon, as Part 1: How and Why, at https://www.amazon.com/dp/B07Z93MX67/
---------------------------------------------
NOTE: As a reminder, it is easy to deal with fantasy and nonsense posters in this thread:
1. They talk against known science, such as quantum mechanics and special relativity.
2. They say they can calculate what no one can, in mathematics and physics.
3. They add one or more of their own links and call it referencing, trying to get clicks while hiding self-promotion or fringe-group advertising and false news, and they repeatedly copy their own links under different titles, questions, etc.
4. When asked to stay on topic, they argue, instead of stopping.
5. When asked by the authors themselves to correct their wrong citations, they do not, and continue to infringe copyright.
6. One also recognizes them by their talking about other posters, or the author, rather than about the subject (ad hominem attack). Then they redefine terms in an effort to control the discussion. That is not a recommended practice in science. So, they are already off-topic.
If this happens, you can treat these messages as what they are, personal ads, and skip them, reducing the noise from known fantasy or nonsense posters.
To remove infinitesimals and their mathematical consequences is equivalent to removing the whole of modern mathematics.
My answer is NO, infinitesimals cannot be eliminated from mathematics.
I have been aware of this situation. I never got far enough to get rid of infinitesimals, though... However, in practical calculations, they really do disappear. Long live rational numbers!
Infinitesimals are necessary for the limit process. They are a simpler way to write ratios of limits, such as the derivative (e.g., instantaneous velocity), without the limit symbols.
However, we always keep in mind that the limiting process is the primitive notion and infinitesimals are a helpful notation.
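The point that the limiting process is primitive can be made concrete numerically. A minimal sketch (the position law s(t) = t² and the numbers are my own illustrative choices) of the difference quotient tending to the instantaneous velocity:

```python
# A small numerical sketch: the derivative (instantaneous velocity) is the
# limit of difference quotients; the "infinitesimal" dx is shorthand for
# letting h shrink. The position law s(t) = t^2 is illustrative only.

def s(t):
    return t * t              # position at time t; the exact derivative is 2t

t = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    avg_velocity = (s(t + h) - s(t)) / h
    print(h, avg_velocity)    # tends to the instantaneous velocity 2t = 6
```

Here the quotient equals 6 + h exactly, so each tenfold shrinking of h moves it tenfold closer to 6; no completed "infinitesimal" is ever used, only the limiting process.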
Essentially I agree: math is different from physics. But in physics not only rational numbers exist: what about \pi, infinitesimally close to its rational approximations?
"Infinitesimals even do not exist" - RIGHT! The same is valid wrt ALL math notions. But - mathematics exists, with its infinitezimals. Also the WHOLE mathematics is applicable jointly with its infinitezimals. Obviously, it is hard to understand everything in math (general books like Bourbaki were written for years by many!), but purposely omitting ANY part as unnecessary/supefluos/harming like it is proposed here is a silly idea oriented toward increasing a production of tellers stories about nature without any similarity to the truth. E.g. how E.G. will explain simple evolution equation for radiation decay, predator-prey models, simplest Markov processes or more comple like wave equations for elastic deformations, EM waves etc. etc. rigorous description of ANY physical phenomenon requires perfect understanding the calculus of infinitesimals. And it is not due to the suggested primacy of mathematics but to the primacy of required consistency of our mutual communication: math is the basic language for communicating information about natural phenomena and any trial toward neglecting and depreciating mathematics, especially by those who do NOT know the mathematics is an activity against correct developement of science.
This kind of argument (against infinitesimals in physics) overestimates the linear mapping, which is the essence of all "quantum" approaches. But not all systems can be approximated by linear mappings, so we cannot abandon the limit process.
No. However, to my knowledge, Nonstandard Analysis introduces the notion of a nonzero infinitesimal.
You use a hypercomplex number e such that ee = 0 instead.
Then say
f(x + ye) = f(x) + ey df/dx
exactly.
There are matrix representations of it.
There is no difference in concept from using i.
It is an ideal in the dual algebra.
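This e with ee = 0 is the unit of the dual numbers, and the exactness claimed in the formula above can be checked directly. A minimal sketch (the class name and function are my own illustrative choices, not from any library) of how the e-part carries the derivative with no limit taken:

```python
# Dual numbers x + y*e with e*e = 0: arithmetic on the e-part computes exact
# derivatives (forward-mode automatic differentiation). Illustrative sketch.

class Dual:
    """Number re + eps*e, where e*e = 0; eps carries the derivative."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.eps + o.eps)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b e)(c + d e) = ac + (ad + bc) e, since e*e = 0
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x      # f'(x) = 6x + 2

d = f(Dual(4.0, 1.0))             # evaluate at x = 4 with unit e-part
print(d.re, d.eps)                # value f(4) = 56, derivative f'(4) = 26
```

The matrix representation mentioned above is e = [[0, 1], [0, 0]], which indeed squares to the zero matrix; the multiples of e form an ideal because anything times e stays a multiple of e.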
Infinitesimals can be eliminated from calculus via the number form notion, using infinitesimals as if they were numbers, and they actually are once the involved "observer" changes the reference level of the level-duo wherein he is involved.
In short, any infinitesimal that can be transformed into a number form (QuasiNull) can be thought of as a number through the said change (cf. Number form theory on RG).
Dear Jean, writing
>>Infinitesimals can be eliminated from calculus via the number form notion using infinitesimals as if they were numbers
Dear Joachim,
No. I actually mean a sequence of numbers converging toward 0 in accordance with an arithmetical law that relates the asymptotic limit 0 to a further, now non-zero limit, while making sure that the ratio of successive iteration steps of the said law yields infinity in the limit: that way, we make sure that the asymptotic limits do not (!) belong to the same reference level! That way, a further "level" is generated through the same sequence of "infinitesimals"; it is titled Ziellevel as a component of the same level-duo.
Thus, by "infinitesimals" building a number form, I understand a sequence of numbers tending to 0 at the reference level while generating, at the other end of the same sequence, a 2nd limit as an element of the Ziellevel.
That way, the level notion replaces "infinitesimals" in a twice-converging sequence by respective numbers. This scheme is proper to any number form.
Allow me to reformulate my 2nd sentence from above:
In short, "infinitesimals" that obey the law of a number form can be held as numbers at their respective limits within a level-duo.
I recommend my paper_1 on Number form© theory on RG.
All the Best. Jean
Number form theory defines enclosure-overcoming (!) mathematics while differentiating among various "infinite levels": infinity can that way be sliced into multiple levels that are hidden (!) from any "observer" at a reference level.
Moreover, these "levels" are required to set up the "defining act" as postulated in Structure wave theory, i.e., a structural theory of Consciousness, partially available on RG as well...
From the view of Number form theory, "infinitesimals" are replaced by various Ziellevels that are differentiated through matching exponents that appear as ordered (!) prime densities, described in detail in the papers of NFT.
Quantum jumps are enclosure-confined.
Enclosure-overcoming mathematics, like number forms, can explain quantum jumps from the discrete iteration principle that generates number forms.
Nevertheless, we may think of "infinitesimals" as the converging sequence defining infinitesimal number forms.
Cauchy didn't know number forms: his mathematics is fully enclosure-confined (see the enclosure issue expressed in a coming theorem in the context of Number form theory).
Bi-polar infinitesimal number forms (e.g., the basic QuasiNull) may be thought of as "infinitesimals": their construction occurs in a tangible mathematical way (cf. Introduction to Number form theory, paper_1).
The issue linked to "infinitesimals" is solved through hypothetical "observers" placed at a reference- or Ziellevel, while the number form itself is not (!) linked to either level.
Infinitesimals belong to mathematics and they are in their place there. From this point of view, they are even less 'archaic' than equilateral triangles or circles. Physics uses mathematics as a tool to describe the world and everyday observations, and not always the best tool. I agree with Ed that what we see is often the result of some kind of 'jump'. Radioactive decay is an excellent example, yet we describe the number (integer number!) of still-intact nuclei by a smooth exponential function. We simply lack good mathematical tools to describe all those jumps. The Heaviside step function alone, more than 100 years old, doesn't help much. Walsh functions, together with the Fast Walsh Transform, remain largely unknown to physicists and thus unused. Should we then abandon 'smooth' mathematics for 'jumps' in elementary courses? Well, this is not an either-or situation; we need both. However, the 'jumps' still remain mostly unexplored by mathematicians.
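The Fast Walsh Transform mentioned above fits in a few lines. A sketch (my own illustrative code) of the standard butterfly formulation, restricted to lengths that are powers of 2, showing how naturally Walsh functions handle a 'jump':

```python
# Fast Walsh-Hadamard Transform (butterfly form): n log n additions and
# subtractions, no multiplications. Input length must be a power of 2.

def fwht(a):
    """Walsh-Hadamard transform of a sequence; returns a new list."""
    a = list(a)
    h = 1
    n = len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# A step function (a pure "jump") has a one-term Walsh spectrum,
# whereas its Fourier spectrum would spread over many harmonics:
step = [1, 1, 1, 1, -1, -1, -1, -1]
print(fwht(step))   # [0, 0, 0, 0, 8, 0, 0, 0] -- a single nonzero coefficient
```

This is exactly the sense in which Walsh analysis suits jumps: the step above is itself a Walsh function, so its transform is a single spike.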
I do not understand the point of this discussion: the phrase "removing infinitesimals" seems here to be interpreted as avoiding the use of irrational numbers, and I am unclear as to why one would want to. Much of this seems to be a confusion about the notion of "rate" in predicting what will actually happen in the real world. Even with rationals, saying that "the density of cars per mile is 37.1" does not mean that each mile of road contains 37 cars plus an extra wheel. This is intended to be more accurate than just counting 37 cars for each mile, but it is still to be understood as an approximation. The value of the "real number" system lies in the ease with which one can work out the relation of various such approximations (such as what would be a corresponding description of the density of cars per km, etc.). Our interpretations depend on this idea of approximations at various scales, with Mathematics (e.g., Calculus or the theory of Differential Equations) showing how to compute the relations between these.
My point was that the use of irrational numbers is a matter of convenience, important for a variety of physical computations, so we would not want to eliminate this use. The example of cars per mile was only to note that we can count cars, but must measure lengths -- there is no physical magic in the particular distance we have chosen in defining a "mile" as the unit of length, and one needs a unit of length to make computations of "density of cars" make sense. This is an inevitable consequence of asking for the convenience of scaling with measured values.
Consider again the measurement of "length" in the context of the Pythagorean Theorem. Our measurement reduces to counting concatenated units, so saying the length of a side of a square is 10 cm means that we can lay off a succession of centimeter lengths and count up to 10. But now: what about the diagonal? When we try the same procedure we count up to 14 and still have some length left over -- yet 15 will not fit. We can switch to shorter units, and our procedure will be exact if these are fractional parts of the cm units, but using mm units still does not lead to an exact match: at any level of desired accuracy there will be some error.
Does the diagonal have a "length" at all? "14" cm is wrong, although a "good enough" approximation for many purposes (depending on why one wants to know and what one will do with the answer); "14.14" cm (again counting, after subdividing the 1 cm units into hundredths) is a better approximation, but still not quite right and still might not be good enough for the intended purpose. Similarly, saying the "correct physical answer" earlier is "about 37 cars (per mile)" involves the unwarranted assumption that this level of precision is (known to be) appropriate for whatever application we might later have in mind. The use of the value 10*sqrt(2) cm enables us to separate in our mind the constraints involved in the relation between our measurement process and the precision requirements.
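The counting procedure described above can be sketched numerically (my own illustrative code; floating point stands in for the exact arithmetic, but the persistent leftover is a genuine property of the irrational length):

```python
# Measuring the diagonal of a 10 cm square (10*sqrt(2) cm, about 14.142 cm)
# by laying off ever-finer subdivided units. At every scale some length is
# left over; the count is never exact, as the text argues.

import math

diagonal = 10 * math.sqrt(2)      # the "true" length in cm
unit = 1.0                        # start with 1 cm units
for level in range(5):
    count = int(diagonal // unit)            # how many whole units fit
    leftover = diagonal - count * unit       # what remains uncounted
    print(f"unit = {unit} cm: {count} fit, {leftover:.10f} cm left over")
    unit /= 10                    # subdivide into tenths and try again
```

The first line of output reproduces the "count up to 14 with some length left over" step; each subdivision shrinks the leftover but never removes it.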
Clearly no positive real number can be the ideal infinitesimal.
That is why I suggest the hypercomplex e with ee = 0.
This is a substitute for an infinitesimal, not an elimination of the concept.
It cuts off the Taylor expansion.
No, because of the importance and useful applications of infinitesimals, such as in the law of continuity (what succeeds for the finite numbers succeeds also for the infinite numbers, and vice versa) and the transcendental law of homogeneity, which specifies procedures for replacing expressions involving inassignable quantities by expressions involving only assignable ones.
Number form theory resolves that issue (see the real number form QuasiNull) while adding further reference levels that are mutually hidden from one another:
The point is that measuring structures to a precision better than the Planck length, without a lower bound yet staying above zero, allocates metrics that no longer characterize the reference level but yield an unstable metric type relating a reference level to a fitting Ziellevel, both forming a level-duo wherein the inflection point of the involved number form curve separates the two.
“Intuitionism, school of mathematical thought introduced by the 20th-century Dutch mathematician L.E.J. Brouwer that contends the primary objects of mathematical discourse are mental constructions governed by self-evident laws. Intuitionists have challenged many of the oldest principles of mathematics as being nonconstructive and hence mathematically meaningless.”
Number form theory helps to define explicitly (!) the aforesaid “mental constructions” thru the relief of structure waves by a defining observer, actually a structural oscillator.
Thus, number form concepts cannot be reduced to “Intuitionism”. Sorry.
Moreover, Number form theory doesn’t reject any of the established mathematical principles.
Jean
It just sounds like you are trying to sell us a very small positive real number that can be lowered to any desirable extent. That is the concept we always had.
If not, explain better.
ED
I get that Brouwer rejects the law of the excluded middle, but quantum mechanics has nothing to say about this.
In any case, maybe QM suggests we never have to measure exactly, in which case you do not need the infinitesimal in practical matters.
I thought this was intuitionism.
I agree we cannot do without some form of infinitesimal in mathematics.
Finite jumps versus infinitesimals are the easy part!
I understand the reluctance of instructors to include a rigorous treatment of infinitesimals in a Calculus course. Most that I know do use what is effectively the intuition of infinitesimals because it works: most students will be applying these ideas in situations with enough continuity that this causes no difficulty; the situations where one must really be careful don't yet come up for them, as they might if, for example, they were trying to model (whether analytically or computationally) the dynamic behavior of a foam, or using nonsmooth analysis to work with sliding modes in control.
No reasonable physicist would reject the use of "point masses" for approximation because "they don't exist," or would insist that continuum mechanics could only be understood at the atomic scale. Quantum mechanics is as difficult as it is because we have not developed the right intuitions for our mental processes to be consistent with reality, so we fall back on classical (Newtonian) mechanics because it remains familiar.
It is difficult to exclude infinitesimals from Calculus. Why should we do it? Both Leibniz and Newton were using them well before the Cantor revolution in the second half of the XIX century. The purpose of Calculus has been manifold: on one hand, it encompasses the ideal entities; on the other hand, it helps to explain physical paradigms in no-nonsense terms.
The physical world is a discrete one, including time (whatever time means). The realm of Calculus is continuous: as continuous as one can imagine. Rejecting the infinitesimals is denying the great intellectual effort of people like Dedekind, Cantor, and a couple more who laid the ground for a rigorous understanding of Calculus. To put it simply: imagine that you have a number line. It is as smooth as it gets; you may use an electron microscope and, under such great magnification, it is still very smooth; no gaps. Why? Because we have the Completeness axiom at our disposal. Why do we need it? For example, to make sure that two overlapping circles intersect. Does anything like this happen in the physical world? No, and it is OK. Various tools/methods serve various purposes.
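The overlapping-circles remark can be made concrete with a small sketch (the centers and radii are my own illustrative choice): two overlapping unit circles meet at a point with an irrational coordinate, which is exactly the point that completeness guarantees exists.

```python
# Two unit circles centered at (0,0) and (1,0) overlap; their intersection
# point has x = 1/2 and y = sqrt(3)/2, an irrational number. On the rational
# line alone this "point" would be missing; completeness supplies it.

import math

# x^2 + y^2 = 1 and (x-1)^2 + y^2 = 1; subtracting gives -2x + 1 = 0
x = 0.5
y = math.sqrt(1 - x * x)      # sqrt(3)/2 = 0.8660..., irrational

# both circle equations are satisfied (up to rounding):
print(x * x + y * y, (x - 1) ** 2 + y * y)
```

Working over the rationals only, both equations have no common solution, so the circles would "pass through" each other without meeting.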
The physicists have great ideas, but frequently it is difficult to find a precise mathematical way to justify them. Case in point: distribution theory, which was originally created by Sobolev, Dieudonné, and L. Schwartz, initially had one purpose: to show how Dirac's delta and the Heaviside function operate in the world of smooth functions.
Definitely, on a bigger scale, one cannot model the physical world using Euclidean geometry, but using hyperbolic geometry, one can (somewhat). Such a geometry even has its own version of the Pythagorean theorem! :)
There are a lot of philosophical ramifications related to infinitesimals, which I will not delve into. An old rule, Ockham's Razor, says that one should not multiply entities beyond necessity.
Ed Gerck. In your 12-point manifesto, there are certain gaps and/or statements that lack precision, e.g., regarding Cauchy (part 5). Indeed, he was a highly prolific mathematician and was the first to propose a rigorous definition of a limit, but he was not alone in the creation of analysis. One needs to bear in mind, e.g., Bernard Bolzano, who several dozen years before K. Weierstrass constructed a nowhere differentiable function on an interval (which happens to be also non-monotone on any sub-interval of its domain). To work with such precise artifacts one really needs infinitesimals. What would you do with complex analysis or differential equations without infinitesimals? How about Nonstandard Analysis?
Luckily, mathematics, or rather its particular theories, are governed by axiomatic systems (consistent, albeit not always complete, logical systems), so there is almost no room for speculation. Infinitesimals form just one of the indispensable tools.
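As a sketch of the kind of "precise artifact" mentioned above, here is Weierstrass's own series (used instead of Bolzano's original construction, which is harder to write down; parameters a, b are chosen to satisfy the classical condition ab > 1 + 3π/2):

```python
# Partial sums of the Weierstrass function W(x) = sum a^n * cos(b^n * pi * x):
# continuous everywhere, differentiable nowhere when 0 < a < 1, b odd,
# and ab > 1 + 3*pi/2 (here a = 0.5, b = 13, so ab = 6.5).

import math

def weierstrass(x, a=0.5, b=13, terms=50):
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Difference quotients at x = 0 refuse to settle down as h shrinks,
# unlike for any differentiable function:
for h in (1e-1, 1e-3, 1e-5):
    print(h, (weierstrass(h) - weierstrass(0)) / h)
```

At x = 0 every cosine equals 1, so the series sums (essentially) to 2; but the printed difference quotients oscillate instead of converging, which is the numerical face of nowhere-differentiability.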
Actually, infinitesimals CAN be constructed -- e.g., look up "non-standard analysis" in Wikipedia -- but that is not the point here. I think the point is whether "ideal elements" (like point masses or rigid bodies or ... or infinitesimals) which both simplify our computations and appeal to our intuition with familiar settings should be "eliminated". I would certainly not claim that these ideal elements are "needed" (whatever that means) but what would be the advantage?
Not at all, Juan: any (infinitesimal) number form owns a pair of asymptotic limits that are numbers when seen from the reference- or Ziellevel in the level-duo, while it is neither a number nor an asymptotic limit when seen from outside the level-duo: then and only then is that construct legitimately titled a number form (!), which must possibly be normalized.
I recommend my first paper on Number form theory (introduction) on RG.
To the "dual nature of light" corresponds a dual nature regarding number forms:
They can be thought of as a two-fold asymptotic continuous function or, according to the mathematical context, as an infinite sequence of discrete terms.
They fulfill all the required mathematical conditions to span the said level-duo while overcoming previous enclosure-confined mathematics.
@Ed Gerck Because the original question has been somewhat diverted by the noise of the usual AM (anti-mathematics) chapel, I feel obliged to clarify my position. My first answer had been a simple « no ». But to remove any ambiguity, let me stress that I take the term « infinitesimal » in its naive sense. To cite Wikipedia: Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals.
1). Thus the conceptual foundations of infinitesimal calculus include the notions of functions, limits, infinite sequences, infinite series and continuity. In no way did I intend to go into such discussions as the Newton-Leibniz controversy about the genuine meaning of the infinitesimal symbols dx and dy separately (differentials), and dy/dx as a quotient (derivative). With the utter self-confidence of a non-mathematician arbitrating in mathematics, E.G. pronounces his sentence: « Leibniz was right that dy/dx exists and can be used as a fraction, opposite to Newton ». This is only one of the many fibs which he delivers with his admirable aplomb. It took me a certain time to collect them. Here is a partial anthology. In mathematics: Calculus itself cannot be based anymore on mathematics and its artificial, human-made, volitional "rules" - The phlogiston (a substance supposed by 18th-century chemists to exist in all combustible bodies, and to be released in combustion) can exist [in mathematics] - The reality that we can see and measure is limited to finite rationals - Irrationals, like pi, cannot be measured in physics, hence they have no proof of reality existence - Imagination itself is not infinite, therefore not continuous - Infinitesimals are not an approximation, they don't even exist in physics - There are NO infinitesimals in Nature, hence the notion of limits has to be different too, and the notion of integral and differential calculus - Quantum rules are measured in Nature, and are modeled in mathematics as finite rationals - The reality that we can see and measure is limited to finite rationals - Nothing is continuous in Nature - The quantum is the negation itself of limits with infinitesimals - We do not live in an Euclidean plane, the Pythagoras theorem is not valid, the hipothenuse is NEVER an irrational - Many mathematical results of Cauchy, like the very existence of "infinitesimals" and "complex numbers," are considered non-physical today -
Imagination itself is not infinite, therefore not continuous - If infinitesimals cannot be constructed ... then they are not needed - Etc.
In physics : Nature is not continuous, but quantal. Nature jumps. By the introduction of limits, people still tried to capture the illusion that Nature is continuous - It is time that mathematics accept that Nature is not continuous, nothing can be thought coherently as infinitesimals - The quantum is the negation itself of limits with infinitesimals - The "law of the excluded middle" is not satisfied in quantum mechanics - Below even quarks and leptons we have quantum waves. As we go smaller and smaller, we find no particles and no classical waves -- just quantum waves - Any competent physicist today (…) cannot use Newton's equations without risking a serious error, even at low speeds - Etc.
Note that in this list of papal bulls, most of the time the required scientific definitions (Nature, measure, reality…) are missing. It would be impossible here to refute point by point all these vagarial ukases. The main reason is lack of time and space (although I wonder how one could exceed the sum of all E.G.'s contributions!) Allow me to pick up one and only one item in each heading and give it a thorough answer. In the mathematical column, just take the first E.G. verdict on dy/dx which I pointed out. In any textbook on differential calculus, the general definition (and this generality is required in view of further mathematical development or physical application) of the derivative of a function goes as follows: start from an application f from U to V, where U and V are open subsets of, respectively, two normed vector spaces E and F; the function f is called derivable at a point a iff there is a continuous linear application f'(a) from E to F such that the quotient of the norms of f(x) - f(a) - f'(a).(x - a) and of (x - a) tends to 0 when the norm of (x - a) tends to 0. If f is derivable at any point in U, this defines a derived function f' from U to L(E,F), where L(E,F) is the vector space of continuous linear functions from E to F. This may seem complicated, but it's actually the only natural way to define the derivative of a multivariable function in view of further mathematical development or physical application. If (and only if) E and F both have dimension 1, this is coherent with Leibniz's interpretation of dy/dx, but it's a coincidence. Because, whereas we can define higher derivatives, d^n y/dx^n can no longer have any sense other than notational. For instance, when it exists, f''(a) can be viewed canonically as a continuous bilinear application from E x E to F.
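The verbal definition above can be restated in symbols (a standard rendering of the Fréchet derivative, matching the text term by term):

```latex
f'(a)\in L(E,F),\qquad
\lim_{x\to a}\;
\frac{\bigl\|\,f(x)-f(a)-f'(a)\,(x-a)\,\bigr\|_{F}}{\|x-a\|_{E}} \;=\; 0,
\qquad f'\colon U \to L(E,F).
```

When E = F = R, the linear map f'(a) is multiplication by a single number, and only then does the quotient notation dy/dx coincide with it.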
«There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy ».
Many remarks are in order: (i) The normed vector spaces E and F are defined over R or C; (ii) If you don't know what open subsets are, just think of E and F themselves. But note that the notion of open sets allows one to deal with continuity in a much more concise way than the epsilon-delta of Cauchy's language; (iii) If E.G. refutes the existence of real/complex numbers, or the notion of limit, he's stuck right at the start. Obstinately clinging to Q, he could still define the norm of an element of Q as being its absolute value and go on. But again, there are more things, Horatio… To be of some usefulness, an absolute value must be compatible with the operations in Q (in an obvious sense). It happens that under those restrictions, there are only two types of absolute values: the one that you all know, called archimedean; and the ones that perhaps you don't know, called p-adic, where p is any prime. Mathematically, R is constructed from Q by an operation called completion « à la Cauchy ». The same completion process, replacing archimedean by p-adic, gives the field Qp of p-adic numbers. The point is that the open subsets of R and Qp (and also of distinct Qp's) are utterly different, even antagonistic. In short, working in Q with archimedean approximations to R is totally arbitrary. But let us accept that choice. Then the question is raised: how to represent the rational numbers? Using a numeration system needs choosing a basis. Granted such a choice, it still remains to decide on a degree of approximation (*), which cannot be fixed but depends on the operations you intend to perform. Admit that it's completely silly to go through such a chore when you need, e.g., to deal with (sqrt 2)^2. Regarding Ockham's razor, you can come back! And here we speak only of numerical results. E.G. himself acknowledges the convenience of working theoretically with limits and complex numbers, but he does not recognize their existence.
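The p-adic absolute value just mentioned can be sketched concretely (my own illustrative code, assuming the usual normalization |p^k u|_p = p^(-k) for u prime to p), together with the "antagonistic" behavior it has compared to the archimedean one:

```python
# The p-adic absolute value on the rationals: divisibility by p makes a
# number SMALL. It satisfies the ultrametric inequality
#   |x + y|_p <= max(|x|_p, |y|_p),
# the opposite temperament to the archimedean triangle inequality.

from fractions import Fraction

def padic_abs(x, p):
    """p-adic absolute value of a rational x (0 maps to 0)."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    k = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p; k += 1          # each factor of p in the numerator
    while den % p == 0:
        den //= p; k -= 1          # each factor of p in the denominator
    return Fraction(1, p) ** k

# 2^10 = 1024 is "large" archimedeanly, but tiny 2-adically:
print(padic_abs(1024, 2))          # 1/1024
# the ultrametric inequality in action:
print(padic_abs(6 + 10, 2), max(padic_abs(6, 2), padic_abs(10, 2)))
```

This is why the open sets of R and of Qp look nothing alike: in Qp, the "small" numbers are those divisible by high powers of p, not those near 0 on the usual number line.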
He even appeals to George Berkeley, who attacked the foundations and principles of calculus and, in particular, the notion of fluxion (infinitesimal change, derivative) introduced by Newton and Leibniz, for which he coined the phrase "ghosts of departed quantities". An overweening posture for a philosopher who professed that "sensible things are those only which are immediately perceived by sense", but who, as a bishop, must have believed (?) in a God whose existence was certainly not a « sensible thing ».
It would be somewhat unfair from my side to go on discussing mathematics with non-experts. But before passing to physics, let me ask all the true believers of the AM chapel, in the thread of our discussion, to lift for me the paradoxes of Zeno of Elea: Achilles never catching up with the tortoise, the arrow never reaching its target.
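For the record, the limit notion under debate is precisely what lifts the Achilles paradox: the infinitely many catch-up stages sum to a finite time. A sketch (the speeds and head start are my own illustrative numbers):

```python
# Achilles runs at 10 m/s; the tortoise at 1 m/s with a 100 m head start.
# Each "stage" covers the current gap, during which the tortoise opens a new
# gap one tenth as large. The stage times form a geometric series.

stage_time = 100 / 10        # 10 s to cover the initial 100 m gap
total = 0.0
for _ in range(60):          # each stage the remaining gap shrinks by 10x
    total += stage_time
    stage_time /= 10

# Closed form: (100/10) / (1 - 1/10) = 100/9 = 11.111... seconds
print(total)
```

Zeno's "infinitely many steps" are all there, yet their sum converges; without the limit concept, no finite catch-up time can even be stated.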
(*) NB: The Thue-Siegel-Roth theorem asserts that irrational algebraic numbers may not have too many « good » approximations by rational numbers. More precisely, given e > 0 and an irrational algebraic number a, the inequality |a - p/q| < 1/q^(2+e) can have only a finite number of solutions in coprime integers p, q. (End of first part)
"On "complex numbers" -- they have no physical existence." - as Ed Gerck said 14 hours ago and many many times before; don't you undertand?
"Please read the book excerpt #25." - otherwise, you will NEVER understand this!
"And the real numbers exist, since they are real" - if you don't know this, then YOU CANNOT take part in any scientific discussion related to either math or phys, the more - to mathematical applications to physics.
Only The Book for true believers of the AM Chapel will tell you really the real truth. Read it continuously during quantized time without any doubt. All doubting readers are excluded from the right to have right, right?
Without infinitesimals CALCULUS is impossible, and maybe other sciences too...
Dear Ed Gerck
What do you want to say? Different axioms provide different, independent mathematical models. Each system (model) is consistent by itself. Regardless of the continuum property of the real numbers, we need to study and describe the motion of particles via their rate of change (differentiation), their propagation (graphs and traces), and areas and volumes (integration). Modern mathematics shows excellent efficiency in solving almost all problems in all branches of science. Dynamical systems and control theory are an excellent example that shows the greatness of modern mathematics (using infinitesimals). Do you have any alternatives to do so?
Dear Ed Gerck
You said
< mathematical inconsistencies pointed out first by Russell >
Some paradoxes in set theory will never mean mathematical inconsistency.
I understand that there are many philosophical schools in mathematics.
Let's check the efficiency of (the type theory) in solving trivial problems.
Can one determine the length of the circle x^2 + y^2 = 1 using what you called the (type theory), or the calculus of constructions (CoC) that was invented by Thierry Coquand?
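Type theory aside, here is one constructive numerical answer in the spirit of the challenge: the circumference of x^2 + y^2 = 1 as the limit of perimeters of inscribed regular polygons, by Archimedes' side-doubling (my own illustrative code; each step needs only square roots).

```python
# Side-doubling for the unit circle: if s is the side of an inscribed regular
# n-gon, the side of the 2n-gon is sqrt(2 - 2*sqrt(1 - (s/2)^2)).
# The perimeters n*s increase toward the circumference 2*pi.

import math

s = math.sqrt(2.0)       # side of the inscribed square (n = 4)
n = 4
for _ in range(12):      # double the number of sides 12 times (n = 16384)
    s = math.sqrt(2 - 2 * math.sqrt(1 - (s / 2) ** 2))
    n *= 2
print(n * s, 2 * math.pi)    # the perimeter approaches 2*pi = 6.2831853...
```

Notice what the answer is: a limit. Any method, constructive or not, that assigns the circle a length must invoke exactly the limiting process this thread is debating.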
(Second part)
2) Now let us discuss physics. Again, I concentrate on one single point, which is E.G.'s kind of "magical thinking" about the quantic (sorry, quantal) world. Now that the author has deleted his previous answers, I feel obliged to requote his mantra (which I fortunately had copied): Nature is not continuous, but quantal. Nature jumps. By the introduction of limits, people still tried to capture the illusion that Nature is continuous - It is time that mathematics accept that Nature is not continuous, nothing can be thought coherently as infinitesimals - The quantum is the negation itself of limits with infinitesimals - The "law of the excluded middle" is not satisfied in quantum mechanics - Below even quarks and leptons we have quantum waves. As we go smaller and smaller, we find no particles and no classical waves -- just quantum waves. Etc. Unfortunately, in a discussion between people who painstakingly appeal to facts and studies, and people who just toss around big undefined words, the odds are clearly against the first category. But I accept this handicap, being unable to imagine how one could debate on quanta without ever saying a word on quantum theories (QT). I'll keep my recap a minima, referring to a reprint below (*) to fill in the details.
. Great expectations
Let us jump directly to the turn of the 19th-20th centuries, at a time when the physical sciences were giving the impression that their unification was at hand. Three majestic « explain-all » constructions dominated the landscape of physics: rational mechanics, invented by Newton (1687) and perfected by Lagrange (1788) and Hamilton (1833) into analytical mechanics; Maxwell's electrodynamics (1864), the synthesis of all wavy phenomena; and theoretical thermodynamics, initialized by Fourier (1811) and finalized by Maxwell and Boltzmann (around 1870) in the form of statistical thermodynamics. Surveying the landscape in 1900, Lord Kelvin was willing to concede that there were still « two gentle clouds in the serene sky », but he had no doubt about the final issue. Alas, contrary to his expectations, the gentle clouds turned into hurricanes, so devastating that the whole edifice of physics could be saved only by adopting new paradigms in order to escape from paradoxes. Here I stress a point: a paradox is an apparent contradiction. If it can be lifted, this means that the ancient paradigm is not destroyed; it can be embedded in a new paradigm of which it becomes an approximation. Science in particular has always been evolving in this way; it does not « jump », E.G.
So the beginning of the 20th century witnessed two scientific revolutions, the relativistic and the quantic paradigms:
-. I guess that everybody, even the common layman, is at least vaguely aware of the 1887 Michelson-Morley experiment, probably the most famous « negative experiment » in the history of science. Since extremely precise interferometric measurements had shown not the slightest variation of the speed of light depending on the movement of the Earth, the Newtonian paradigm had to be abandoned in favour of the relativistic paradigm (*). In spite of all its confirmed predictions and innumerable daily applications - the Bomb (!), nuclear energy, the GPS… - Einstein's Relativity continues to be the target of raucous attacks, especially from the NM chapel, I wonder why.
-. But our main subject here is QT. As for SR, a paradox needed to be lifted, the so-called « UV catastrophe » (*), more precisely the discrepancy between experimental results and theoretical predictions concerning the electromagnetic radiation of a « black body », i.e. a body in complete thermodynamical equilibrium with its surroundings. The classical theory states that the characteristics of this radiation should depend solely on one parameter, the temperature of the body. The problem is then, for a given temperature T, to describe the spectral distribution of energy, viz. the variation of the volumetric energy density as a function of the frequency. Experiments show a bell-shaped asymmetric curve, more or less flattened according to T. Classical thermodynamics gives a good approximation at low frequencies (the Rayleigh-Jeans law), but at high frequencies it absurdly predicts an infinite energy. As usual, the removal of the paradox had to come from the introduction of a new paradigm. What follows is not a digression, but an obligatory prologue to QT.
. Statistical thermodynamics
Actually, thanks to the atomic hypothesis, somewhat less than half of the task had already been performed by the statistical thermodynamics of Maxwell-Boltzmann (*). In total rupture with the absolute determinism of the time, the M-B theory appeals to statistical methods to connect the concepts of mechanics (mass, force) to those of thermodynamics (heat, temperature, pressure), thus making it possible to recover and improve the two fundamental principles of Carnot (1824). The first principle, that of the conservation of energy, is the macroscopic expression of the microscopic (mechanical) laws governing elastic collisions between molecules. The second principle, reformulated by Clausius (1850), is the interpretation in evolutionary terms of the entropy function, which measures the « state of chaos » of a system. The central result of statistical thermodynamics is Boltzmann's equation, S = k·log W, where S is the entropy; k denotes the Boltzmann constant, a factor of proportionality between the average kinetic energy of the molecules and the temperature of the system viewed as a measure of its molecular agitation; and finally W is the number of microscopic « configurations » which produce a given macroscopic state (these configurations are a very abstract notion which we'll encounter again later in quantum theory, QT for short). At the dawn of the 20th century, Gibbs (1902), then Einstein (1905), put the final touch to statistical thermodynamics (*) by introducing probabilistic (as opposed to statistical) concepts to get access to the microscopic states (as opposed to the macroscopic state) of a physical system. To elaborate his theory of Brownian motion, Einstein linked the probability of occurrence of a micro-state of a system at equilibrium to the average time that the system spends in this micro-state during a period of time tending to infinity. All these pioneering concepts will reappear later in QFT under more sophisticated forms.
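As a toy illustration of S = k·log W (my own sketch, not part of the original post), one can count the microstates of N two-state "molecules" realizing a given macrostate and apply Boltzmann's formula directly:

```python
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def entropy(W: int) -> float:
    """Boltzmann entropy S = k * ln(W) for W microstates."""
    return k_B * log(W)

# Toy system: N two-state "molecules"; the macrostate is the number of
# molecules in the excited state, and W counts the ways to realize it.
N = 100
W_even = comb(N, 50)   # microstates of the 50/50 macrostate (the most chaotic)
W_skew = comb(N, 10)   # microstates of a highly ordered 10/90 macrostate
print(entropy(W_even), entropy(W_skew))  # the balanced macrostate has larger S
```

The 50/50 split maximizes W, hence S, which is the combinatorial core of the "state of chaos" interpretation mentioned above.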
. Quantic revolution
Planck's new paradigm (*) is contained in his 1900 law, the correct expression of the black-body radiation. The law brings together three universal constants - the speed of light c (granted Einstein's later but no less famous formula E = mc^2), the Boltzmann factor k, and the new constant h - and its core is the striking formula E = h·ν, considered the birth act of all the diverse QT. The name comes from « quantum », which means in Latin « elementary quantity ». And so will be called « quantic » any phenomenon, any theory where - directly or indirectly - Planck's constant h appears. In his reasoning, in order to make use of statistical thermodynamics, Planck postulated that all the exchanges of energy between the black body and its electromagnetic radiation proceed in a discontinuous manner, by integer multiples of a quantum of action. The idea of a « discrete » exchange of energy (this is the exact mathematical term: Q is a discrete topological space, which will make E.G. happy) was revolutionary, contrary both to intuition and to the scientific presuppositions of the time. Einstein himself took it up in 1905 to explain the photo-electric effect (*), which occurs only above a certain critical threshold of frequency, and above this limit the energy of the electrons which are produced does not depend on the intensity of the radiation. This phenomenon cannot be explained if one sticks to the wave nature of the radiation. Even a layman could feel at this stage that the quantification (or « discretisation ») of the concept of energy was a first step towards the unification of mechanics and electromagnetism as evoked above.
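To make the threshold behaviour concrete, here is a small sketch (mine, not the original poster's) of E = h·ν and Einstein's photoelectric relation K = h·ν − φ; the sodium work function of 2.28 eV is a commonly quoted textbook value, used here as an assumption:

```python
h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy(nu: float) -> float:
    """Planck's relation E = h * nu."""
    return h * nu

def photoelectron_energy(nu: float, work_function_eV: float) -> float:
    """Einstein's photoelectric law K = h*nu - phi; no emission below threshold."""
    K = photon_energy(nu) - work_function_eV * eV
    return max(K, 0.0)

# Green light (~5.5e14 Hz) carries ~2.27 eV per photon: just below sodium's
# threshold, so no electron is ejected no matter how intense the beam.
# Violet light (~7.5e14 Hz) carries ~3.10 eV and ejects electrons of ~0.82 eV.
for nu in (5.5e14, 7.5e14):
    print(nu, photoelectron_energy(nu, 2.28) / eV)
```

The kinetic energy depends only on the frequency, never on the intensity, which is exactly the point made in the paragraph above.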
But at this stage - and at any later stage, see below - in no way could quantification vouch for E.G.'s magical mumbo-jumbo on the « jumpy » character of Nature. First, it would be wise to avoid the word Nature because of its « catch-all » character (doesn't it include life, which is not the subject of physics?), and to replace it by the term Universe, commonly used to designate the set of all observable physical phenomena. Second, in this (admittedly still vague) setting, it remains to define « (dis)continuity », a notion which could hold paradoxical surprises (even for E.G.). A function on a topological space can be continuous here and discontinuous there; why should it be (dis)continuous everywhere? But we are no longer in our mathematical phase, and I accept to discuss on an intuitive physical basis. This will however be a long winding road. And all the more arduous because of the extreme abstraction and sophistication of the ultimate forms of QT, where the concepts involved no longer have analogues in classical physics, and the mathematical language alone allows one to define them correctly. The intuitive language keeps its power of suggestion, but the ransom to pay is imprecision, and sometimes confusion (no names mentioned) generated by false analogies.
. Wave Mechanics
Planck's quantification had installed the idea of a dual nature of light, both wave-like and corpuscular. This idea was boldly extended in 1924 by Louis de Broglie to any particle of the microscopic world in his theory called Wave Mechanics (WM for short) (*). De Broglie's approach is at the same time wave-based and relativistic. Starting from the assumption that Planck's formula implies the existence of an « intrinsic frequency » attached to the particle under study, and without bothering about the hypothetical pulsatory phenomenon, he succeeded in linking the wavelength λ to the relativistic momentum p in his equation λ·p = h. The wave-corpuscle duality was confirmed by the Davisson-Germer diffraction experiment of 1927, but well before that, the quantic refoundation of mechanics had been launched. Conceptually, WM could not be satisfying, because intuitively a particle is « here or there », whereas a wave is « everywhere ». The ambition of Relativistic Quantum Theory (RQT) was to build a general framework to describe the behaviour of particles at the atomic scale. Independently of any reference to a material corpuscle, the word particle in RQT designates a quantum of energy, which can present itself in a mass form (it is then a material particle) or in an energetic form (it is then a massless particle). But Einstein's equivalence is not an identity, because if it were, matter and energy would transform into each other in an anarchic way - which is not the case. It is precisely the laws of transformation matter/energy that the diverse RQT will try to establish. At the end of his life, Planck is believed to have pronounced this disillusioned assessment: « For me, who devoted my existence to the most rigorous science, the study of matter, this is all I can tell you of the results of my research: there is no matter! » (1947).
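A quick numerical sketch of λ = h/p (my own illustration, assuming non-relativistic momentum p = m·v, valid since v << c here): for the ~54 eV electrons of the Davisson-Germer experiment, the de Broglie wavelength comes out comparable to the atomic spacing in a nickel crystal, which is why diffraction was observable at all:

```python
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

def de_broglie_wavelength(mass: float, speed: float) -> float:
    """de Broglie's relation: lambda = h / p, with p = m * v (v << c assumed)."""
    return h / (mass * speed)

# A 54 eV electron (Davisson-Germer, 1927) moves at roughly 4.4e6 m/s.
lam = de_broglie_wavelength(m_e, 4.4e6)
print(lam)  # about 1.65e-10 m, i.e. on the scale of interatomic distances
```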
. The unreasonable RQT
RQT is distinguished from Wave Mechanics not only by its content, but also by its theoretical formalism, inspired by analytical mechanics and developed around 1930 by the « Copenhagen school » (Bohr, Heisenberg, Dirac…). Dirac's presentation of the principles of RQT (*) rests on the mathematical theory of Hilbert spaces (whose vectors are called states) and Hermitian operators on these spaces (called observables). I'll say nothing about them, except to wonder how they could be explained/understood without the complex numbers (remember, they don't exist according to E.G.). Among the 6 principles of RQT, the 4th one is of a probabilistic nature and, under the name of Heisenberg's uncertainty principle, has probably given birth to the most impressive array of misinterpretations in popularization texts. And in this quotation from E.G.: Reality is observer-dependent, in QM and life. Starting with the Heisenberg principle, observer and experiment cannot be dissociated. There is no objectivity in QM therefore (objectivity would be observer-independent, contradicting QM). The truth is that Heisenberg's principle does not bear on any subjectivity of measurement, but on a fundamental impossibility due to the Planck quantum. Mathematically, this is explained by the « non-commutation » of two operators: for example, if x is a position and p a momentum, then the « commutator » is [x, p] = iħ (with ħ = h/2π, a basic relation of the theory). And physically this implies that any gain of precision in the measurement of one of the parameters must be compensated by a loss on the other. Niels Bohr used to say: « Whoever is not shocked by QT does not understand it ».
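The non-commutation can even be checked numerically (a sketch of mine, with ħ, mass and frequency all set to 1): truncating the position and momentum operators of a harmonic oscillator to finite matrices, built from the usual ladder operator, gives [x, p] = i on every diagonal entry except the last, which is an artifact of the truncation:

```python
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                  # position operator (hbar = m = omega = 1)
p = 1j * (a.T - a) / np.sqrt(2)             # momentum operator (Hermitian)

comm = x @ p - p @ x
print(np.round(np.diag(comm), 10))
# Prints i everywhere except the last entry, -(N-1)*i: the infinite matrices
# satisfy [x, p] = i*hbar exactly, and the defect is pushed to the cut-off.
```

No finite matrices can satisfy [x, p] = iħ·I exactly (the trace of a commutator is zero), which is one concrete reason the theory needs infinite-dimensional Hilbert spaces.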
The « unreasonable efficiency » of the quantic principles, through the many successive variations of RQT bearing such exotic names as Quantum Electrodynamics (QED) or Quantum Chromodynamics (QCD), can be judged by the incredible list of practical applications which have taken such a place in our daily life that we tend to forget their extraordinary origins: the laser, the scanner, the scanning tunneling microscope, the USB drive…
. From a zoo to a field
In spite of its successes, RQT suffered from a major deficiency: it could only describe systems with a fixed number of non-interacting particles; in particular the process of creation/annihilation of particles was outside its scope. One annoying consequence was the proliferation of a menagerie of more than 400 « elementary » particles, about which Enrico Fermi used to joke: « If I could memorize all their names, I would be a zoologist, not a physicist! » Order came only with the adoption of Quantum Field Theory (QFT), another new paradigm. Not so new in fact, since before the quantum field there was the configuration space of Lagrange (1788) and Hamilton (1833), a mathematical reformulation of Newtonian mechanics aiming to eliminate the (disputable) notions of force and material point from the study of a dynamical system (*). Such a system S is defined as a set of n points (where n can be infinite, as in the case of a fluid) equipped with « dynamical parameters » such as « masses » and « quantities of movement » (note that these are purely abstract functions, called « generalized coordinates »). The idea is to introduce an abstract configuration space with 3n degrees of freedom, endowed with a dynamical operator, which is simply a continuous (yes, E.G.) function supposed to describe the states of S and their evolution, using the « symmetries » of S. A first example is the Lagrangian field L(q_i, q'_i, t), which in classical mechanics is simply the difference between the kinetic and potential energies of S. The laws governing the dynamics of S are deduced from a principle of minimal action of the field. From this one can immediately deduce the Euler-Lagrange equations of classical mechanics. Hamiltonian mechanics is an improvement of Lagrangian mechanics, introducing the Hamiltonian field H(q_i, p_i, t), where p_i is the partial derivative of L w.r.t. q'_i, called a momentum.
In classical mechanics, the Hamiltonian is simply the sum of the kinetic and potential energies of the system. The advantage of the Hamiltonian is that it allows one to replace the second-order Euler-Lagrange equations by the first-order canonical equations of Hamilton.
As abstract as it may seem, the Lagrangian approach is not just a gratuitous formalism. On the contrary, it provides a systematic methodology for obtaining the equations of evolution of diverse systems, of course under an adequate choice of parameters. An emblematic example is Maxwell's electromagnetic field (1864) (*), which Einstein and Infeld view as a major conceptual innovation: « The whole space is the scene of Maxwell's laws and not, as in mechanics, only the points where matter and electrical charges are present (…) The field here and now depends on the field at a point which is immediately nearby and at a time which is immediately earlier. The equations allow us to predict what will happen a little further in space and a little later in time (…) The result of such a deduction is the electromagnetic wave ». Note that the framework here is relativistic: in a space-time continuum, physical interactions take place between matter and energy, which can be discontinuous at the (sub)atomic scale where the wave-corpuscle duality manifests itself. Curiously, Einstein was particularly focused on the contradiction, according to him, between « material points whose dynamics are ruled by ordinary differential equations and fields whose mechanics are ruled by partial differential equations ». In any case, in the setting of RQT, the last step to unification - outside gravitation - is clearly located in the process of creation/annihilation of particles.
. QFT and gauge theories
It is impossible to give a digest of such a sophisticated theory as Quantum Field Theory (although an attempt was made in (*)!). To give a quick idea, one can say that, just as rational mechanics was quantified by RQT, analytical mechanics is quantified by QFT. « In its mature form, the idea of QFT is that the quantum fields are the basic ingredients of the Universe, and that particles are only packets of energy and momentum of these fields […] QFT thus leads to a more unified view of Nature than the old dual interpretation in terms of both particles and fields [as in RQT] » (S. Weinberg, 1979). What must be stressed is the supplementary step towards the elimination of the concept of substance in favour of the concept of field. The field is no longer a mere mathematical gadget; it becomes a fundamental actor in the physics of particles: the relativistic mass-energy particle had already been eclipsed by the quantum of action of RQT; the latter now steps aside to leave room for the quantum of interaction between quantum fields. We are far, very far away from the first naive images of E.G.'s representation of jumpy Nature and epileptic quanta.
(*) In some more detail, a first step is to interpret a quantum field as a collection of state vectors and observables as above, a second to introduce operators on the field. Not surprisingly, the two main operators here are those of annihilation and creation. As for the Lagrangian, they are constructed using a small number of « symmetries », from which one deduces « quantic invariants », i.e. parameters which are conserved during the evolution of the system. The vague term « symmetry » need not be taken in a strict geometrical sense (central, axial symmetries…), but in the larger intuitive sense of conservation of such quantities as momentum or mass/energy… But the key is to transform such intuitions into mathematically manipulable concepts. (*) Klein's famous Erlangen program (1872) proposed to lay the mathematical foundations of geometry on actions of groups on sets and the invariants under these actions. In the spirit of this program, Emmy Noether, who worked in Göttingen with Hilbert and Klein, proved in 1918 a theorem on differential invariants in the calculus of variations which is considered nowadays as one of the most important ever discovered for the orientation of modern physics: « To any infinitesimal transformation which leaves the integral of action unchanged, there corresponds a physical quantity which is conserved ». I think this goes against E.G.'s view of the application of mathematics to physics: mathematics does not overload physics; on the contrary it simplifies it by extracting hidden principles. This appears with absolute clarity in the development of QFT, where the architecture of the notorious Standard Model is commanded by:
Theory                 Gauge group             Force(s)
QED                    U(1)                    electromagnetic force
Unified electroweak    SU(2) x U(1)            electromagnetic and weak forces
QCD                    SU(3)                   strong force
Standard Model         SU(3) x SU(2) x U(1)    the 3 forces unified
About the groups: all of them have complex (non-existing, remember) coefficients. U(n) denotes the group of unitary operators preserving the structure of an n-dimensional complex Hilbert space, and SU(n) the subgroup of unitary operators of determinant 1.
About their actions: U(1) is the natural gauge group of classical electrodynamics; it remains the gauge group of QED.
About the forces: Elm. speaks for itself. The other ones (weak and strong) are the interaction forces inside the atom.
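For readers who want to poke at these definitions, here is a small numerical check (my own illustration, not part of the original post) that a given matrix belongs to U(n) or SU(n):

```python
import numpy as np

def in_U(M: np.ndarray) -> bool:
    """M belongs to U(n) iff M @ M^dagger equals the identity."""
    return bool(np.allclose(M @ M.conj().T, np.eye(M.shape[0])))

def in_SU(M: np.ndarray) -> bool:
    """SU(n) additionally requires det(M) = 1."""
    return in_U(M) and bool(np.isclose(np.linalg.det(M), 1.0))

# A rotation by theta, viewed as a 2x2 complex matrix: an element of SU(2).
theta = 0.7
M = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
print(in_U(M), in_SU(M))      # True True
print(in_SU(1j * np.eye(2)))  # False: unitary, but its determinant is -1
```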
(*) I append a file on QT to fill in the details, not for publicity. All the (*) in the text refer to it. Note that it is only a semi-popularization article, quite apart from all my truly scientific publications, to be found only in official journals with an editorial board, anonymous referees, and subsequent reviews in Zentralblatt and Math Reviews.
First you have to admit there is a real problem
The smallest of the positive real numbers does not exist.
So in this way the infinitesimal is some kind of ideal, nonexistent concept that you get close to but never achieve.
The hypercomplex number e does do the trick, but it is not a real number (e·e = 0).
Jean
You are using terms that I simply cannot follow, and I have been exposed to a lot!
EG: Words show anything.
You didn't reply to my request yet.
Let me ask a more simple exercise: solve x^2 = 2.
Don't hide behind rationals that have measure zero on the real line!!
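For what it is worth, the exercise can be carried out entirely inside the rationals; a sketch using exact rational arithmetic (Newton's iteration, my choice of method, not necessarily the poster's): every iterate is a rational number whose square approaches 2, but no iterate ever squares to 2 exactly.

```python
from fractions import Fraction

# Newton's iteration for x^2 = 2, with exact rational arithmetic:
# x -> (x + 2/x) / 2. Each iterate is a rational; the error roughly squares
# at each step, but x*x == 2 is never attained in Q.
x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2
print(float(x))    # 1.4142135623730951 to machine precision
print(x * x == 2)  # False: no rational squares exactly to 2
```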
Dear Thong Nguyen Quang Do
I want to thank you for the professional (original) answer.
Indeed, it could serve as an excellent introduction to an article showing the developments of mathematics through the last three hundred years.
EG: You said
( Meanwhile use a calculator -- you will be using only finite rationals, possibly only integers. )
Frankly, this is a wrong answer. Different calculators or machines provide different answers, because of the different algorithms used (truncated power series).
Mathematically, this OPERATION is not well defined and not accepted.
Some people use calculators to find approximations for solutions that are not exact. Well-defined operations will give a unique answer.
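The point about well-defined operations can be illustrated directly (my own sketch): rounded floating-point arithmetic depends on the order of operations, while exact rational arithmetic gives one unique answer.

```python
from fractions import Fraction

# Floating-point "calculator" arithmetic is rounded, so the result can depend
# on the algorithm; exact rational arithmetic cannot.
print(0.1 + 0.2 == 0.3)                                      # False (rounding)
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True (exact)

# Two algebraically equal float expressions can even disagree with each other:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: float addition is not associative
```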
*Thanks to TNQD and IK, who cleverly showed us by absurdum, that we cannot expect a mathematician to directly understand quantum mechanics (QM) first
Thanks to E.G. for putting us (IK and myself) among the founders of RQT: Dirac, Heisenberg... This is too much honour!
*The two “worlds” meet through the identity 0.99999999… = 1, connecting both DC and conventional mathematics.
Thank you for the big LOL. Here come back Achilles and his tortoise !
EG: "This seems to contradict Newton, Leibniz and Cauchy. How could they all be wrong? Yes, in maths they are right and MOSTLY coherent with each other, but do not correspond to what can be constructed in the mind."
Depends on the mind. E.g., AI has no mind, thus it cannot construct anything in a mind.
EG: "it is time that mathematics accept that matter is not continuous "
It is not mathematics which claims what matter is and what its properties are. The aim of such "sensations" is oriented toward the TOTAL DEPRECIATION of mathematics, and they should be forbidden from public exposure.
EG: "As a first axiom in Digital Constructivism that hopefully becomes self-evident -- objects may not exist physically, such as for lack of materials, but at least mentally they do have to exist (they must be imaginable)."
This contradicts rejecting pi as a non-constructible object, since - at least I - can IMAGINE this object, at least mentally.
The whole "book" makes the impression of a list of wishes, without any reasonable and consistent correspondence with what it refers to, like the philosophies of Brouwer, Berkeley, Martin-Löf etc. It even lacks a logically acceptable definition of the key notion of Digital Constructivism. Of course it cannot be given, since logic is rejected as well, by stating that they reject the so-called law of the excluded middle.
Conclusion: anyone who wants to read the "book" should first say "good bye" to ANY logic. Indeed, there is no possibility to state that the book is OK or not-OK, since according to its "pseudo-logic" it is possible that "it is not-OK and OK simultaneously", up to the wish of the reader. THIS IS NOT an object of "Digital Constructivism", since I cannot imagine this - even mentally. Contrarily, I can imagine what 2^(1/2) is, which in turn is not accepted by the Author as an object of this non-consistent notion called "Digital Constructivism".
Globally I have the impression that all the stuff is discussed with an Artificial Intelligence, which - if true - should warn all followers that we are being manipulated by the creator of this machine, which - in subsequent comments - I will call It (just in case) The Highest Artificial Intelligence (briefly: THAI).
@ Ed Gerck This discussion is turning into a farce. When I laughed about the return of Achilles and his tortoise, I meant to point out that after many circuitous empty speeches, you were again forced to come back to the central subject, viz. the approximation of reals by rationals. I was dumbfounded to see you upvote my LOL and seize it as a hint to get out of the « cubic root of 2 » predicament where IK had cornered you, and at the same time lift the paradox of Zeno. As usual, your verbiage consists in threading empty big words on a nonexistent string. A rational number maps 1:1 back to a set of integers: what is this map, what is this set? You speak of numbers, but how do you make operations on your (undetermined) sets? We can define a notion of "close" but not of "next": what is « close », what is « next »? … Etc., etc. I'd waste my time citing all the meaningless blather which allows you to « conclude » triumphantly that DC solves the so-called "Zeno paradox" by showing that it does not exist. The only counter-argument which comes to my mind is appended below.
All our previous « discussion » unfortunately suggests that you did not really read, nor perhaps understand, the preliminaries of the problem raised by Zeno's paradox. I interpret (since nothing is defined, what else could I do?) your mumbo-jumbo about a pretended « DC identity » as the writing in DC style of an approximate value 0.99999… = 1. More precisely, write it as 1 = 9[(1/10)+(1/100)+(1/1000)+…], and Zeno's paradox can be reformulated as: « how can the infinite sum between brackets be equal to 1? » Since you seem to have no knowledge of what a scientific proof should be, I see no interest in any further discussion ./.
YES, infinitesimals can and should be eliminated from our calculus.
By taking the form e.g. Integral of [{ f(d+t)-f(d-s) }/{(d+t)-(d-s)}]
where d may be some midpoint, with t and s taking some range away from d.
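For comparison with standard usage: reading the bracketed form with t = s = h gives the classical central difference quotient, which approximates f′(d) with an error that shrinks like h², and involves no infinitesimals. This is my reading of the formula, which may not be what its author intends:

```python
# Central difference quotient: the bracketed form with t = s = h.
# For f(x) = x**3 at d = 2, the exact derivative is 12, and the
# approximation error is exactly h**2 for this particular f.
def central_difference(f, d: float, h: float) -> float:
    return (f(d + h) - f(d - h)) / ((d + h) - (d - h))

f = lambda x: x ** 3
for h in (0.1, 0.01, 0.001):
    print(h, central_difference(f, 2.0, h))  # approaches f'(2) = 12
```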
And why would we want to do this: create a whole new calculus, based on discrete values? Based on the realization that the equations we use today have one and the same simple form; and, the physics that we observe today, are based on ONE natural principle. This one natural principle describes most interactions to be discrete. Thus do we advance the above change and many more changes described in the article below. Please see half way down in the article provided below for a discussion of the new mathematics.
Working Paper Summary of Physics & foundation for mathematics transpose
Please see the article above for a discussion of the need to modify our calculus, our algebra and finite mathematics. Yet what do we see in this article? We see one form advanced to represent most equations we see. We see this one form to contain at least two components, a minimum ratio and some constant squared. We see the above-referenced discrete change in the calculus adopt a logarithmic form. We see this logarithmic form represent nature better as we identify two natural processes. We see changes in the finite and algebraic mathematics. Finally, we discuss and represent a master equation to "transform" modern equations into what we understand to be a "better," natural symbolic language.
I will follow Thong Nguyen Quang Do since for me also the comments and responses of the Author of this question seem present him as having no knowledge of what a scientific proof/discussion should be, and therefore there is no interest in any further discussion.
Joachim Domsta
Dear Ed Gerck,
Would love to talk with you further. I am so excited that you have come to some of the same conclusions, yet i am surmising that you come from a completely mathematical perspective? That is impressive. Wonderful. i have provided video demonstrations below you must see. And i hope that you will to read first, then rest, and finally reflect on the changes i describe in the article:
Working Paper Summary of Physics & foundation for mathematics transpose
Thank you for your critique.
Would you consider nature in a new light? I wonder if you would relate the same message of theoretical and practical results of forward difference "instability?"
My concern is not so much with this initial change in symbolic form, for the results are perhaps the same as when utilizing the "infinitesimal." However, it is in merely building a methodical, comprehensible symbolic model that represents nature BETTER.
In my "transform" i build a new variable, with both character and reference, as to not have both of these natural qualities, we experience a mis-referencing in our equations in three separate classes. Three. It is in this mis-referencing of variables that we create most of our problems today in the physics.
If i may continue?
In this new physical paradigm do we understand zero and infinity and one, not to be natural; e.g. we may draw a box around something in nature and say nothing exists, yet it (something) does. When we carefully review this new paradigm in the physics, what i call robust entanglement, we go even further still, to deny self referencing, and even to define all things as some trinity of hosts: two locales and some flux formed to include the very distance between found symmetries. As each entanglement occurs across any distance, we find infinity to not be required and unnatural.
In addition, in relating to you that this new paradigm understands all things to communicate, all atomic motion is so derived; then the algebra adopts but two processes, only two natural operators: adjudication & comparison. One to elicit character, the other to elicit reference.
I hope you will not dismiss me so quickly; for i am thinking that you may be immersed in a more traditional standard model of the universe. Although with your brilliant conclusions i wonder about this as well.
It was at the age of 17 that i understood and wrote the mathematics above; to form one common format of variables; one expandable equation. It was over the next thirty-five years that i wrote of the nature of physical interactions to include but one natural phenomenon, i call robust entanglement, RE. Here do we see that it is the sheer distance, once incorporated into any flux, after symmetries form; that we see four DISTANCE classes of events and so mistake four different energies for ONE principle.
It is in eliciting four new energy species with vastly different morphologies, margins, and thus properties in an elastic universe, formed in EXTENDED entanglements that we begin to unravel the complexity we have known in the past.
However, I acknowledge that the very simple symbolic reference: Integral of [{ f(d+t)-f(d-s) }/{(d+t)-(d-s)}] MAY have flawed theoretical and practical results; but i wonder if but in an old paradigm? In a more traditional standard model of universe, d may represent some midpoint; though in my STANDARD model, new standard model, d represents the distance in each communication, each entanglement, each newly formed field or created "energy."
I will not attempt further to convince you of a more simplified mathematics; you may not have looked at the physics.
Here are five crude video demonstrations of what only RE theory can explain.
https://youtu.be/A-p0XquJvig
http://youtu.be/X5hCOEauY0o
https://youtu.be/DZsJnCym_d4
https://youtu.be/9-XUqwhvfCA
latest video demonstration below
https://youtu.be/xPhRcmbEL5k
Kindest of regards,
Mark
Dear Christian Baumgarten
I think that you are trying to make this thread scientific. Sorry, but there is NO CHANCE. You will sooner get to know that you are persona non grata than the wrongly programmed machine playing a science-like game with us will understand anything of science. So many incoherent, meaningless claims and mishmashes of partial truths and lies by one representative of the AI are impossible to set right without ourselves being such machines, too. Are you an AI?
Best regards, Joachim Domsta
PS. Wouldn't it be useful and interesting to open a thread with the question:
How to eliminate from scientific posts of RG ANY oratorium written by ANY pseudo scientist or a representative of AI?
Christian Baumgarten
I am not a mathematician, but the identity of 0.99999... and 1 does not appear to be a strong argument against the infinitesimals.
Of course it isn't an argument against infinitesimals. In the decimal representation of numbers, some numbers can be represented in more than one way. So what? We could say that the infinitely many equalities 1 = 2/2 = 3/3 = 4/4 ... etc. are "an argument" against rational numbers, but that would be wrong.
I think you should use some non-real numbers for infinitesimals. For example, I use 'epsilon' ($\epsilon^2 = 0$) for the limit. This is called automatic differentiation (dual numbers). So you don't have to use limits or approximations. With this property, $\epsilon$ is the number (or whatever) closest to zero.
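The "epsilon with epsilon² = 0" idea is the dual numbers, the basis of forward-mode automatic differentiation; a minimal sketch of mine (class and function names are my own):

```python
# Dual numbers a + b*eps with eps**2 = 0: carrying the eps-coefficient
# through arithmetic computes exact derivatives, no limits involved.
class Dual:
    def __init__(self, a: float, b: float = 0.0):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x: float) -> float:
    """Evaluate f at x + eps; the eps-coefficient is exactly f'(x)."""
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x * x + 2 * x, 3.0))  # 3*3**2 + 2 = 29.0
```

Note that eps is nilpotent, not invertible, so this extension of the reals is an algebra with zero divisors rather than a field; that is the price of eps² = 0.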
@ Christian Baumgarten, Issam Kaddoura You appeal to notions of density or measure - in the math. sense - to refute the new dazzling revolutionary theory of Digital Constructivism (DC for the happy few). I think these are futile efforts, because you rationally attack a closed system - in the pathological sense that it denies the very « reality » it pretends to explain. The only way, if there is one, would be to drill a hole in this strongbox of gargling certitudes.
A leverage point could lie in the proclaimed « identity » 0.999… = 1. The central question is: what does 0.999… mean and where does it live? The suspension points … are a crucial problem. As I have already stressed, 0.999… means (9/10)+(9/100)+(9/1000)+…, and Zeno's paradox is essentially equivalent to the question why this infinite sum should be finite (not even to say equal to 1). A paradox is an apparent contradiction, but this one was beyond the grasp of the Greek mathematicians. It could be lifted only by appealing to differential calculus, and differential calculus takes place in R, whatever DC may shout. I recall the two main definitions of the real numbers so as to shed two different lights:
1) Roughly speaking, Cauchy's completion defines real numbers as equivalence classes of Cauchy sequences of rational numbers. A short statement summarizing it all is that Q is everywhere dense in R. This introduces (and imposes) the notions of limit / convergence. It solves Zeno's paradox, because every graduate student knows that the « geometric series » 1+x+x^2+x^3+…+x^n+… converges to 1/(1-x) when the modulus of x is < 1. This is of course out of reach of DC, which denies the existence of real numbers, of continuity, of infinitesimals, hence of limits.
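The convergence claim above can even be checked in exact rational arithmetic: the n-th partial sum of the geometric series misses the limit 1/(1-x) by exactly x^(n+1)/(1-x), which shrinks geometrically. A small Python check (illustrative only, using the standard library's exact rationals):

```python
from fractions import Fraction

def partial_sum(x, n):
    """Exact partial sum 1 + x + ... + x^n; closed form is (1 - x^(n+1)) / (1 - x)."""
    return sum(x ** k for k in range(n + 1))

x = Fraction(9, 10)
limit = 1 / (1 - x)                      # 1 / (1 - 9/10) = 10, exactly
for n in (5, 20, 100):
    gap = limit - partial_sum(x, n)
    assert gap == x ** (n + 1) / (1 - x)  # the residue shrinks geometrically
    print(n, float(gap))
```

Every identity here holds exactly (no floating-point rounding), which is as close as finite computation gets to exhibiting the limit.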
2) The second definition of the reals is Dedekind's theory of « cuts », which puts the accent on the order relation. A Dedekind cut (*) is a partition of Q into two disjoint nonempty subsets A, B s.t. any x in A is strictly smaller (<) than any y in B, and moreover A has no maximal element. The set of all Dedekind cuts will be called temporarily R'. From this very abstract definition, Dedekind was able to show that R' can be given a field structure, i.e. can be endowed with addition, multiplication, division by a nonzero element, and with a compatible total order relation. From our point of view, the main theorem is that the usual R can be identified with R', and in this process, Q is identified with the set of all Dedekind cuts associated to all subsets A of Q consisting of rationals x smaller than a given rational a (thus we recover the 1 at the beginning of the discussion). This is somewhat reminiscent of the mumbo jumbo in the introduction to DC (which pretended to solve Zeno's paradox). In the best (or worst) case, this would contain DC. But this case is excluded, since DC denies the existence etc., whereas Dedekind proves that Q is contained in his R' = R, and moreover is everywhere dense.
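To make the cut construction concrete, here is a small Python sketch (my own illustration) of the Dedekind cut defining √2: the lower set A consists of all rationals q with q < 0 or q² < 2, and A has no maximal element, because from any positive q in A one can compute a strictly larger rational that is still in A:

```python
from fractions import Fraction

def in_lower_cut(q):
    """Membership in the lower set A of the Dedekind cut for sqrt(2):
    A = { q in Q : q < 0 or q**2 < 2 }."""
    return q < 0 or q * q < 2

def bigger_element(q):
    """Given a positive rational q in A, return a larger rational still in A.
    The map q -> (2q + 2) / (q + 2) fixes sqrt(2) and is increasing, so it
    carries anything below sqrt(2) to something strictly between q and sqrt(2)."""
    return q + (2 - q * q) / (q + 2)

q = Fraction(1)
for _ in range(4):
    q = bigger_element(q)          # 4/3, 7/5, 24/17, ... climbing toward sqrt(2)
    print(q, float(q))
```

The witness function is exactly what rules out a maximal element in A, which is the subtle half of the cut's definition.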
(*) NB: At the cafeteria of Braunschweig university - the one where Dedekind was a professor for his entire career - they sell a sandwich called Dedekind Schnitt.
E.G., The magic of the golden ratio is that it is irrational.
The secrets of beauty in nature are related to the existence of such numbers.
What is the meaning of a new deformed model ( If exists!) that will destroy all such fascinating objects?
By the way, what is the alternative value of the golden ratio in the new revolutionary DC? I doubt that you or anyone can produce one.
A fiction? Essentially, it is true that whenever we notice exceptional beauty and harmony in nature, we will usually find the presence of the golden ratio, so one should not wonder why this concept, which connects mathematics, nature, science, engineering and art in a very unusual and interesting way, is present in all aspects of human life. Human aspiration is to be surrounded by structures and works pleasant to the eye, so it is logical to expect the magic of the golden ratio to be found in the pores of mathematics, architecture, painting, sculpture, music and many other scientific disciplines.
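Incidentally, the irrationality of the golden ratio can be pinned down concretely: the ratios of consecutive Fibonacci numbers converge to φ, yet no rational number ever satisfies the defining equation x² = x + 1 exactly (its roots are irrational). A quick Python check (illustrative only):

```python
from fractions import Fraction

def fib_ratio(n):
    """Rational approximation F(n+2)/F(n+1) to the golden ratio, after n steps."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return Fraction(b, a)

phi = (1 + 5 ** 0.5) / 2            # floating-point golden ratio, ~1.6180339887
for n in (5, 10, 30):
    r = fib_ratio(n)
    assert r * r != r + 1           # no rational exactly satisfies x^2 = x + 1
    print(n, float(r), abs(float(r) - phi))
```

The approximations get arbitrarily close, but the defining identity is never hit by a ratio of integers, which is exactly what "irrational" means.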
I'm afraid that if I believed only in constructive math, I would never have advanced very far career-wise.
Agree with Ferhat
All: Gentlemen, I am suggesting to close this thread.
This correspondence consumes a lot of energy and time (and produces a lot of trash in my Recycle Bin). I do not want to address things ad personam, but when someone is insufficiently educated in mathematics and denies its very foundations - it is too bad. Yes, RG is a platform for science, but not for pseudo-science. Think about it!
@Roman Sznajder You (and myself) suggest to close this thread, but unfortunately this is wishful thinking, because we cannot stop somebody who, although "insufficiently educated" in science (math and in physics) has embarked onto a psychotic crusade to "take control of math" (in his own words).
Ed Gerck I'm not sure my question was clear enough. Finite is unbounded. Finite can be as large as you want. There is no largest integer as we leave mathematics unbounded, but this means it must always be incomplete. The set of rationals, integers, etc. must always be incomplete (Cantor/modern maths is inconsistent, as the concept of an infinite set is inconsistent).
If you wish to make things "complete" then you must set a bounded limit. What is your upper/lower bound for numbers? Does 100^100^100^100^100^100 exist? Is there a point when 100^100^100^... ceases to exist?
√2 is a well-defined number (the hypotenuse of a 1,1 right triangle). Its decimal expansion is tricky, but I can define a lazy evaluation which will deliver it to as many digits as you want. How many do you want? At what point do you decide the number no longer exists? What happens when I scale my ruler to make exactly the same triangle √2,√2,2? Which numbers exist now?
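The lazy evaluation mentioned above is easy to realize with integer arithmetic alone. For instance (a sketch of my own, not the poster's actual code), isqrt(2·10^(2n)) equals floor(√2·10^n), so it delivers the first n decimal digits of √2 on demand:

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2), via pure integer arithmetic:
    isqrt(2 * 10**(2*n)) == floor(sqrt(2) * 10**n)."""
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[0] + "." + s[1:]

print(sqrt2_digits(15))  # -> 1.414213562373095
```

Ask for as many digits as you like; only finitely many are ever computed, yet there is no point at which the process "runs out" of digits.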
EG: " finite rationals, which is all we humans can measure or there exists in our mind "
No, since there are minds which contain much much much more than finite rationals.
No surprise that stupidities like the quoted statement appear, since it is possible that they are generated by an AI. There are some other possibilities, none of them, however
Christian Baumgarten
..2^64 or similar.
It is likely 2^63 - 1, because one half of the range must be spent on negative numbers and zero.
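For the record, that figure comes from two's-complement representation: the 2^64 bit patterns of a signed 64-bit integer are split between negatives, zero, and positives, so the maximum is 2^63 - 1. A one-glance check in Python:

```python
# Signed 64-bit two's-complement range: 2**64 bit patterns split between
# negatives, zero, and positives, so the maximum value is 2**63 - 1.
INT64_MIN = -2 ** 63
INT64_MAX = 2 ** 63 - 1

assert INT64_MAX - INT64_MIN + 1 == 2 ** 64   # all bit patterns accounted for
print(INT64_MAX)  # -> 9223372036854775807
```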
The "holy war" against real numbers is meaningless. The statement that the only existing things are those which can be constructed or imagined in human mind is wrong, to say the least.
Types are part of the problem. It is a common belief that rationals and irrationals are different. In the limit the rationals and irrationals converge to exactly the same thing - the infinite concept of continuity. The distinction works because our sets are always incomplete. Attempts to make mathematics "complete" make it inconsistent - countable infinity is inconsistent - it makes finite (unbounded) and infinite the same thing. There is no way around this. Mathematics cannot be both consistent and complete, and if it is not consistent it is not mathematics. In computing we have both "weakly" and "strongly" typed languages. Neither is better than the other. You pick the best tool for the job.
EG:>> DC led to a "first-quantization" and then to a "second-quantization",
Dear Joachim,
You forgot to mention this other "pearl" of pedantic ignorance (if I may say so): Einstein ended that when he explained the Brownian Motion, with discrete atoms and molecules hitting the pollen grain, for which a forgotten Frenchman later got the Nobel Prize, with many measurements. The explanation of Brownian motion by collision of molecules was a banality of the atomic theory at the turn of the 19th-20th century; Einstein's key contribution was his quantitative probabilistic (in a precise sense) treatment of it, while his own Nobel Prize (1921) was awarded for the photoelectric effect, not for Brownian motion. As for the "forgotten Frenchman", he is Jean Perrin, whose painstaking measurements of Brownian motion confirmed Einstein's predictions and earned him the 1926 Nobel Prize; he is hardly forgotten. I recalled all this in my post of 3 days ago, but perhaps "your" AI is equipped with an automatic censoring/forgetting/not-answering function!
Roman Sznajder wrote: " All: Gentlemen, I am suggesting to close this thread. This correspondence consumes a lot of energy and time (and produces a lot of trash in my Recycle Bin). I do not want to address things ad personam, but when someone is insufficiently educated in mathematics and denies its very foundations - it is too bad. Yes, RG is a platform for science, but not for pseudo-science. Think about it! "
Ed Gerck added a reply 15 hours ago
RS: You realize you are going against the RG ToS.
This is not acceptable within RG. Sorry, Roman, I didn't notice your very appropriate call to close this thread, with your excellent argumentation. From now on I am supporting this appeal and call on all RG members to ignore this pseudo-science by Ed Gerck. Since there is no chance of making the author write reasonable answers/claims/statements etc., the best choice is NOT TO PARTICIPATE IN THIS FARCE OF SCIENCE.
If you agree that 0=0.0000000... then you have to agree that 1=0.999999999..
Because 1 - 0.9999999... = 0.00000000...
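The same point can be made quantitatively: after n nines, the gap to 1 is exactly 10^(-n), which falls below every positive rational, and that is the precise sense in which 0.999… = 1. A quick check with exact rationals (my illustration):

```python
from fractions import Fraction

def nines(n):
    """0.99...9 with n nines, as an exact rational number."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 10):
    gap = 1 - nines(n)
    assert gap == Fraction(1, 10 ** n)   # the gap is exactly 10**(-n)
    print(n, gap)
```

No finite truncation equals 1, but the gap is squeezed below any positive bound, which is what the limit statement asserts.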
Responding to a question by PM that may be of interest: why abandon continuity?
If light propagation were instantaneous, Newtonian physics would apply and there would be no difference in clocks -- time would be absolute, simultaneity would exist, continuity would exist, and each place would have a tag in time. That is contradicted by all experiments.
On another point, not only on RG, some people still think that Newton was right. Everyone who still believes that continuity exists, including all conventional mathematicians, is using that failed model. Everything, though, is digital, quantum. Every single equation can be calculated by a computer, which does not use continuity.
Why would time be different? Not only does it take time to go from point A to point B; there is an essential granularity in doing so, with quantum jumps. Since integers can be mapped 1:1 to rationals, we can calculate using integers -- which are NOT continuous. This can be faster and more precise. Goodbye, Newton -- we stand on the shoulders of giants and can see further. As we advance, we make room for others... continuity does not exist, even mentally.
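The 1:1 map between integers and rationals invoked above is standard; one explicit bijection is the Calkin-Wilf enumeration, in which the binary digits of n encode a path down a tree whose breadth-first traversal lists every positive rational exactly once. A Python sketch (my own illustration, not the poster's construction):

```python
from fractions import Fraction

def calkin_wilf(n):
    """n-th positive rational (n >= 1) in the Calkin-Wilf enumeration,
    an explicit bijection between positive integers and positive rationals.
    The bits of n after the leading 1 steer left (q -> q/(q+1)) or
    right (q -> q+1) from the root 1/1."""
    q = Fraction(1)
    for bit in bin(n)[3:]:
        q = q + 1 if bit == "1" else q / (q + 1)
    return q

print([calkin_wilf(n) for n in range(1, 8)])
# breadth-first order of the tree: 1, 1/2, 2, 1/3, 3/2, 2/3, 3
```

Each positive rational appears exactly once, so the enumeration never repeats and never misses a value, which is precisely what "mapped 1:1" requires.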
The quantum is not per se digital; it only is, perhaps, when you come to measurement, or to quantum computers.
Also, you do not actually need numbers to imagine the continuum. Better if you do not. You just use algebra.