I believe I have solved what was called the "most fundamental unsolved problem of physics" by Paul Dirac:
https://www.quantamagazine.org/physicists-measure-the-magic-fine-structure-constant-20201202/
"The fine-structure constant [...] has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”"
I've worked things out in Jupyter notebook and generated a PDF version as well:
https://bit.ly/ElementalCharge
https://bit.ly/ElementalChargePDF
The results are quite surprising, to say the least...
Earlier work in progress:
https://bit.ly/RevisionOfMaxwellsEquations
The physical origin of the fine structure constant is as one of the coupling constants of the electroweak part of the Standard Model. What isn’t, yet, known is what is its complete flow under the renormalization group.
Already within quantum electrodynamics it’s possible to understand that the fundamental property of the fine structure constant is that it isn’t, in fact, a constant, but has a very specific dependence on the scale, which is, indeed, what encapsulates the physics of electrodynamics. What is true is that this "renormalization group flow" is, still, not fully known.
One reason such precision in measuring the fine structure constant in atomic physics experiments is useful is that it is possible to perform calculations of processes in the Standard Model, where such precision matters in probing the existence of new particles.
I think there was extra conductivity in my small battery experiments, but the next day it didn't work. The first time, it was not exactly as expected, in two ways. Therefore, maybe not superconductivity.
Stam Nicolis
Did you notice that in my new definition for elemental charge, there are two terms that both result in the exact same value?
It’s possible to come up with a huge number of expressions of this kind; they still don’t have anything to do with the origin of the fine structure constant, because they contain the electric charge as input. That means that they are just rearrangements of the fine structure constant, besides missing out on the fact that it flows.
Once more: What’s needed is the construction of the renormalization group flow of the fine structure constant, beyond perturbation theory. But that’s a hard problem.
"Already within quantum electrodynamics it’s possible to understand that the fundamental property of the fine structure constant is that it isn’t, in fact, a constant, but has a very specific dependence on the scale"
Stam Nicolis
Yep, that seems to be correct. The epsilon in the above equation is the ratio of the hollow-core radius a to the big toroidal radius R of the superfluid ring vortex.
Refer to eq. 1.2 in:
Article Dynamics of thin vortex rings
This reads:
v = (Γ/4πR) (ln(8R/a) − 1/2),
or:
v = (Γ/4πR) (ln(8/ε) − 1/2).
So ε varies with velocity and since that has turned up in my new definition for α, you are correct that it's not really a constant.
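For what it's worth, the equivalence of the two forms is easy to check numerically. A minimal Python sketch, with purely illustrative values for Γ, R and a (assumed here, not taken from the thread; Γ = h/m_e is the electron's quantum of circulation):

import math

h = 6.62607015e-34        # Planck constant [J s]
m_e = 9.1093837015e-31    # electron mass [kg]
Gamma = h / m_e           # quantum of circulation [m^2/s]
R = 1.0e-9                # toroidal radius [m] (assumed)
a = 1.0e-11               # hollow-core radius [m] (assumed)
eps = a / R               # the dimensionless ratio from the text

v1 = Gamma / (4 * math.pi * R) * (math.log(8 * R / a) - 0.5)
v2 = Gamma / (4 * math.pi * R) * (math.log(8 / eps) - 0.5)
print(v1, v2, math.isclose(v1, v2))  # the two forms agree, since 8R/a == 8/eps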
"It’s possible to come up with a huge number of expressions of this kind, they, still, don’t have anything to do with the origin of the fine structure constant, because they contain the electric charge as input."
Stam Nicolis
Nope, this is the equation to define electric charge (see screenshot posted above):
e*e = ρ (h/m) π R m c / (4 π^3 √3 R (1 + (3/4) ε^2)^2)
with:
ρ the mass density of the medium in [kg/m^3]
h Planck's constant in [kg-m^2/s]
m elementary mass in [kg]
c speed of light in [m/s]
R radius of elemental vortex ring in [m]
and ε (epsilon) the dimensionless ratio a/R, with a the radius of the hollow core of the elemental vortex ring.
In other words: only meters, seconds and kilograms as input, with electric charge squared coming out in [kg/s] * [kg/s].
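A quick sketch of that dimensional bookkeeping (tracking only the (kg, m, s) exponents of each factor; no numeric values, since ρ is not given here):

def mul(*dims):
    # multiply quantities by adding their (kg, m, s) exponents
    return tuple(sum(d[i] for d in dims) for i in range(3))

rho      = (1, -3, 0)   # mass density [kg/m^3]
h_over_m = (0, 2, -1)   # h/m [m^2/s]
R        = (0, 1, 0)    # radius [m]
m        = (1, 0, 0)    # elementary mass [kg]
c        = (0, 1, -1)   # speed of light [m/s]

numerator = mul(rho, h_over_m, R, m, c)
e_squared = mul(numerator, tuple(-x for x in R))  # denominator carries one R
print(e_squared)   # (2, 0, -2) -> [kg/s] * [kg/s], as stated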
Once more: by dimensional analysis, the fine structure constant is the only dimensionless combination of electric charge (as defined by Coulomb’s law), the dielectric constant, Planck’s constant and the speed of light.
So any claim of deriving it from something else needs to say which of the above constants is derived from something else. (Not to mention why is the dimensionless coefficient that could multiply it, equal to 1?)
The constructions involving spheres and so on can’t resolve this problem, since they don’t provide the insight into the origin of electric charge that they claim to.
What would change in the expression for e^2, given above, if any of the numerical constants were to take other values? What’s the physical principle that determines those values?
Arend Lammertink
I'll study your notebook paper. It doesn't print well on my equipment (the lines are too long). Do you have another version (PDF preferred)?
I really like your viscosity of the ether (my plenum) medium. I note that the equation that fits the observed value (it was developed first from measurement of the spectral lines) has two types of components: one is the medium characteristics, the other is the ratio of energy to wave frequency. In the STOE, h (Planck's constant) is the ratio of energy to the fundamental matter particle. So, the FSC is the amount of plenum in the fundamental particle. That is why it is found so often.
Motion in a fluid affects the mass of an object, not its charge. The same thing occurs in classical electrodynamics, where the mass of an electric charge is "renormalized" by electromagnetic radiation, not the charge.
And there’s a reason for it: At the level of the classical equations of motion, the value of the electric charge can be absorbed in a redefinition of the field, A → qA, and doesn’t appear in the equations of motion.
Within classical electrodynamics the fine structure constant just doesn’t make sense, because it contains Planck’s constant.
The fine structure constant only makes sense within quantum electrodynamics-where it’s not a constant, but depends on the scale.
So its physical origin is with quantum electrodynamics and its value at any scale can be deduced from its value at another, by solving the renormalization group flow equations.
And, indeed, having measured the fine structure constant to be equal to around 1/137 at atomic energy scales, it's possible to predict its value at the mass of the Z, i.e. at 90 GeV, namely about 1/128, in agreement with the measurements at LEP...
What is not possible is to calculate the fine structure constant at any given scale, without knowing its value at another scale. One reason for that is that the flow equations are not known in full detail.
The derivation of the fine structure constant α ≈ 1/137 in the thread’s preface is evidently questionable; however, the claim that
“…So its physical origin is with quantum electrodynamics and its value at any scale can be deduced from its value at another, by solving the renormalization group flow equations…”
looks rather questionable as well. Really, in quantum electrodynamics α is an ad hoc introduced mystic number that in some mystic way fits the theory to experiment; and so it simply is what it is in any “renormalization group equations”, including “flow” ones, without any additional derivation.
The real physical sense of the fine structure constant is explained only in the Shevchenko-Tokarevsky informational physical model
https://www.researchgate.net/publication/354418793_The_Informational_Conception_and_the_Base_of_Physics,
- where the physical sense of the really transcendent/mystic (in mainstream physics) “particles” is explained: they are specific disturbances in Matter’s ultimate base, the [5]4D dense lattice of [5]4D binary reversible fundamental logical elements [FLE]; close-loop algorithms whose “hardware” is the FLEs, and which constantly, always, run,
- so constantly, always, moving in the lattice with 4D speeds of light,
- having frequencies ω, momenta P = mc, energies E = Pc = ћω,
- the “space” length of a particle’s algorithm being its Compton length, λ = ћ/mc, and the “logical” length of a particle’s algorithm N = λ/lP, where lP is the Planck length;
- and, on application of the model to the concrete fundamental Nature Electric force, in the 2007 initial model of Gravity and Electric Forces, see the corresponding sections in https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces ,
- where, among other things, the physical sense of what “charge” is, is explained: the fine structure constant is the relative part of charged particles’ logical algorithms’ lengths, i.e. the logical length of the “charged FLEs” is n_charge = α^(1/2) N.
Including, in this case, there is nothing surprising in that
“…And, indeed, having measured the fine structure constant to be equal to around 1/137 at atomic energy scales, it's possible to predict its value at the mass of the Z, i.e. at 90 GeV, namely about 1/128, in agreement with the measurements at LEP…”
When in, say, the proton algorithm the charged part of the FLEs can seemingly be actualized without problems, but if a particle has a mass of 90 GeV, its algorithm’s length is ~90 times shorter, and so in this case some problems appear. Besides, such particles’ algorithms have some defects, and so they break in extremely short times, in a few ticks, when the particle decays.
Cheers
John Hodge Made a PDF version available at: https://bit.ly/ElementalChargePDF
"Once more: by dimensional analysis the fine structure constant is the only combination of electric charge-as defined by Coulomb’s law-the dielectric constant, Planck’s constant and the speed of light, that’s dimensionless."
Stam Nicolis
As you can read, I started with the definition for alpha and re-arranged to a definition for e*e.
Then I entered an approximation for alpha of 1/(8 pi^2 sqrt(3)), which is dimensionless.
Then I multiplied by R/R, m/m and pi/pi, which is also dimensionless, since both the numerator and the denominator are multiplied by the same value and dimension. And both R and m are calculated from the 4 defined constants exactly as shown in the Python blocks.
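As a side note, it is plain arithmetic to check how close that starting approximation is to the measured value (CODATA 2018 used here for comparison; this check is mine, not part of the notebook):

import math

alpha_measured = 7.2973525693e-3                 # CODATA 2018
alpha_approx = 1 / (8 * math.pi**2 * math.sqrt(3))

print(alpha_approx)                              # ~7.3122e-3, i.e. ~1/136.76
print(alpha_approx / alpha_measured - 1)         # ~+2.0e-3: about 0.2 % high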
"What would change in the expression for e^2, given above, if any of the numerical constants were to take other values? What’s the physical principle that determines those values?"
Well, one would have to change one of the four fundamental constants in order to change the other values, since these are calculated from the four fundamental ones.
I've added a python script with only the code blocks, so you don't need to install Jupyter notebook to repeat the calculations and see what happens when you change something:
https://github.com/l4m4re/notebooks/blob/main/elemental_charge_and_the_fine_structure_constant.py
When I change the value of e in the fundamental constants section to 1.4e-19, we obtain:
Calculate R
R: 5.6e-26
angular momentum: 7.840000000000001e-45
Linear momentum 7.84e-45
Calculate the two terms in the definition of e*e
angular or magnetic: 1.4e-19
linear or dielectric: 1.8372872563575944e-19
Calculate epsilon and a
eps: 0.4405717514767858
a: 2.4672018082700004e-26
Calculate corrected definition of elemental charge
angular or magnetic charge: 1.4e-19
linear or dielectric charge: 1.4e-19
elemental charge: 1.4e-19
--
So, even when we change the value of charge, the radius of the vortex ring is calculated such that the term rho*(h/m)*pi*R (or epsilon_0*(h/m)*pi*R if you like) yields the exact same result.
Now R is calculated by R = e/(rho*k*pi), so what the calculation for the angular or magnetic component does is calculate:
e = rho*(h/m)*pi*e/(rho*k*pi)
e = (h/m)*e/k = e,
so that one will always result in e.
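That cancellation can also be checked symbolically. A minimal sympy sketch, assuming k = h/m as stated earlier in the thread (all symbols are placeholders):

import sympy as sp

e, rho, h, m = sp.symbols('e rho h m', positive=True)
k = h / m                          # quantum circulation constant

R = e / (rho * k * sp.pi)          # R as computed in the script
magnetic_term = rho * (h / m) * sp.pi * R

print(sp.simplify(magnetic_term - e))   # 0: this term always returns e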
The other term, the linear or dielectric component, however, is a lot more complicated and requires a correction by the term we refer to as alpha.
"What’s the physical principle that determines those values?"
Stam Nicolis
The physical principle is that the medium behaves like a superfluid wherein quantized ring vortexes exist with a circulation equal to k, the quantum circulation constant. And the smallest possible and therefore most elemental ring vortex that can exist in the medium has a mass m = h/k of approx. 7.37e-51 [kg].
A rather interesting detail is to compute the Compton wavelength for a particle with that mass:
>>> lbd = h/(m*c)
>>> c/lbd
1.0000000000000002
>>>
Thus, the tiniest of the tiniest of particles, for which I calculate a radius of about 6.4e-26 [m], is supposed to have a wavelength of just about 300,000 km...
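For completeness, the same numbers as a self-contained script, using only the exact SI values of h and c together with m = h/k and k = c^2 as defined above:

h = 6.62607015e-34     # Planck constant [J s] (exact)
c = 299792458.0        # speed of light [m/s] (exact)

k = c**2               # quantum circulation constant [m^2/s]
m = h / k              # elemental mass: ~7.3725e-51 kg
lbd = h / (m * c)      # Compton wavelength: numerically equal to c

print(m)               # 7.372497...e-51
print(lbd / 1e3)       # ~299792.458 km, i.e. just about 300,000 km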
Superfluidity doesn’t have anything to do with electric charge, however. So it doesn’t have anything to do with the fine structure constant. It's not a consequence of electromagnetic interactions, but of Bose-Einstein statistics. The vortices in a superfluid don't carry any gauge charge, so they can't be identified with charged particles. That's why they can't describe electromagnetic interactions.
The idea that particles were vortices in a fluid, incidentally, was proposed by Lord Kelvin in the 19th century: https://en.wikipedia.org/wiki/Vortex_theory_of_the_atom
The electron as a membrane was proposed by Dirac, in 1962: Article An Extensible Model of the Electron
Unfortunately it doesn't work, for all sorts of reasons, some of which are mentioned in the paper.
Once upon a time the fine structure constant was thought to be a constant and there have been many attempts to associate some meaning to its numerical value. This is just numerology. What really mattered was that its value was sufficiently small that perturbation theory was reliable. There's nothing special about 1/137... as such.
Determining its numerical value experimentally is a background check for calibration purposes.
To derive it from something else, means deriving the electromagnetic interaction from something else. Superfluidity isn't the way, however, because vortices in a superfluid don't behave like electrically charged particles-people did get excited by that a long time ago, too.
And even quite recently, too: https://www.inc.uam.es/wp-content/uploads/Volovik.pdf
(Though the relation with electric charge, as mentioned by Volovik, goes back to Landau in 1955, that has its own problems...)
Unfortunately, significant issues are being glossed over in that presentation...
The confusion is that it isn't the numerical value of the fine structure constant that is "a fundamental unsolved problem of physics", as mentioned by Dirac and Feynman, but to deduce the electromagnetic interaction itself (described by its dimensionless coupling, which can be identified with the fine structure constant at atomic energy scales) from a unified quantum theory. (Kaluza, Klein and Einstein tried to unify electromagnetism with gravity, within a classical field theory; it didn't work.) It was that unified theory that Dirac and Feynman were after, not numerology.
Within the Standard Model, the electroweak theory, still, has two coupling constants, so it's not, in fact, a unified theory. (The electroweak theory has gauge group SU(2)_L x U(1)_Y, which gets broken to U(1)_EM, where U(1)_Y is the group generated by the "weak hypercharge".) There have been attempts at constructing "Grand Unified Theories"; these haven't led to a conclusive result. There are too many possible theories, and the experiments that could distinguish between them take too long.
One attempt, in another direction, was here: https://inspirehep.net/literature/89538 It's still unfinished business.
"Superfluidity doesn’t have anything to do with..."
Stam Nicolis
https://en.wikipedia.org/wiki/Macroscopic_quantum_phenomena#Superfluidity
Superfluidity is a quantum phenomenon on a macroscopic scale. Understand superfluidity on the macroscopic scale and you understand the medium on the quantum scale.
Same equations, different speed, density, viscosity, etc.
Really the same equations on a different scale.
And gravity isn't that difficult, either. Feynman actually gave a good description of the Biefeld-Brown effect aka anti-gravity, even though he did not explicitly say so:
https://www.feynmanlectures.caltech.edu/II_10.html#Ch10-F8
"As illustrated in Fig. 10–8, a dielectric is always drawn from a region of weak field toward a region of stronger field. In fact, one can prove that for small objects the force is proportional to the gradient of the square of the electric field. Why does it depend on the square of the field? Because the induced polarization charges are proportional to the fields, and for given charges the forces are proportional to the field. However, as we have just indicated, there will be a net force only if the square of the field is changing from point to point. So the force is proportional to the gradient of the square of the field."
After all, the medium is a dielectric, since it has a non-zero permittivity.
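A tiny sympy illustration of Feynman's point, for an assumed spherically symmetric field magnitude E ~ q/r^2 (a point charge; purely illustrative, not from the lecture):

import sympy as sp

r, q = sp.symbols('r q', positive=True)
E = q / r**2                   # point-charge field magnitude

grad_E2 = sp.diff(E**2, r)     # radial part of grad(E^2)
print(sp.simplify(grad_E2))    # -4*q**2/r**5: nonzero and directed inward,
                               # i.e. toward the stronger-field region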
Not every quantum effect is related to electric charge. Superfluidity is a quantum effect that occurs without electric charge being involved at all. Electromagnetism is completely irrelevant for understanding it. Nor do vortices in a superfluid interact like electric charges. So there's nothing there that can provide any insight into how electromagnetism could emerge from something else; which is what is being discussed.
Scientific writing isn't Scripture, so there's no point in quoting texts as if it were so. The meaning matters, not the words. What Feynman wrote in his lectures doesn't have anything to do with the issue discussed, since he doesn't discuss there whether the electromagnetic interaction comes from a unified theory; one reason being that, when he gave those lectures, no such theory was even imagined (beyond Glashow's 1961 paper).
Nor does it have anything to do with either gravity or anti-gravity; not all attractive forces are gravitational, nor do gravitational forces, in general, repel. It's a consequence of Maxwell's equations that there exist two kinds of charges, that (a) are of opposite sign and (b) combine additively. The charges that interact with the gravitational field are masses, in the Newtonian approximation, and the energy-momentum tensor beyond it; gravity is only attractive.
Repulsive forces, associated with gravity, appear in supergravity, where, along with the graviton, with spin 2, that mediates the usual attractive force, there's also a spin-1 particle, that mediates a repulsive force. This force doesn't have anything to do with electromagnetism, because this spin-1 particle, in supergravity, isn't the photon. To describe electrically charged particles in supergravity requires coupling them to supergravity. This isn't how to get the electromagnetic interaction; to get it, it's necessary to go beyond supergravity, as shown by Scherk and Schwarz.
To explain the origin of the electromagnetic interaction means providing a theory that is unified at high energies and, at low energies, contains a massless spin-1 particle, that couples to matter and that can be identified with the photon. The coupling constant can then be identified with the fine structure constant at atomic energy scales.
The statement that the vacuum has a dielectric constant means that the field theory contains a field that creates massless spin-1 particles, that can be identified as photons, from their interaction with matter.
Indeed the electroweak sector of the Standard Model does this, even though it’s not a fully unified theory: It does describe how, starting with four spin-1, massless, particles, that are gauge bosons of the group SU(2)_L x U(1)_Y, it is possible to end up with three of them massive and one of them massless and how this latter one can be identified with the photon, that couples to the quarks and leptons with the electric charge of each. That’s how Dirac’s and Feynman’s question was answered.
Stam Nicolis
Don't have much time now, but let's put forth a simple question.
The electric field is defined as the gradient of the scalar potential Phi.
Now vector calculus says that the curl of the gradient of any continuously twice-differentiable scalar field is always the zero vector.
Yet Maxwell writes:
curl(E) = -dB/dt,
which is obviously something other than zero for time-varying fields.
Since they can't be both correct, the question becomes:
Which one of the two is incorrect?
Maxwell or vector calculus?
The static electric field is the gradient (up to a sign) of the scalar potential. The time-dependent electric field isn't.
There are four equations. div B =0 implies that B = curl A, therefore E = -grad Φ - dA/dt...
There's also curl B = J + dE/dt
The sources must satisfy the condition div J + dρ/dt = 0, i.e. that charge is conserved locally.
All four equations are equivalent to Box Φ = ρ, Box A = J upon imposing the Lorenz gauge, div A + dΦ/dt = 0, which is required to ensure uniqueness of the solution, since gauge invariance implies that, if (Φ, A) is a solution, so is its gauge transform.
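For reference, both statements can be verified symbolically. A minimal sympy sketch (all function names are placeholders): curl(grad Φ) vanishes identically, while E = -grad Φ - dA/dt together with B = curl A satisfies curl E = -dB/dt for arbitrary Φ and A:

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (x, y, z)
Phi = sp.Function('Phi')(t, x, y, z)
A = sp.Matrix([sp.Function(n)(t, x, y, z) for n in ('Ax', 'Ay', 'Az')])

def grad(f):
    return sp.Matrix([sp.diff(f, xi) for xi in X])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

E = -grad(Phi) - sp.diff(A, t)
B = curl(A)
print(sp.simplify(curl(grad(Phi))))          # zero vector
print(sp.simplify(curl(E) + sp.diff(B, t)))  # zero vector: Faraday holds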
If there's such a confusion about the properties of Maxwell's equations, discussing the origin of the fine structure constant is pointless.
The four Maxwell equations are completely determined by global Lorentz invariance, gauge invariance and the requirement that they don't contain derivatives of order higher than 2.
I had coils of several turns, as it turned out semi-insulated, with a diameter of 1 cm. Upon applying orthogonal deformations of up to ~2 mm (which would curl more in the other direction), they started to show, at first, the same oscillations as applied, and then oscillations by themselves, together with DC. The latter on a scale of what size?
It also charged the non-rechargeable battery somewhat. I'll write documentation elsewhere in due course. Could adjacent temperature gradients be favourable to trigger +/- spin, and if so, in what material?
"If there's such a confusion about the properties of Maxwell's equations, discussing the origin of the fine structure constant is pointless."
Stam Nicolis
The root of the problem is that there is no unique first spatial derivative in three dimensions: we have div, grad and curl.
However, there is a unique second spatial derivative in 3D, the vector Laplacian, the generalization of d^2/dx^2. And together with the quantum circulation constant k, we can define the time derivative of any given vector field F in 3D as follows:
d[F]/dt = - k nabla^2 [F].
This is the proper way to do it and it leads to uniquely defined potential fields, thus referring the whole "gauge fixing" idea to where it belongs: the trash can.
So, because in Maxwell's equations the electric field is coupled to the magnetic field incorrectly (by forcing the curl-free half of the Helmholtz decomposition defined by the vector Laplacian to have a curl, and by adding a curl-free diverging component, dE/dt, to the divergence-free half), the whole thing got messed up.
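For reference, the vector-calculus identity behind the "two halves" invoked here, nabla^2 F = grad(div F) - curl(curl F), can be checked symbolically for an arbitrary field (a sketch; the identity itself is uncontroversial, it is its alleged misuse in Maxwell's equations that is the disputed claim):

import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
F = sp.Matrix([sp.Function(n)(x, y, z) for n in ('Fx', 'Fy', 'Fz')])

def grad(f):
    return sp.Matrix([sp.diff(f, xi) for xi in X])

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

vec_laplacian = sp.Matrix([sum(sp.diff(F[i], xi, 2) for xi in X)
                           for i in range(3)])
print(sp.simplify(vec_laplacian - (grad(div(F)) - curl(curl(F)))))  # zero vector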
So, what happens when you evaluate both halves of the Helmholtz decomposition correctly for an elemental vortex ring in a superfluid aether model?
Yep, you find the correct definition for elementary charge, with both the angular magnetic and linear electric components completely in balance, such that the old mismatch between the coupling of the two components, quantified by the fine structure constant, vanishes. The fine structure constant is then no longer just a number needed to straighten things out, but actually follows from the properties of the elemental ring vortex, characterized by charge e, mass m, radius R and hollow-core radius a.
Everything that follows "Root of the problem" is either wrong (when it makes sense) or meaningless.
There's no inconsistency in Maxwell's equations.
Nor is the presence of first derivatives in them a problem. It's just that, written in the "conventional" way, neither their symmetries nor the way to resolve the constraints they express are obvious. But neither issue is a problem, at least since Heaviside, though the way of making Lorentz invariance manifest took a bit longer. But that's history.
It's been known for more than a century that the equations for the potentials, that solve all the constraints and admit a unique solution are
Box A^μ = J^μ
(which is consistent, since Box ∂_μA^μ = ∂_μJ^μ = 0: the RHS vanishes because the current is conserved and the LHS vanishes due to the Lorenz gauge-fixing condition)
where Box is the d'Alembertian operator, A^μ = (Φ, A) and J^μ = (ρ, J),
from which the electric and magnetic fields can be obtained as
B = curl A and E = -grad Φ - dA/dt
There's no conceptual issue in using the information provided by the RHS, to find the LHS.
The equations for the potentials, of course, require initial and boundary conditions.
The electric charge is the volume integral of the density and can be shown to be invariant under Lorentz transformations.
This is now standard material in any course on electrodynamics. If this isn't understood, it just doesn't make sense to pretend to discuss the fine structure constant, which, once more, has as its origin the electroweak sector of the Standard Model: it's the dimensionless coupling constant of the photon (the spin-1 vector boson that remains massless) to the quarks and the leptons.
The electroweak theory is the unification that Feynman and Dirac were referring to. It describes the scale dependence of the fine structure ``constant''.
Arend Lammertink
Thanks for the .pdf.
In the Scalar Theory of Everything (STOE), e (electric charge) is in the medium, as you suggest. A positive charge is a cyclone-type vortex. The two types attract each other and like vortices repel.
The fine structure constant (FSC) was initially a measurement of wavelength spectra, which is energy. The denominator has Planck's constant (h), which provides 1 unit of energy. So, the FSC is the amount of the medium (energy) in 1 unit of (matter) energy.
The energy derivation of the FSC involves a cycle, hence the 2 pi. But "cycle" is not considered a dimension. I have suggested it is a unit of measure.
Arend Lammertink
Given this discussion, you may be interested in a paradigm shift model of the universe:
Scalar Theory of Everything (STOE) unites the big, the small, and the four forces (GUT) by extending Newton's model
http://intellectualarchive.com/?link=item&id=2414
https://www.youtube.com/watch?v=0YlJGdTvuTU
Experiments show Maxwell's Equations need correcting:
Article Two different types of magnetic field
Article Magnetostatics relation to gravity with experiment that reje...
Article Another experiment rejects Ampere’s Law and supports the STOE model
The STOE suggests the elementary particle is a magnet. Therefore, the strong and weak forces are merely ad hoc creations to brace an old model. So, the structure of the particles, when moving, produces charge (vortices) in the medium. One other of Maxwell's Equations that is wrong: grad B = gravity (not 0) (paper in preparation),
derived from
Article Magnetic Field Evolves to Gravity Field. Part:1 Repulsion
to part 5.
"There's no inconsistency in Maxwell's equations."
Yes, there is, and it has experimental consequences. Maxwell's equations only predict the existence of one type of EM wave: the Hertzian "transverse" wave. In actual fact, there are three types of waves, of which two are electromagnetic and one is Tesla's superluminal longitudinal wave, which is pure dielectric and has no magnetic component.
So, what we have is:
1) The near field, a non-radiating EM surface wave that is an actual transverse wave propagating on the boundary of two different media;
2) The far field, the Hertzian "transverse" wave with that mysterious "wave-particle duality" and;
3) Tesla's superluminal longitudinal mode.
Steffen Kühn has done some excellent work on superluminal signal transmission:
Article General Analytic Solution of the Telegrapher’s Equations and...
Preprint Electronic data transmission at three times the speed of lig...
Preprint Experimental detection of superluminal far-field radio waves...
And the near field surface wave has been applied by Glenn Elmore for single conductor transmission lines:
Article Introduction to the Propagating Wave on a Single Conductor
This mode is very wide-band and very low-loss, because it is non-radiating and carries zero "real" current.
"Nor is the presence of first derivatives in them a problem. It's just that, written in the ``conventional'' way, their symmetries aren't obvious and how to resolve the constraints they express, either."
And this is the fundamental point where we disagree. From my perspective, their symmetries should obviously follow from application of the vector Laplacian, since that defines what I consider a fundamental symmetry between linear (electric) and angular (magnetic) motion, or dynamics if you like, in the form of the well known Helmholtz decomposition.
And it is that fundamental symmetry that is broken in Maxwell's equations because of the mixing of the circuit level Faraday law with the fundamental model of the medium by means of the introduction of the terms already mentioned.
And it is this broken symmetry that led to an unexplained error in the strength of the predicted field which was straightened out by the introduction of the fine structure constant.
And it is because of the discovery of the quantum circulation constant k, with a value equal to c^2 but a unit of measurement of [m^2/s], that we are able to describe the fundamental dynamics of space-time in just two equations that are at a higher abstraction level than Maxwell's equations, as well as Navier-Stokes for that matter:
[a] = - k nabla^2 [v] and
[j] = - k nabla^2 [a].
with [j] the jerk, the time derivative of acceleration [a].
Or:
[F] = - k rho nabla^2 [v] and
[Y] = - k rho nabla^2 [a],
with [Y] the yank density, the time derivative of force density, in [N/(m^3·s)].
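A quick unit check on those four equations, tracking (kg, m, s) exponents only:

def mul(*dims):
    return tuple(sum(d[i] for d in dims) for i in range(3))

k   = (0, 2, -1)    # [m^2/s]
lap = (0, -2, 0)    # nabla^2 contributes [1/m^2]
v   = (0, 1, -1)    # velocity [m/s]
a   = (0, 1, -2)    # acceleration [m/s^2]
rho = (1, -3, 0)    # mass density [kg/m^3]

print(mul(k, lap, v))        # (0, 1, -2): acceleration
print(mul(k, lap, a))        # (0, 1, -3): jerk [m/s^3]
print(mul(k, rho, lap, v))   # (1, -2, -2): force density [N/m^3]
print(mul(k, rho, lap, a))   # (1, -2, -3): yank density [N/(m^3 s)]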
John Hodge
I agree with you that the strong and weak nuclear forces are questionable, to say the least, and can be fully accounted for by the electromagnetic forces. In that regard, I always refer to the excellent experimental work of David LaPoint, who shows this in his laboratory:
https://www.youtube.com/watch?v=siMFfNhn6dk
I don't necessarily agree with some of his explanations and theories, but his experiments are well worth the time watching.
Arend Lammertink
Thanks for the link.
The experiments are with plasma. The explanation of diffraction is all animated - no experiment.
If I understand: each of his "bowls" is an assembly of magnets with their axes perpendicular to the surface of the bowl, one with the N pole inward, the other with the S pole inward. So, it's not fundamental.
In part 3, he describes edge diffraction incorrectly. I suggest the actual experiment of edge diffraction rejects his model. That is, a single edge produces the diffraction pattern, he uses 2 edges to get diffraction.
While creating the STOE model of diffraction, I created a toy computer simulation of the model of photons. The next step was to find an experiment which could support the STOE model while rejecting all other models. This is:
Interference Experiment with a Transparent Mask Rejects Wave Models of Light
Optics and Photonics Journal, Vol. 9, No. 6, June 2019
https://www.scirp.org/journal/paperinformation.aspx?paperid=93056
https://doi.org/10.4236/opj.2019.96008
https://www.youtube.com/watch?v=qFDB-K_sSjU
https://www.youtube.com/watch?v=A07bogzzMEI
The following is the initial program:
http://www.intellectualarchive.com/?link=find#detail
There is an issue with the concept: what keeps the hod magnets apart? A second issue is that the count of hods in a photon must be in the thousands. Yet, the interference pattern suggests a far smaller number. So the suggestion in his part 3 is that each hod interferes with others in groups.
Arend Lammertink
This is additional to the "bowls": an effect of flowing plasma FIRST, then the bowls form.
https://www.youtube.com/watch?v=3w8JTngaQLo
Took me a while to find it.
John Hodge : I particularly like David's various demonstrations with steel balls, especially the vibrations from adding more steel balls and the self-organizing behavior observed on a macroscopic scale.
On a sidenote: if one likes to see how such steel balls and their primarily hexagonal magnetic auto-arrangements evolve into a direct fractal geometry, deriving constants of nature basically out of nowhere, guided by the simple multiplicative integer geometric rules and constraints of iSpace theory, please just have a seat with some popcorn and watch my recent introduction to iSpace theory (90 min., for near-absolute newbies). It is meant both for those new to iSpace theory and for physicists without any knowledge of the deep inner mechanics and system-immanent dependencies of the different constants of nature in the unit systems defined by SI and others in the past, evolving to the iSpace-SI and iSpace-IQ unit systems, which are able to explain all these geometrical mechanisms and, more so, the hidden inter-constant dependencies and relations from first principles:
https://www.youtube.com/watch?v=EhGeANkwUME
Scientists who still prefer to take the time and actually *read something* before they comment on interesting subtle questions (like this one) are kindly invited to start reading the following documents of mine on RG, *especially please* before stating all this would be "completely impossible":
https://www.researchgate.net/project/iSpace-Exact-quantum-geometric-value-of-primary-constants-of-nature
https://www.researchgate.net/publication/271854188_iSpace_-_NKS_in_a_Spiral_Honeycomb_Mosaic
https://www.researchgate.net/publication/271854024_iSpace_-_Deriving_G_from_a_e_R_m0_and_Quantum_of_Gravitational_Force_Fg
https://www.researchgate.net/publication/271854031_iSpace_-_Exact_Symbolic_Equations_for_Important_Physical_Constants_-_SI_Table
https://www.researchgate.net/publication/361800687_iSpace_-_Quantization_of_Time_in_iSpace-IQ_Unit-System_by_16961_iSpace-Second
See my paper on ResearchGate: "Sommerfeld constant" as a characteristic of the gravitational field (in Russian; on the connection with the movement of the ether as the cause of gravity).
Vladimir A. Lebedev
I presume you refer to this paper?
http://scicom.ru/files/journals/piv/volume36/issue2/piv_vol36_issue2_18.pdf
Article "Постоянная Зоммерфельда" как характеристмка гравитационного поля
While I can't read the Russian, I can read the abstract in the pdf and share my views on the gravitational force.
As I've already commented a few posts above, I consider the existence of the weak and strong nuclear forces to be very questionable, because from the work of David LaPoint it seems obvious these can be fully accounted for by electromagnetic forces.
So, if we want to come to a TOE, the only obstacle left is the gravitational force: the EM forces are now fully explained, since we can now describe the fundamental dynamics of space-time itself in just two equations, because of the discovery of the quantum circulation constant. See my comments further up in the discussion.
I have the radical view that there is only one medium, the superfluid aether, and therefore only one fundamental force of nature, which would be what we consider the electromagnetic, and therefore that gravity must be an effect caused by the electromagnetic fields.
My current hypothesis is that the gravitational force is proportional to the square of the electric field, given the following description of the Biefeld-Brown effect aka anti-gravity by Feynman, even though he does not explicitly say this is the effect he is describing here:
https://www.feynmanlectures.caltech.edu/II_10.html#Ch10-F8
"As illustrated in Fig. 10–8, a dielectric is always drawn from a region of weak field toward a region of stronger field. In fact, one can prove that for small objects the force is proportional to the gradient of the square of the electric field. Why does it depend on the square of the field? Because the induced polarization charges are proportional to the fields, and for given charges the forces are proportional to the field. However, as we have just indicated, there will be a net force only if the square of the field is changing from point to point. So the force is proportional to the gradient of the square of the field. The constant of proportionality involves, among other things, the dielectric constant of the object, and it also depends upon the size and shape of the object."
After all, the aether is a dielectric, since it has a non-zero permittivity.
Paul Stowe did some work on this idea in this paper (eq. 35-44):
https://vixra.org/pdf/1310.0237v1.pdf
The good thing about Stowe is that he often has great ideas, which inspired much of my work; his equations are often hard to follow and sometimes formally incorrect, but one eventually is able to extract the ideas behind the equations and build upon those. I think it's fair to say much of my work comes down to the debugging, clarification and refinement of Stowe's foundation. And actually, it was Paul Stowe who pointed me to the above quote from Feynman, without which it's hard to understand what he's trying to say with the mentioned equations.
Arend Lammertink
I take the magnetic field rather than the electric field as primary. It makes it easier to understand the structure of the atomic nucleus. Further, gravity is a result of asymmetry in the strengths of the poles: one is stronger than the other, so the excess evolves to gravity:
Article Magnetic Field Evolves to Gravity Field. Part:1 Repulsion
I'll read Stowe's paper; sounds interesting. Just a glance suggests there is no attempt to explain problem observations, nor ad hoc additions to current models. This is a prime motivation for me in developing a paradigm shift. I started (20+ years ago) with the astronomical/cosmological problem observations and evolved into the Young's experiment simulation, including an experiment that rejects the wave model of light (particles).
One issue with field models in general is: "What is the mechanism of exerting a gradient of the field onto particles?" Why this is a problem: the variation of an electric (or other single-source) field obeys the spherical principle (inverse distance density). The gradient then is inverse distance squared ON A SURFACE. So, how does a 3D object react as if the cross section of the object is the important part? This is seen in the metallicity (Z, upper case) vs. radius of galaxies.
Yes, watching the steel balls become magnetic and then form rings begs an explanation: what is it? As far as relating it to atomic structure, it is unlikely; the magic numbers (2, 8, 18) for the number of electrons in a completed shell are unexplained.
You have several papers on RG. Which approaches a TOE?
Earlier work-in-progress is now added to the top post:
https://bit.ly/RevisionOfMaxwellsEquations
To complement: A quantum-optics deconstruction of Maxwell Theory of Electromagnetism and the Kinetic Theory
Preprint Quanto-Geometry - Vol II - Chapter 8: A Quantum Optics Decon...
John Hodge
Stowe is definitely interesting, but I found it hard to comprehend without context. Perhaps it would be best to begin with his 1996 work, which I've annotated:
http://tuks.nl/wiki/index.php/Main/StoweFoundationUnificationPhysics
Also, I've collected quite some usenet posts from him, which are helpful in finding the context of what he is talking about:
http://tuks.nl/wiki/index.php/Main/StoweCollectedPosts
I've been studying his work on and off for years and it was only like two weeks ago that I understood how he got the value of e, the "This term e, becomes ±2P/r in a torroidal topology" in his 1996 article.
Once I understood that, I was able to come to the explanation of the origin of the fine structure constant, the subject of this discussion.
Let me share part of the email conversation with him about this:
"I think we need to clarify terminology, there is, to me, the lattice level (supersolid/vortex sponge) and the underlying 'fundamental level if the basic media (which is a basic kinetic media)."
I got that.
The problem is that the idea of a basic medium consisting of some kind of quanta bumping into one another is inadequate for describing the fundamental level of the aether c.q. mass-space-time, because it reduces the complexity of 3D motion governed by the vector Laplace operator to essentially linear motion c.q. linear momentum, at least in the way you do it. I guess it would be possible to give the quanta a "spin" as well, but even then you are still discretizing the medium itself, while we now know that it is the vector Laplace operator that governs the dynamics of the medium itself, and since that is a differential operator, the medium itself should be considered a continuum.
However, because circulation is quantized by the quantum circulation constant or kinematic viscosity k, there is some kind of quanta without which no motion is possible. And it appears that these fundamental quanta *must* be vortex rings, because that is the natural way to work out the vector LaPlace operator.
So, what we got is:
a) a fundamental, continuous medium;
b) elemental vortices with a circulation equal to k moving within the medium.
On top of that, there may or may not be a lattice / supersolid / vortex sponge, either filling all space or more like structuring matter on the atomic scale or something.
"I was surprised that L turned out to be 6.48E-08 meters."
And rightly so, I would say.
"My premise was to compute the basic 'action' of the lattice assuming Planck's constant was it and the characteristic divergence was q. The basic kinetic action in a media is the length (L) traveled between interaction multiplied by the momenta (P) of the interactor. Since 'each' interactor with travel L the result is 2 PL... I just solved for P & L."
I think I now understand where you got the "This term e, becomes ±2P/r in a torroidal topology". The 2/r is the surface area of a ring vortex divided by its volume....
There are two problems here. First of all, Planck's constant is about circulation c.q. rotation, given the fact that quantized vortices in superfluids are characterized by a circulation Γ = h/m. Sure, its unit of measurement is the same as for action, but with circulation the length has to do with a line integral around a *closed* loop, while with action the length has to do with the motion of a particle/object along a curve with a certain length.
OTOH: I'm also using the quantum circulation constant, which is equal to h/m, so it does seem to have something to do with compression/decompression as well.
Secondly, divergence has to do with stuff moving in and out of some volume, and thus with compression/decompression of the quanta or ring vortices themselves. You can't just equate that to the movement of (the momentum of) a vortex ring along a certain spacing distance. Also, taking the small radius r equal to the path length l, while there is also the big radius R, seems problematic to me as well.
However, it is an intriguing problem how to properly relate Planck's constant and charge q.
What is interesting is that for calculations with circulations, you don't need a volume. All you have is line integrals and surface areas, while for calculations involving divergence, you do need a volume. This is one of the peculiar things about rotation/gyroscopic motion and longitudinal / translational motion.
What is interesting, is that this is also reflected in the following two ratios:
Γ = h/m, which is in [m^2/s]
and
X = q/ρ, which is in [m^3/s].
What is also interesting is that the momentum, of a vortex ring is given by:
P = ρ Γ π R^2
And this should be equal to m [v].
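As a quick sanity check on the units of that last relation (tracking (kg, m, s) exponents only):

def mul(*dims):
    return tuple(sum(d[i] for d in dims) for i in range(3))

rho   = (1, -3, 0)   # [kg/m^3]
Gamma = (0, 2, -1)   # h/m [m^2/s]
R2    = (0, 2, 0)    # R^2 [m^2]

print(mul(rho, Gamma, R2))  # (1, 1, -1) -> [kg m/s], a momentum, like m*v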
I see I forgot to tag Stam Nicolis in my reply above ("There's no inconsistency..."). Already wondered why there was no response...
"There's no inconsistency in Maxwell's equations."
Yes, there is and it has experimental consequences. Maxwell's equations only predict the existence of one type of EM wave, the Hertzian "transverse" wave. In actual fact, there are three types of waves of which two are electromagnetic and one is Tesla's superluminal longitudinal wave that is pure dielectric and has no magnetic component.
So, what we have is:
1) The near field, a non-radiating EM surface wave that is an actual transverse wave propagating on the boundary of two different media;
2) The far field, the Hertzian "transverse" wave with that mysterious "wave-particle duality" and;
3) Tesla's superluminal longitudinal mode.
Steffen Kühn has done some excellent work on superluminal signal transmission:
Article General Analytic Solution of the Telegrapher’s Equations and...
Preprint Electronic data transmission at three times the speed of lig...
Preprint Experimental detection of superluminal far-field radio waves...
And the near field surface wave has been applied by Glenn Elmore for single conductor transmission lines:
Article Introduction to the Propagating Wave on a Single Conductor
This mode is very wide-band and very low loss, because non radiating and zero "real" current.
"Nor is the presence of first derivatives in them a problem. It's just that, written in the ``conventional'' way, their symmetries aren't obvious and how to resolve the constraints they express, either."
And this is the fundamental point where we disagree. From my perspective, their symmetries should obviously follow from application of the vector Laplacian, since that defines what I consider a fundamental symmetry between linear (electric) and angular (magnetic) motion, or dynamics if you like, in the form of the well known Helmholtz decomposition.
And it is that fundamental symmetry that is broken in Maxwell's equations because of the entanglement of the circuit level Faraday law with the fundamental model of the medium by means of the introduction of the terms already mentioned.
And it is this broken symmetry that led to an unexplained error in the strength of the predicted field which was straightened out by the introduction of the fine structure constant.
And it is because of the discovery of the quantum circulation constant k, with a value equal to c^2 but a unit of measurement in [m^2/s], that we are able to describe the fundamental dynamics of space time in just two equations that are at a higher abstraction level than Maxwell's equations as well as Navier-Stokes for that matter:
[a] = - k nabla^2 [v] and
[j] = - k nabla^2 [a].
with [j] the jerk, the time derivative of acceleration [a].
Or:
[F] = - k rho nabla^2 [v] and
[Y] = - k rho nabla^2 [a],
with Yank density the time derivative of force density in [N/m^3-s].
And eventually it is also because of the discovery of the quantum circulation constant k that we are able to redefine elemental charge in such a way that we can restore the broken symmetry and calculate the fine structure constant accurately.
Nothing in the fundamental physics of nature can be ad hoc; else the finding, useful though it might be, is not fundamental. That is why the foremost pursuit in theoretical physics is the derivation of the fundamental physical constants from first principles. Yes, Dirac had stated that the most fundamental unsolved problem in physics is the theoretical derivation of the fine structure constant. The Solvay generation well understood the overarching importance of the fundamental physical constants in physics. That is what is behind more than half a century in the pursuit of the derivation of the electron gyromagnetic ratio or moment through QFT, a theory of which P. Dirac was one of the architects and a critical advocate at the same time.
Sorry to say, there is nothing in Maxwell Theory of Electromagnetism, despite its long-lived influence in physics, that is fundamental. First, it has not been able to derive the most fundamental quantity in a cosmological view of physics, which is c. Second, to explain c, it had to invent two ad hoc arbitrary constants, epsilon and mu, which many erroneously think are true qualifiers of the vacuum. Pure and legitimate theory has it that they are not, and describe nothing at all about the vacuum. The basic relation c = 1/√(μ₀ε₀) is an artificial and arbitrary relation designed to construct the formalism of the Theory on apparently stringent mathematical footing. This relation does not derive from first principles. Lastly, and to make this short, Maxwell does not tell us why there is in existence no magnetic monopole to accompany the unit of charge, which his theory has taken on itself to unify to one another.
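For the record, the relation as such does hold numerically, whatever one thinks of its theoretical status; a minimal check with CODATA 2018 values:

import math

mu0 = 1.25663706212e-6     # vacuum permeability [N/A^2]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]

print(1 / math.sqrt(mu0 * eps0))   # ~2.99792458e8 m/s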
I discuss these and many other issues at length here:
Preprint Quanto-Geometry - Vol II - Chapter 8: A Quantum Optics Decon...
The services rendered to physics by Maxwell Theory are, for me, strictly at the engineering level, not at the level of Theory, despite the appearance. And of course you probably cannot redress the Theory by building on the erroneous constants of permittivity and permeability of the vacuum that it uses. My take.
Best.
Joseph Jean-Claude
You may want to read my previous post, just above yours.
With the discovery of the quantum circulation constant k, with a value equal to c^2 but a unit of measurement of [m^2/s], we have found a new first principle that defines the coupling between space and time in three dimensions.
Because this matches exactly the dimensionality of the vector Laplacian, which is in [1/m^2], we can actually define the time derivative of any given vector field [F] in three-dimensional space-time as follows:
d[F]/dt = - k nabla^2 [F].
And from there, everything drops into place one by one, including the theoretical derivation of the fine structure constant as shown in the exercise shared in the topic post.
Arend Lammertink - I did read your previous post. And also the content exposed thru the Jupyter notebook. Let me first say that I do welcome findings and contributions by other researchers and I am not bent on criticizing and opposing as a matter of instinct, like so many. I must say however that my personal interest is in verifiable/falsifiable physics and mathematical-physics that straightforwardly leads to or materializes theoretical derivation of the experimental fundamental constants.
First, I do not agree with anything superluminal. Why? Because the invariance and value of c are both a direct result of the 3-dimensional configuration of the physical observable universe. In the Quanto-Geometric framework, I amply show how this first-principle derivation works on stringent mathematical-physical grounds. Direct response to Martin Rees proposal that to derive D = 3 is non-trivial in physics, in other words one ought to show for a foundation of physics why our universe rests on a metric of 3 degrees of freedom, not 4, 5, or 10… In this context, superluminal requires D > 3, and it's not the FLRW metric.
Second, Time is not part of the intimate metric of the natural world. Any physics built on time as a metric variable is doomed to be traumatic and must prepare for a tumultuous life. The only way to avoid this fate, in my book, is to show in unequivocal manner its provenance in the natural world, and unearth/discover the natural unit of time.
Third, the inexistence and ultimate ban of Time is an unintuitive notion to many, while the best minds in physics since Einstein have well realized that Time is a problem. The bigger problem has always been how to pull physics that proposes full description of universal dynamic systems without the use of the time variable. Yes, this program is possible, and I show here how:
Chapter Quanto-Geometry - Vol III - Chapter 3: Time-Free Description...
I will continue to read your writings and personally encourage your efforts.
Best
- Joseph
"I did read your previous post."
Joseph Jean-Claude
Sorry, I assumed that because you criticized Maxwell, you missed my criticism on Maxwell in that post.
But let's address this:
"First it has not been able to derive the most fundamental quantity in a cosmological view of physics which is c."
"First, I do not agree with anything superluminal. Why? Because the invariance and value of c are both a direct result of the 3-dimensional configuration of the physical observable universe."
The views you express here point directly to Einsteinian relativity, which is based on the Lorentz transform, which in turn is one of the consequences of Maxwell's bug:
https://etherphysics.net/CKT4.pdf
"It was the mistaken idea, that Maxwell’s equations and the standard wave equation should be invariant, which led, by a mathematical freak, to the Lorentz transform (which demands the non-ether concept and a universally constant wave-speed) and to special relativity."
Because we now have a model without Maxwell's bug, thanks to the direct application of the vector Laplacian to the velocity field [v], and we know that the wave equations that can be derived from such a fluid/gas-like model transform nicely under the good old Galilean transform, the bottom drops out from underneath all theories based upon the Lorentz transform, in Thornhill's words a "mathematical freak".
Also, since the quantum circulation constant k together with the vector Laplacian defines the dynamics of space-time itself, it is clear that the quantum circulation constant is more fundamental than light speed c, even though its value is equal to c^2.
Arend Lammertink - "Also, since the quantum circulation constant k together with the vector Laplacian defines the dynamics of space-time itself, it is clear that the quantum circulation constant is more fundamental than light speed c, even though its value is equal to c^2"
1.- How can we trust the “quantum circulation constant k”? Remember that the first constraint on constants is that they must result from experimental measurement as coefficient of proportionality between two real physical variables, for the most part. In fundamental Theory, we cannot introduce constants ipso facto, lest they are viewed and must be viewed as arbitrary quantities aimed at tuning for other desirable quantities or constants. There may be value in such quantities but they do not bolster fundamental theory.
2.- From mathematical logic, if c = k^2, then k is primordial and has priority or preponderance over c. But if k = c^2, then the radix is c, and c is primordial and more fundamental than k, because k is then a composition of c. So the math is, and must be, the screen or the prism for the Laplacian.
3.- I fully agree with the criticism of the Lorentz transform. It is a "tour de force" designed to justify the use of the time variable, following the realization of the polemic fact that one and the same event can never have two different time durations if time flow and duration are real in and of themselves (the thought experiment of the light ray's round trip in a moving train). Even if you stretch time for the external observer, you still have the problem that, to the guy in the train who observes the ray's travel as a round trip, time would have to remain real (or proper) and different from the external observer's measured time.
My take.
- Joseph
The number 137 has baffled scientists forever. I too have my own interpretation.
137:
Adding 1 and 7 gives 8.
Subtracting 1 from 7 gives 6.
Multiplying 1 and 7 gives 7.
8 + 6 + 7 + 3 = 24.
137:
2, 4 ... when we subtract adjacent numbers.
4, 10 ... when we add adjacent numbers.
37 ... when we multiply adjacent numbers 1 and 37.
2, 4, 4, 10
~ 2, 4, 4, 2, 5; excluding one 2, adding the rest gives ~15.
15 - 2 is 13.
Now, 37 - 13 is 24.
"In fundamental Theory, we cannot introduce constants ipso facto, lest they are viewed and must be viewed as arbitrary quantities aimed at tuning for other desirable quantities or constants. There may be value in such quantities but they do not bolster fundamental theory."
Joseph Jean-Claude
Yes, you have a good point here.
In essence, both k and c are arbitrary numbers that define a relationship between the units of measurement we have chosen for length and for time. And in fact, we don't even know if the speed of light can be considered constant even in the vacuum, which I consider to be filled with a medium called aether that appears to behave like a superfluid, such as helium at near absolute zero temperatures.
One of the consequences of such a view is that I must have an alternative explanation for the negative result of the famous Michelson-Morley experiment. This experiment was based upon the idea that the gravitational force propagates independently from the electromagnetic forces; it was assumed the aether was stationary, and thus that the movements of planetary bodies through the aether would result in disturbances of the aether, which would influence the speed of light and should be detectable:
https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment
"The experiment compared the speed of light in perpendicular directions in an attempt to detect the relative motion of matter through the stationary luminiferous aether ("aether wind")."
When that turned out not to be the case, it unfortunately was the aether theory that was abandoned, while the incorrect assumption that the gravitational force is a force that is independent from the electromagnetic forces is still the prevalent view today.
So, if there's no magical gravitational force that can propagate without a medium, we must be able to explain the movements of planetary bodies in such a way that we can also explain the negative result of the MM experiment.
And the only way I see to be able to do that is to assume that there are vortices in the aether and that at the surface of a planetary body there is no speed differential between the aether and the planetary body. In other words: we must assume that the aether is not stationary at all, but moves along with planetary bodies and vice versa.
And if the situation occurs that some body moves relative to the local aether, a resistance should result, such that the difference in speed relative to the local aether eventually dies out. Thus, one would not expect a planetary body to move relative to the aether: if it once did, long ago, that differential would have decayed because of this resistance.
Paul Stowe did some calculations a while ago with respect to the so-called Pioneer anomaly and showed that the observed deceleration is indeed explainable this way (eq. 44):
https://vixra.org/pdf/1310.0237v1.pdf
"Consider the Pioneer spacecraft moving at ~12.5 kps. Its deceleration in this model is therefore predicted to be 8.4E-10 m/sec2. Within experimental error this value matches the actual observed deceleration of both spacecraft."
Now, because the existence of (irrotational) vortices in the aether implies the existence of a pressure gradient, this in turn implies that the mass density of the aether is non-uniform, which in turn implies that the speed of light is variable and not constant.
Whether or not we would measure the speed of light to be different across the Universe, however, is a totally different question. Even though we can reject the relativity theory, because the reason for its existence, Maxwell's bug, no longer applies, clocks that are in orbit definitely run at a different rate than clocks on the ground. This is shown by the GPS system, according to Ron Hatch, who held some 30 patents on GPS and was one of the most outspoken critics of the relativity theory:
http://www.youtube.com/watch?v=VOQweA_J4S4
Arend Lammertink “In essence, both k and c are arbitrary numbers that define a relationship between the units of measurement we have chosen for length and for time. And in fact, we don't even know whether the speed of light can be considered constant even in the vacuum,”
No, k may be arbitrary but c is not arbitrary. Speed of light c is the most fundamental constant of motion that exists. There is a history of measurement of c with a whole lot of different methods. Because the methods are different in nature, that tells us that there is something to the quantity that almost places it at the rank of a mathematical ratio or a scalar. Second, the constancy of that quantity takes it completely out of the realm of physical arbitrariness or otherwise, and outright makes it a scalar in nature. CERN’s experiment since the 1960's: shoot a pion particle at almost the speed of light v and cause it to emit a photon particle while in travel and measure the travel speed of the photon. It is measured to be c, not c + v. If the aether as a fixed referential medium existed, it would have to be c + v. Thus, the mathematical logic of Reductio ad Absurdum invalidates the concept of an aether.
Further, you say that you envision the aether as made up of vortices. If that is the case then it is not a medium, even less an absolute medium, which it would have to be to have referential value. How do we know that all these vortices have the same spin, in value and orientation? At a minimum this aether must be a dynamic system with inner physical variables. And then we are still left with the problem of how to qualify those variables in the absence of a universal metric, if we should follow the construct of a flat metric that is behind the aether notion.
So the case is clearly made for Einstein’s Theories of Relativity, stressing the tensorial and curvilinear/curviplanar nature of the vacuum, and which I agree with you, are not perfect. But I hope that Ron Hatch knows that without relativistic computations, we could not coordinate the humongous amount of planes flying the skies on a global scale and correctly compute all the time their destination arrival time to meet the exact expectation of the awaiting public. Let alone Mercury’s orbital anomaly… etc.
Best.
This is an interesting line of thinking. I recently looked at the FSC. It is a jumbled mess of equations (light speed, other constants, Planck length).
My own line of inquiry is information physics, and in my universe of bits, or ct1 states, which are essentially bits, this electromagnetic measurement appears to work fairly well as the folding and unfolding of information at the level of the electron. It was not an exact match, but if you use 137 and not the most closely measured result, then it works very well. Again, dealing with information physics, it is just a reflection of the folding of information at one level, and there are constants at other levels with less folding, but it can be made to fit the modeling in one set of circumstances, at the level of folding and unfolding where we find ourselves at this moment in the universe. This makes any constant that reflects a net effect suspect. Most of this work is too recent to have a post-link ready, but if anyone here has any interest in looking at things this way, I can put something together, although it would largely be a matter for critique and a search for problems with the relatively simple logic involved.
Greg Friedlander Erik Verlinde published an interesting paper in that direction a while ago:
Article Emergent Gravity and the Dark Universe
I would say the article is on the right track. It may be more right than what information physics suggests. I would propose that it skips a few steps. If we assume the universe is the output of some sort of information processor operating in a dimensionless environment, the conclusions I reached are (1) the initial folding of information results in gravity (and unfolds as anti-gravity or dark energy, depending on how you want to look at it); (2) this automatically gives rise to two-dimensional space from one-dimensional information, at least under the observed mathematics of the universe; and (3) time arises as stop-frame animation some ways down the line (hence time is an effect of space, more precisely of the building of dimension), although there does appear to be something I would call quantum change, which is a lot like time in that it powers things, so to speak.
I use the term initial folding loosely, since the nature of iterated equations suggests they arise from some prior folding, and things like pi have multiple iterated equations from which they are built. It also raises the question of where this information processor is sitting: from what dimensionless perch does it spew out our universe? The math behind the universe appears relatively easy, but the ability to do the calculations, remember them and project the results suggests something esoteric. One aspect of taking time out of the mix is that this alleged information processor (computer, if you prefer) does not have to worry about how long things take, since it exists free of the constraints of time, and since there is no time as such, it does not have to worry about heating either. In jest, I would say perhaps the best articles on it could be found in religious texts; but the goal is to explain it in terms which allow it to be controlled, or at least understood.
Thanks for your views. In mine, the origin of the fine structure constant is from the ratio of (electron to light quark mass) to the 3/2 power. That is, a ratio of the fundamental matter particles. Reference: (9) Yet another interpretation, and (brief) derivation, of the fine structure constant | LinkedIn
Arend Lammertink
Regarding your reference to Paul Stowe, IMO:
He proposes a model which results in ``deriving'' the current standard model with no possibility of addressing the problems (asymmetric rotation curves, periodic redshift, etc.). He proposes that gravity results from the electric field - many others have suggested this because of the similarity of the Coulomb equation and the gravity equation. These don't work. I confess I don't follow many of his derivations.
However, I have suggested that the electric field is vortices caused by the movement of particles in a plenum (ether).
"No, k may be arbitrary but c is not arbitrary."
Joseph Jean-Claude
They are arbitrary in the sense that the unit of length we have chosen to work with is, in essence, an arbitrary choice. And there are different choices, like the yard, the foot, etc. The same goes for the second: we chose to relate it to the rotation of the earth, which we have divided into 24 hours, then 60 minutes and 60 seconds, but we could have made other choices.
"Speed of light c is the most fundamental constant of motion that exists. There is a history of measurement of c with a whole lot of different methods. Because the methods are different in nature, that tells us that there is something to the quantity that almost places it at the rank of a mathematical ratio or a scalar."
It does indeed seem that locally one will always measure the same light speed, and that there is an intricate relationship between the local motion of the medium and our measurement of time.
There are two ways to interpret this effect. One is that it is time itself that is relative; the other is that the ticking rate of the clock, the device we use to measure time, changes.
My view, which I believe was also Ron Hatch's view, is that it's not time itself that's relative, but rather that the ticking rate of the clock we use to measure time varies in such a way that we will locally always measure the speed of light to be constant.
"Second, the constancy of that quantity takes it completely out of the realm of physical arbitrariness or otherwise, and outright makes it a scalar in nature. CERN’s experiment since the 1960's: shoot a pion particle at almost the speed of light v and cause it to emit a photon particle while in travel and measure the travel speed of the photon. It is measured to be c, not c + v. If the aether as a fixed referential medium existed, it would have to be c + v. Thus, the mathematical logic of Reductio ad Absurdum invalidates the concept of an aether."
If we follow that logic, you would be implying that the speed of the sound waves coming from an airplane flying at a speed close to the speed of sound should also be almost twice the normal speed of sound.
"Further, you say that you envision the aether as made up of vortices."
No, not quite.
I envision the aether as a superfluid, like superfluid Helium, wherein quantized vortices exist.
Just like we have the air wherein tornadoes can exist, but that in no way implies that the air is always and everywhere full of tornadoes.
"How do we know that all these vortices have the same spin, in value and orientation?"
In rotating superfluids, it has been shown that quantized vortices form that all have the same circulation gamma, given by gamma = h/m, with h Planck's constant and m the mass of a fluid particle, often the Helium atom. See for example:
https://www.pnas.org/doi/10.1073/pnas.96.14.7760
This doesn't say anything about orientation, just that quantized vortices exist, each with a circulation equal to gamma. For the aether, I took that gamma equal to k, the quantum circulation constant. From there, one can calculate the mass of an elemental aether particle as:
m = h/k,
which computes to about 7.3724e-51 [kg].
An interesting detail is that when one computes the Compton wavelength for a particle with that mass, one obtains a number equal to the value of c, but in meters. And the corresponding Compton frequency computes to exactly 1 Hz.
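These numbers are straightforward to verify. A minimal Python check, assuming only the CODATA values of h and c and the model's premise that k = c^2:
>>> h = 6.62607015e-34     # Planck constant [J s]
>>> c = 299792458.0        # speed of light [m/s]
>>> k = c * c              # quantum circulation constant [m^2/s] (model premise)
>>> m = h / k              # elemental particle mass [kg]
>>> print(f"{m:.5e}")
7.37250e-51
>>> lam = h / (m * c)      # Compton wavelength [m]: numerically the value of c
>>> print(f"{lam:.1f}")
299792458.0
>>> print(f"{c / lam:.1f}")   # Compton frequency [Hz]
1.0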
And then I envision that elemental particle to exist in the shape of a thin vortex ring and apply eq 1.8 from this publication to compute the radius of such a vortex ring as well as the radius of the hollow core around which the superfluid medium rotates:
https://www.researchgate.net/publication/232025984_Dynamics_of_thin_vortex_rings
Along with these values, we can also compute a velocity for such a ring vortex. However, that does not imply that the velocity of such a ring vortex is always equal to that velocity. But if the velocity changes, the geometry of the ring vortex also changes. And it seems that the radius of the hollow core in particular is an important parameter determining the speed. See the above paper for more details and formulas.
So, one could think of it as that we can compute certain parameters of such a hollow core vortex ring in a particular state, but that does not imply it will always have to be in that particular state.
"At a minimum this aether must be a dynamic system with inner physical variables."
Yes, the dynamics of the aether itself can be described in 3D using the vector Laplacian and the quantum circulation constant k as follows:
[a] = - k nabla^2 [v], and
[j] = - k nabla^2 [a],
with [v] the velocity field, [a] the acceleration field, [j] the jerk field (the time derivative of the acceleration [a]), and k the quantum circulation constant, with a value equal to c^2 and a unit of measurement of [m^2/s]. And nabla^2 is the vector Laplacian.
So, these are the equations for the medium itself, the superfluid aether.
"And then we are still left with the problem of how to qualify those variables in the absence of a universal metric, if we should follow the construct of a flat metric that is behind the aether notion."
I'm not exactly sure what you mean by a flat metric, but k is more like a surface metric, since its unit is m^2 (per second). So it describes something different from the linear light speed c, but it is obviously very closely related to c, since its value is equal to c^2.
Together with the vector Laplacian, it describes the fundamental dynamics of space-time itself in the shape of the well known Helmholtz decomposition, which essentially says that linear motion is fundamentally different from angular motion (rotation), yet they are intricately related:
https://en.wikipedia.org/wiki/Helmholtz_decomposition
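As an aside, the identity underlying this decomposition, nabla^2 [F] = grad(div [F]) - curl(curl [F]), is easy to verify symbolically. A minimal sympy sketch; the test field chosen here is just an arbitrary illustrative assumption:

from sympy import diff, simplify
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# an arbitrary smooth test field
F = (x**2 * y) * R.i + (y * z**2) * R.j + (x * z) * R.k

# component-wise vector Laplacian of F
lap_F = sum((sum(diff(F.dot(e), v, 2) for v in (x, y, z)) * e
             for e in (R.i, R.j, R.k)), Vector.zero)

# grad(div F) - curl(curl F)
rhs = gradient(divergence(F)) - curl(curl(F))

print(all(simplify((lap_F - rhs).dot(e)) == 0 for e in (R.i, R.j, R.k)))  # prints True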
"So the case is clearly made for Einstein’s Theories of Relativity, stressing the tensorial and curvilinear/curviplanar nature of the vacuum, and which I agree with you, are not perfect."
Nope, very far from that actually.
Because we can now describe the fundamental dynamics of space-time with the above equations, whereby we have corrected Maxwell's bug, we know that the resulting wave equations will be invariant under the Galilean transform, and thus the whole reason for the Lorentz transform to exist is gone. And since Einstein's relativity theory is based upon the Lorentz transform, there goes the relativity theory as well.
And actually, it has been known for quite some time that currently accepted theories of gravity, which would be Einstein's curved space concept, cannot explain various astrophysical observations:
https://en.wikipedia.org/wiki/Dark_matter
"Various astrophysical observations – including gravitational effects which cannot be explained by currently accepted theories of gravity unless more matter is present than can be seen – imply dark matter's presence."
The bottom line of what's stated here, however, is:
"Various astrophysical observations – including gravitational effects [..] cannot be explained by currently accepted theories of gravity."
So, rather than concluding that some magic kind of matter must exist, it is my conclusion that the relativity theory is untenable.
"But I hope that Ron Hatch knows that without relativistic computations, we could not coordinate the humongous amount of planes flying the skies on a global scale and correctly compute all the time their destination arrival time to meet the exact expectation of the awaiting public. Let alone Mercury’s orbital anomaly… etc.
That's the interesting thing about Ron Hatch. That's the guy who was responsible for the algorithms that make the actual GPS system work!
In other words: in actual fact, the GPS system does NOT rely on relativistic computations, but rather puts serious question marks over the relativity theory, which is why Ron Hatch was one of the most outspoken critics of that theory.
"I confess I don't follow many of his derivations."
John Hodge Well, you're not the only one. :)
As stated before, one of the ideas behind his gravitational derivations originates with Feynman:
https://www.feynmanlectures.caltech.edu/II_10.html#Ch10-F8
"As illustrated in Fig. 10–8, a dielectric is always drawn from a region of weak field toward a region of stronger field. In fact, one can prove that for small objects the force is proportional to the gradient of the square of the electric field. Why does it depend on the square of the field? Because the induced polarization charges are proportional to the fields, and for given charges the forces are proportional to the field. However, as we have just indicated, there will be a net force only if the square of the field is changing from point to point. So the force is proportional to the gradient of the square of the field. The constant of proportionality involves, among other things, the dielectric constant of the object, and it also depends upon the size and shape of the object."
That's what he's trying to express in eq. 35:
https://vixra.org/pdf/1310.0237v1.pdf
It's anybody's guess where he got the "fundamental interaction coefficient of 3.146E-06 m2/kg".
The only number around this I could find elsewhere is the 6.673:
http://tuks.nl/wiki/index.php/Main/StoweCollectedPosts
"As for the gravitational constant G, it has its standard dimensions (m^3/Kg-sec^2) and is the the product of two other physical terms, the aether's momentum flux (6.72E+00 Kg/m-sec^2) and mass attenuation coefficient (3.1512E-06 m^2/kg), squared... -> 6.673E-11 m^3/Kg-sec^2"
And then the term "mass attenuation coefficient" can be found in:
http://www.tuks.nl/pdf/Reference_Material/Paul_Stowe/An%20Overview%20of%20the%20Concept%20of%20Attenuation%20[Pushing]%20Gravity%20(Paul%20Stowe)%20-%20Mountain%20Man_s%20News%20Archive.pdf
and:
http://www.tuks.nl/pdf/Reference_Material/Paul_Stowe/Mirror/le_sage.htm
So, it seems that he is somehow mixing Feynman's "proportional to the square of the electric field" with a pushing gravity model, which is certainly not out of the question, but in order to follow him one has to dive deep and dig up his older posts before things begin to make sense.
Arend Lammertink - I see, this Ron Hatch was one of the modern pontiffs of aether theory. I read on his page “one of the most decorated GPS scientists on the planet and he says that GPS doesn’t support relativity, but actually shows flaws in the theory.” No kidding! Maybe he could tell us how we manage to send so many probes successfully throughout the entire solar system for so long. Of course we do so, with no relativistic constructs deep within those computations. And he should have also sent a probe for us in the precessional area in the orbit of Mercury where the planet moves away from the star instead of following gravitational pull!
“My view, which I believe was also Ron Hatch's view, is that it's not time itself that's relative, but rather that the ticking rate of the clock we use to measure time varies in such a way that we will locally always measure the speed of light to be constant.”
Atomic clocks!
“If we follow that logic, you would be implying that the speed of the sound waves coming from an airplane flying at a speed close to the speed of sound should also be almost twice the normal speed of sound.”
No, not a good comparison. A sound wave is not a particle in travel. Nothing physical is in longitudinal travel in a sound wave, the way a photon particle is as ejected from a traveling pion. Further, sound waves need the air medium for the perturbation they travel along, and will never propagate through the vacuum. So the comparison is unsuitable all around.
“This doesn't say anything about orientation, just that quantized vortices exist, each with a circulation equal to gamma. For the aether, I took that gamma equal to k, the quantum circulation constant. From there, one can calculate the mass of an elemental aether particle as:
m = h/k”
So, as I understand, the ether is made up of special aether particles. If that is the case, then they should be precursors to h, the energy quantum that regiments real particles. Hence your equations should predict h and not use h to infer these particles’ properties such as their putative mass. That is what fundamental physics requires. In other words, the known value of h should drop out of your equations.
This is a primary requirement to give credence to the rest of the theory, i.e. the proposition that the ether particles are hollow core vortices, etc.
“So, rather than concluding that some magic kind of matter must exist, it is my conclusion that the relativity theory is untenable.”
We are in agreement there in that dark matter is a problem, a conjecture too often taken for truth. Dark matter is a construct put out there by modern-day relativists and not original theory. In fact, original relativity struggled with the cosmological constant, which is still, by my appreciation, an important unsolved area of physics.
I will conclude this by saying that the sound theoretical derivation, from first principles, of the host of known fundamental physical constants is the only way, in my view, to advance fundamental physics on a verifiable and experimental track, thus authoritative and legitimate. In the history of physics this pursuit has not been stressed and made clear enough, and it is often lost in the inner workings of the many theoretical constructions. Many in physics seek to come out with new theories of the universe or “Everything” without understanding this paramount mandate. How many in physics understand the pursuit of QFT through quantum electrodynamics to be the theoretical derivation of the properties of the electron particle, in particular its gyromagnetic ratio? Not many. Thank you for reflecting this somewhat in the wording of your question for the thread.
Best.
- Joseph
"I see, this Ron Hatch was one of the modern pontiffs of aether theory. I read on his page “one of the most decorated GPS scientists on the planet and he says that GPS doesn’t support relativity, but actually shows flaws in the theory.” No kidding!"
Joseph Jean-Claude
Nope, he's not kidding. There are flaws in the theory.
One of them is that it doesn't allow anything, including information, to propagate faster than light. In my paper, you can find like 20 references to experimental observations of superluminal anomalies:
Preprint Revision and integration of Maxwell’s and Navier-Stokes’ Equ...
Of those references, the work of Steffen Kühn is particularly interesting, because he shows that a general solution of Telegrapher's equations used in electrical engineering allows the effect he has been able to measure:
Article General Analytic Solution of the Telegrapher’s Equations and...
"Based on classical circuit theory, this article develops a general analytic solution of the telegrapher’s equations, in which the length of the cable is explicitly contained as a freely adjustable parameter. For this reason, the solution is also applicable to electrically short cables. Such a model has become indispensable because a few months ago, it was experimentally shown that voltage fluctuations in ordinary but electrically short copper lines move at signal velocities that are significantly higher than the speed of light in a vacuum. This finding contradicts the statements of the special theory of relativity but not, as is shown here, the fundamental principles of electrical engineering."
Preprint Electronic data transmission at three times the speed of lig...
Another flaw, if I remember correctly what I understood from Hatch, concerns the equivalence principle:
https://en.wikipedia.org/wiki/Equivalence_principle
"we ... assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system. — Einstein, 1907"
I don't remember the details; if you're interested, you can watch the video shared above or take a look at the papers I could find by him:
http://www.tuks.nl/pdf/Reference_Material/Ronald_Hatch/
"Maybe he could tell us how we manage to send so many probes successfully throughout the entire solar system for so long. Of course we do so, with no relativistic constructs deep within those computations."
https://www.guinnessworldrecords.com/world-records/66135-fastest-spacecraft-speed
"The fastest speed by a spacecraft is 163 km/s (586,800 km/h; 364,660 mph), which was achieved by the Parker Solar Probe at 21:25:24 UTC on 20 November 2021."
>>> from math import sqrt
>>> c = 299792458.0    # speed of light [m/s]
>>> v = 163000.0       # Parker Solar Probe speed [m/s]
>>> gamma = 1/sqrt( 1-(v*v/(c*c)) )
>>> gamma
1.0000001478100293
>>> gamma-1
1.4781002932728882e-07
It seems to me relativistic effects would not be too much of a concern, but I may be mistaken.
"And he should have also sent a probe for us in the precessional area in the orbit of Mercury where the planet moves away from the star instead of following gravitational pull!"
Let's just quote Hatch himself:
http://www.tuks.nl/pdf/Reference_Material/Ronald_Hatch/Hatch-Clock_Behavior_and_theSearch_for_an_Underlying_Mechanism_for_Relativistic_Phenomena_2002.pdf
"In an interesting study, Mansouri and Sexl [1] show that in most respects a Lorentz absolute ether theory with length contraction and clock slowing is equivalent to SRT. After reaching this conclusion, they conclude that SRT is preferable because it preserves the equivalence of all inertial frames. However, there are at least two reasons for seriously questioning this choice. First, the choice of absolute equivalence of all inertial frames requires the non-simultaneity of time while ether theories treat time as it is intuitively understood (i.e. clock rates change but a universal now still exists). But the major reason for choosing an ether theory over SRT is the choice of science over magic. Fundamentally, SRT is a magic theory. The speed of light is magically constant in all inertial frames—no mechanism is given. Having chosen this magic proposition, SRT then derives length contraction and clock (time) slowing as consequences. By contrast, the Modified Lorentz Ether Theory (MLET) models material particles as standing waves. Thus, it automatically predicts a length contraction with motion through the absolute frame due to the lower two-way speed of light relative to the moving particle. Clock slowing also follows because of the effectively lower two-way speed of light relative to the particle. With length contraction and clock slowing, all that is needed to get an apparent equivalence of all inertial frames is to bias the clocks such that the one-way speed of light appears to be isotropic in the moving frames. But most means of synchronizing clocks automatically supply the appropriate bias.
Thus, SRT has it backwards. It assumes the apparent equivalence of inertial frames is real and uses that result, together with the magic of a universal speed of light, to derive length contraction and clock slowing. On the other hand, the ether theories use the length contraction and clock slowing to show that there is an apparent equivalence of all inertial frames and an apparent common universal speed of light."
"“My view, which I believe was also Ron Hatch's view, is that it's not time itself that's relative, but rather that the ticking rate of the clock we use to measure time varies in such a way that we will locally always measure the speed of light to be constant.”
Atomic clocks!"
The clocks used in the GPS system are atomic clocks:
https://www.nasa.gov/feature/jpl/what-is-an-atomic-clock
"Atomic clocks are used onboard GPS satellites that orbit the Earth, but even they must be sent updates two times per day to correct the clocks' natural drift. Those updates come from more stable atomic clocks on the ground that are large (often the size of a refrigerator) and not designed to survive the physical demands of going to space."
"“If we follow that logic, you would be implying that the speed of the sound waves coming from an airplane flying at a speed close to the speed of sound should also be almost twice the normal speed of sound.”
No, not a good comparison. A sound wave is not a particle in travel. Nothing physical is in longitudinal travel in a sound wave, the way a photon particle is as ejected from a traveling pion. Further, sound waves need the air medium for the perturbation they travel along, and will never propagate through the vacuum. So the comparison is unsuitable all around."
This is one of the fundamental points where we disagree. From my perspective, the vacuum is not a void, but is filled with a medium called the aether. From that perspective, it is totally logical that EM waves propagate at c relative to the medium.
"So, as I understand, the ether is made up of special aether particles."
We don't know what the aether is made up of, at least not from my model, because I use continuum mechanics equations that have a lower limit with respect to their applicability. For fluids and gases, this limit can be estimated by the Knudsen number:
https://en.wikipedia.org/wiki/Knudsen_number
However, that assumes a medium that is made up of some kind of particles, and we don't know whether or not that is true. All we know is that it appears to have a certain mass density, expressed in the model by rho in [kg/m^3], and that we can describe its dynamics with the equations already shared.
"If that is the case, then they should be precursors to h, the energy quantum that regiments real particles. Hence your equations should predict h and not use h to infer these particles’ properties such as their putative mass. That is what fundamental physics requires. In other words, the known value of h should drop out of your equations."
In order to derive h, we would have to revisit black-body radiation. Thornhill offers some inspiration on that subject:
https://etherphysics.net/CKT1.pdf
While this is an interesting subject, it's still on my "maybe some day" todo list.
"This is a primary requirement to give credence to the rest of the theory, i.e. the proposition that the ether particles are hollow core vortices, etc."
The idea is not that ether particles are hollow core vortices; the idea is that real particles consist of a number of vortices, which would come down to revisiting the vortex theory of the atom:
https://en.wikipedia.org/wiki/Vortex_theory_of_the_atom
From this perspective, the elemental particle I described in the topic posts is considered the most elemental real particle, consisting of a single ring vortex, in contrast to atoms, which are assumed to consist of multiple, possibly knotted, vortices.
Even though this theory has been abandoned, our new theory offers an essential difference with respect to previous aether theories, because we can now define higher-order equations and include the time derivative of force density, yank density:
[F] = - rho k nabla^2 [v] and,
[Y] = - rho k nabla^2 [a].
It is this higher-order equation that appears to be important. Now, since nabla^2 is defined by:
nabla^2 [F] = grad div [F] - curl curl [F],
we can define a scalar potential for that higher-order field by:
T = - rho k div [a],
which results in a unit of measurement of [W/m^3], or power density, which I believe represents temperature. There may be a constant involved to map this measure to the Kelvin, but other than that, my working hypothesis is that this is what temperature is.
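As a quick bookkeeping check on those units (a minimal sketch; dimensions are written as (kg, m, s) exponent tuples, so multiplying quantities means adding exponents):
>>> rho   = (1, -3,  0)    # mass density: kg m^-3
>>> k     = (0,  2, -1)    # quantum circulation constant: m^2 s^-1
>>> div_a = (0,  0, -2)    # div of an acceleration field: (m s^-2)/m = s^-2
>>> tuple(sum(t) for t in zip(rho, k, div_a))
(1, -1, -3)
The result, kg m^-1 s^-3, is indeed W/m^3, since W = kg m^2 s^-3. The same bookkeeping gives kg m^-2 s^-2, i.e. N/m^3 (force density), for [F] = - rho k nabla^2 [v].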
The reason I bring this up is this paper by Donnelly:
https://web.archive.org/web/20170809003854/https://sites.fas.harvard.edu/~phys191r/References/e1/donnelly2009.pdf
In this paper, he describes “second sound”, fluctuations of temperature, which according to him “has turned out to be an incredibly valuable tool in the study of quantum turbulence” and provides a condensed summary:
“After one of his discussions with London and inspired by the recently discovered effects, Tisza had the idea that the Bose-condensed fraction of helium II formed a superfluid that could pass through narrow tubes and thin films without dissipation. The uncondensed atoms, in contrast, constituted a normal fluid that was responsible for phenomena such as the damping of pendulums immersed in the fluid. That revolutionary idea demanded a “two-fluid” set of equations of motion and, among other things, predicted not only the existence of ordinary sound—that is, fluctuations in the density of the fluid—but also fluctuations in entropy or temperature, which were given the designation “second sound” by Russian physicist Lev Landau. By 1938 Tisza’s and London’s papers had at least qualitatively explained all the experimental observations available at the time: the viscosity paradox, frictionless film flow, and the thermo-mechanical effect.”
So, if the aether indeed behaves like such a superfluid, then it would be reasonable to assume that such "second sound" waves would also be possible within the aether, illustrating that the dynamics of the medium are more complex than can be described with first-order equations, such as Maxwell's and Navier-Stokes'.
To sum this up: there is reason to believe that when we take these second-order equations and fields (yank density) into account, we can come to new equations that could possibly revive that old vortex theory and lead to a satisfactory explanation of what particles and atoms are and how they behave.
Of course, this is speculation, but no one knows what we will discover when we start working with these second order equations.
"Will conclude this by saying that the sound theoretical derivation from first principles of the host of known fundamental physical constants is only way, in my view, to advance fundamental physics on a verifiable and experimental track, thus authoritative and legitimate. In the history of physics this pursuit has not been stressed and made clear enough, and is often lost thru the inner workings of the many Theoretical constructions. Many in physics seek to come out with new Theories of the universe or “Everything” without understanding this paramount mandate."
To me, the history of modern physics starts with Maxwell, since all of modern physics rests upon his equations, one way or the other. As stated, there are 20 references in my paper just around "anomalous" superluminal phenomena, which, as far as our theory is concerned, in the end serve only one purpose: to show that something is wrong with Maxwell's equations. In fact, that should have been clear in the blink of an eye because of the following question, which I asked before in this thread:
The electric field is defined as the gradient of the scalar potential Phi.
Now vector calculus says that the curl of the gradient of any continuously twice-differentiable scalar field is always the zero vector.
Yet Maxwell writes:
curl(E) = -dB/dt,
which is obviously something other than zero for time varying fields.
Since they can't be both correct, the question becomes:
Which one of the two is incorrect?
Maxwell or vector calculus?
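For what it's worth, the vector calculus side of this question is easy to verify symbolically, e.g. with sympy; the scalar field below is just an arbitrary illustrative choice:
>>> from sympy.vector import CoordSys3D, gradient, curl
>>> R = CoordSys3D('R')
>>> Phi = R.x**2 * R.y * R.z + R.y**3   # arbitrary twice-differentiable scalar field
>>> curl(gradient(Phi))                 # always the zero vector
0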
Best regards and thanks for the debate,
Arend.
“In an interesting study, Mansouri and Sexl [1] show that in most respects a Lorentz absolute ether theory with length contraction and clock slowing is equivalent to SRT. Fundamentally, SRT is a magic theory. The speed of light is magically constant in all inertial frames—no mechanism is given. Having chosen this magic proposition, SRT then derives length contraction and clock (time) slowing as consequences.”
I am sorry to say that these two do not understand 1) the value of experimental measurements, 2) the history of the academy of physics, and 3) the history of measurements of the speed of light, which started in 1678 with Ole Roemer (optical), through Edward Bennett Rosa in 1907 (electrical), to Peter Woods et al. in 1973 (laser). These laser measurements were repeated a few more times in 1978. Since then we do not measure the speed of light for standardization purposes anymore. Those who have better technology than laser technology are pretty much left to themselves to do their own measurements in order to satisfy their theoretical needs.
The CERN experiment that I mentioned, which sealed for us the constancy of the speed of light, requires of any dissident a complete rejection of particle physics and of the value of experimentation in physics to still find space for disagreement.
“In my paper, you can find like 20 references to experimental observations of superluminal anomalies: Of those references, the work of Steffen Kühn is particularly interesting, because he shows that a general solution of Telegrapher's equations used in electrical engineering allows the effect he has been able to measure”
The most credible effect that surrogates a superluminal effect is the infamous Sagnac effect (I don’t know if it is part of your repertory). It has long been found that in the ring, light does not travel faster than c, and that the effect is a mirage simply due to inadequate interpretation. Now, that electrical signals can travel faster than light is preposterous, for the simple reason that the finest electrical signals are of electromagnetic nature, a field whose fabric is predominantly made up of photonic radiation. And I am both an Electrical Engineer and a Laser engineer. With decades of design and patents too, I know a thing or two about both theories and technologies. This is the simple case of theorists creating their own observations in order to satisfy theory, in this case their aether theory.
The fact remains that the towering equation of your construction m = h/k does not make for fundamental physics because as I mentioned before your construction, for all its ambitions, should derive the value of h before it is allowed to peruse it. Second, I disagree with your view of Maxwell theory to the effect that “To me, the history of modern physics starts with Maxwell, since all of modern physics rests upon his equations, one way or the other.” No, Maxwell Theory, as a culmination of the classical view of physics, would never be able to come up with Quantum Physics which if anything is a triumph over Maxwell’s electromagnetism. It is precisely with the demotion of continuous electromagnetic emission by the electron orbiting the nuclear proton in an H atom that the quantum shell construct emerged, to explain why the electron does not lose energy in the process, keeping the atom a stable body. And that is why, in my view, a construction that has the ambition to unite Maxwell with quantum physics is very troubling.
Lastly, allow me to say that there is also value in being concise in your argumentation, perhaps more so than in being expansive, aka verbose. Although I understand the urge to bring all the waters to your mill!
Cordially.
"I am sorry to say that these two do not understand 1) the value of experimental measurements, 2) the history of the academy of physics, 3) the history of measurements of the speed of light which started since 1678 with Ole Roemer (optical) to Edward Benett Rosa in 1907 (electrical) to Peter Woods et al in 1973 (laser). These laser measurements have been repeated a few times again in 1978. Since then we do not measure the speed of light for standardization purposes anymore. Those who have better technology than laser technology are pretty much left to themselves to do their own measurements in order to satisfy their theoretical needs."
There's no point arguing about measurements of the speed of light, since there is no disagreement about the fact that one will always measure the same speed of light everywhere in the Universe in a local experiment, because length contraction and clock slowing cancel one another out, as in the quote from Hatch:
"On the other hand, the ether theories use the length contraction and clock slowing to show that there is an apparent equivalence of all inertial frames and an apparent common universal speed of light."
"The most credible effect that surrogates superluminal effect is the infamous Sagnac effect (don’t know if it is part of your repertory). It has long been found that in the ring light does not travel faster than c and that the effect is a mirage simply due to inadequate interpretation."
What happened to "the value of experimental measurements" you talked about just above?
This one was published in Nature:
http://tuks.nl/pdf/Reference_Material/Fast_Light/Wang%20et%20al%20-%20Gain-assisted%20superluminal%20light%20propagation.pdf
"Here we use gain-assisted linear anomalous dispersion to demonstrate superluminal light propagation in atomic caesium gas. The group velocity of a laser pulse in this region exceeds c and can even become negative, while the shape of the pulse is preserved."
"Now, that electrical signals can travel faster than light is preposterous, for the simple reason that the finest electrical signals are of electromagnetic nature, a field whose fabric is predominantly made up of photonic radiation."
I agree that electromagnetic signals cannot propagate faster than light.
However, besides the normal electromagnetic phenomena we are used to, there is also Tesla's superluminal dielectric wave, which does not have a magnetic component and is therefore not electromagnetic, but purely dielectric.
And that is precisely the phenomenon not predicted by Maxwell, because of the entanglement of Faraday's law with the fundamental medium model, in violation of elementary vector math, as pointed out before:
The electric field is defined as the gradient of the scalar potential Phi.
Now vector calculus says that the curl of the gradient of any continuously twice-differentiable scalar field is always the zero vector.
Yet Maxwell writes:
curl(E) = -dB/dt,
which is obviously something other than zero for time varying fields and is therefore incorrect.
It is this entanglement of Faraday's law with the fundamental medium model that broke the fundamental symmetry between the angular and linear components in the vector Laplacian, and that is why Maxwell can only describe electromagnetic waves, but not Tesla's longitudinal superluminal wave.
"And I am both an Electrical Engineer and a Laser engineer. With decades of design and patents too, I know a thing or two about both theories and technologies. This is the simple case of theorists creating their own observations in order to satisfy theory, in this case their aether theory."
In that case, it would be nice if you would be kind enough to explain to me what Steffen Kühn did wrong in this particular measurement and what he did to "create his own observations":
Preprint Electronic data transmission at three times the speed of lig...
"The fact remains that the towering equation of your construction m = h/k does not make for fundamental physics because as I mentioned before your construction, for all its ambitions, should derive the value of h before it is allowed to peruse it."
That's not the towering equation of my theory; the construction that is the topic of this thread is merely an application thereof.
The towering equation of my theory would be the fundamental relation between space and time, which requires only one constant, k, the quantum circulation constant, and which allows us to define the time derivative of any given vector field in three-dimensional space-time as follows:
d[F]/dt = - k nabla^2 [F].
And the dimensions fit:
[1/s] = [m^2/s] · [1/m^2].
The only part I'm not 100% sure of is the minus sign.
So, this is the fundamental equation that will one day be recognized as one of the biggest scientific breakthroughs of the 21st century. It is just a matter of time, because there is simply no argument to be made against such a simple and straightforward application of the vector Laplacian. None!
"No, Maxwell Theory, as a culmination of the classical view of physics, would never be able to come up with Quantum Physics which if anything is a triumph over Maxwell’s electromagnetism."
Nope, the culmination of the classical view of physics lies in the correct application of the vector Laplacian at a higher abstraction level than both Maxwell and Navier-Stokes. No point in repeating the equations again.
"It is precisely the demotion of continuous electromagnetic emission by the electron orbiting the nuclear proton in an H atom, that the quantum shell construct emerged to explain why the electron does not lose energy in the making keeping the atom a stable body."
One only has to consider the 21 cm Hydrogen line to understand that the idea of a single electron supposedly deciding at random, for no reason, to change its orbit around a proton just a tiny bit, thereby magically emitting a photon with a wavelength of no less than 21 cm, will not be able to withstand the test of time.
Oops, forgot to tag Joseph Jean-Claude in the above reply.
And let me add that I really appreciate the debate.
Debating like this always helps me to formulate my arguments better and better.
Kind regards,
Arend.
Arend Lammertink - Thank you for remaining civil even in disagreement. I would also hope that the civility extends to honesty, because you are clearly filibustering me with a deluge of text in the hope of prevailing in the exchange. That does not speak well of the theory, because it should be able to stand on its own. And if you are filibustering, then there is no debate, which you say you appreciate. A debate implies being on point in answering a question or addressing a specific issue, without extending or digressing into a thousand other things. For me this is not about prevailing, but about assessing the value of a theory and the claims that you put before us, against my better judgement of not paying attention to ether theories.
1.- You presented your theory under the towering equation m = h/k, which describes the seminal fabric of your ether medium made up of ether particles, m being their mass. By the way, this is just like the beginning of Quantum Field Theory with the description of momentum space, p = ħk (the meanings being different from yours), by no coincidence. After the objections I raised, you now tell us that that equation is not primal in the theory, but another one, d[F]/dt = - k nabla^2 [F]. How can a derivative ever be primordial in anything? The antiderivative would have to be. You are yourself making the case here that your theory is NOT fundamental, even less so the claim of derivation of any fundamental physical constant. No further comments.
2.- In that case, it would be nice if you would be kind enough to explain to me what Steffen Kühn did wrong in this particular measurement and what he did to "create his own observations." I read the abstract of the article that you pointed me to. Sorry to say that this guy does not know much, if anything, about what he is talking about. Is he seriously telling the rest of the engineering world that an electrical signal going through an op amp in an IC is going to reach any speed close to c, let alone be superluminal? With all due respect to him, ludicrous! It deserves no attention!
3.- FYI, in laser fiber-optic technology, what we are fighting to do is avoid at all cost the conversion of signals to electrical signals within the optical path, because this is where you lose most of the speed, or bandwidth capacity. That is what motivated the emergence of DWDM (Dense Wavelength Division Multiplexing) in optical transmission, in an attempt to keep the signal path all-optical all across.
4.- Also FYI, group velocity in optical engineering is not about the intentional transmission of a particular signal or payload, but about correlation and merger in the spectrum of a signal packet, which creates the impression of a moving artifact. Nothing physical is moving longitudinally at the speed of light or above in there. Also know that when it comes to optical signals in superposition, the group velocity for all photons is equal to the velocity of one photon.
I am going to conclude this exchange by granting you your wish that “your fundamental equation will one day be recognized as one of the biggest scientific breakthroughs of the 21st century.”
Best of luck!
My opinion is that the constancy of the speed of light is ensured by the constancy of the transmission time of the excited state of the discrete space (in the form of a photon) to the neighboring discrete element. The velocity of the electron that emitted this photon, the velocity of the atom around which the electron rotates, and the velocity of the body containing this atom cannot affect the constant in any way. Also, there is no decrease in the length of the body, or time dilation, in the gravitational field. Only the projection of this length onto our space decreases, and the path of movement of the body in the gravitational funnel lengthens.
Here is the full text of my work - "Mass as the geometry of space".
The fine structure constant is the square of the ratio of the electron charge to the Planck charge, multiplied by the geometrical factor 1/(4π).
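This is easy to check numerically under the convention, used later in this thread, that the Planck charge is q_pl = √(ℏ/Z0), with Z0 the impedance of free space. A minimal sketch with CODATA values:
>>> from math import pi, sqrt
>>> hbar = 1.054571817e-34    # reduced Planck constant [J s]
>>> Z0 = 376.730313668        # impedance of free space [ohm]
>>> e = 1.602176634e-19       # elementary charge [C]
>>> q_pl = sqrt(hbar / Z0)    # Planck charge in this convention [C]
>>> alpha = (e / q_pl)**2 / (4 * pi)
>>> print(f"{alpha:.8f}, 1/alpha = {1/alpha:.3f}")
0.00729735, 1/alpha = 137.036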
It seems that the rather strange, but nonetheless remarkably vivid, discussion in this thread has stopped for a while; so it is worthwhile to remind here that what the elementary electric charge and the fine structure constant really are is rigorously scientifically explained in the Shevchenko-Tokarevsky 2007 initial model of Gravity and Electric Forces in
https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces,
- for more, see the link; and here only a comment on a really rational post:
“…The fine structure constant is the square of the ratio of the electron charge to the Planck charge, multiplied by the geometrical factor 1/(4π)….”
- yeah, that is so. However, the “Planck charge” really isn’t a fundamental physical unit, in contrast to the really fundamental units: the Planck elementary action/elementary angular momentum, ħ, the Planck length, lP, and the Planck time, tP,
- which are the main, utmost fundamental and utmost universal, parameters of the Matter’s ultimate base - the Matter’s “aether” – the [5]4D dense lattice of [5]4D binary reversible fundamental logical elements [FLE], which is placed in the corresponding utmost universal Matter’s fundamentally absolute, fundamentally flat, and fundamentally “Cartesian”, [5]4D spacetime with metrics (cτ,X,Y,Z,ct);
- while the particles’ parameter “elementary electric charge” isn’t universal; it is a quite specific actualization of the specific fundamental Nature Electric force, and exists in Matter only as “e” [though possibly also as 1/3 e and 2/3 e in quarks], while the “Planck charge” really doesn’t exist.
Cheers
Sergey Shevchenko If you declare the elementary angular momentum, ћ, as fundamental, then the same applies to the Planck-charge because of its definition
q_pl = √(ℏ/Z0)
Gerd Pommerenke
“…Sergey Shevchenko If you declare the elementary angular momentum, ћ, as fundamental, then the same applies to the Planck-charge because of its definition q_pl = √(ℏ/Z0)…”
- firstly, the thread software on RG doesn’t support LaTeX; however, it allows one to write simple equations, see the tools below your writing post. Besides, the “Planck charge”, qp, is qp = e/√α, where e = (4πε0 α ħ lP/tP)^(1/2), and so really isn’t just a “Planck unit”, despite being formed from the really fundamental Planck units – the elementary physical action/angular momentum, ħ, the Planck length, lP, and the Planck time, tP, which are indeed ultimately fundamental and ultimately universal units, since they are parameters of the ultimate base of Matter – the binary reversible fundamental logical elements [FLE]:
- lP is the FLE “size”, tP is the FLE binary flip time interval, and ħ is the angular momentum that a flipping FLE has.
Elementary electric charge, e, is, of course, a fundamental constant in Matter; however, it relates only to one specific fundamental Nature Electric force, the strength of which is determined by the really more fundamental fine structure constant α, which has no direct relation to FLE.
Correspondingly, the “Planck charge”, which is derived by dividing e by √α, really has practically zero physical sense.
For what the physical senses of the fine structure constant and the elementary electric charge really are, see the SS post above, including the linked paper, of course.
Cheers
Sergey Shevchenko „…Electric force, the strength of which is determined by the really more fundamental fine structure constant α, which has no direct relation to FLE.“ … unfortunately way wrong this way round …
Electron charge e and speed of light c are most fundamental - and surely not Fine Structure constant alpha. This can be easily seen when following the derivation of the native iSpace-IQ unit system (see my RG papers).
Planck constant h is - while being deep concept-wise - not really fundamental either, as h directly depends on electron charge e and a further integer factor of 6 (2*Pi3*1).
It's Dirac who already stated: „… one of Planck constant, electron charge or speed of light can not be fundamental.“ - and he was right. Planck constant h is not fundamental, and even c has an identical metric value to e in the iSpace-IQ unit system. Simple to see, yet tricky to grasp (get).
Sergey Shevchenko and Christian G. Wolf
Preprint The Electron and Weak Points of the Metric System
or Preprint The Metric Universe
Gerd Pommerenke Big misunderstanding of what I tried to express, Gerd - I would *never* impose any thinking of models, just because I happen to have an extremely well-working, actually predicting one, fully in sync with past and current experimental results. Please take the time to have a look, sleep a night over it and rethink. Then you will see for yourself. If not, also fully ok. Different opinions and models and theories are good (ok, more is not always better, but here imho it is).
For the formatting in LaTeX - or rather the lack of it on RG - my point is that Sergey is right. Once you use proper technology (like an Apple iPhone with the RG app, or the RG app on a Mac, or even a PC, likely - I never tried), this is imho nearly as simple as in Mathematica - and there it is implemented, out of the question, as a 100/100.
Christian G. Wolf, the criticism concerning the thinking ban was not directed at you. As to the LaTeX problem, I'm not happy with the solution. Where is the "proper technology" to write simple formulas inline in text, e.g. A·ψ̈ + AB·ψ̇ + (A·C−D)·ψ = 0? But if there is a simple square root, we are screwed. We are discouraged by Quora, which has supported inline LaTeX formulas for a long time. See
https://alltaglichkeitdermathematik.quora.com/Wie-sind-die-Klammerregeln-Assoziativit%C3%A4t-beim-Summenzeichen-Kann-man-statt-math-n-2-sum_-i-1-n-2i-n-math-a
But my main problem is that I lost the link to the website with LaTeX input on the left and the "translation" into UNICODE text on the right. Any idea?
Gerd Pommerenke: Gerd, the simple technology you seek, to format any arbitrarily complex equation outside TeX in JavaScript websites, is MathJax - I am an IT expert, ok?
https://www.mathjax.org/
This is also the basis in Mathematica and a (felt) million other libraries, even MS. And the best part - MathJax is able to *live* import/export TeX and more: you write LaTeX, and we get properly readable, perfectly formatted math equations.
- using some sophisticated software in the RG discussions is practically unnecessary; the posts aren’t scientific papers; and to write the corresponding, practically always simple, equations it is enough to use the RG tools at the bottom of the post-writing window. If somebody wants to point out something more complex, it is enough to attach a corresponding LaTeX PDF file using the bottom tool “add files”.
“…If Planck constant ℏ would not be fundamental, then you are right. But I think, the Natural constants, if constant or not, are not fussed about being declared as "fundamental" or not by whatever person or board. They are what they are and no more. And it's appropriate to compare e with qpl ….”
- again – see the SS posts on page 8 – there exist only 3 really utmost fundamental Planck units – the elementary physical action/angular momentum, ћ, Planck length, lP, and Planck time, tP, which are indeed ultimately fundamental and ultimately universal units, since are parameters of the ultimate base of Matter – the binary reversible fundamental logical elements [FLE];
- which [FLE] compose the ultimately fundamental [4+4=1]D FLE-lattice that is placed in the corresponding Matter’s fundamentally absolute, fundamentally flat, and fundamentally “Cartesian”, [4+4+1]4D spacetime with metrics (cτ,X,Y,Z,g,e,s,ct) [though it is possible that there exist additional dimension “w”, that relates to Weak Force],
- and everything that exists and happens in Matter is some disturbance in the lattice.
Besides, there also exist fundamental but less universal constants, specific only to the action of the corresponding Gravity, Weak, Electric, and Strong Forces, which determine the Forces' strengths; the strength of the Electric Force is determined by the really fundamental fine structure constant, α;
- and, universally for all Forces, by the ultimately fundamental Planck units above. In particular, the elementary electric charge, e, really is a fundamental constant only because of the action of the α constant and the ultimately fundamental Planck units/constants above,
- e = (4πε₀αħc)^(1/2), where c is the speed of light, a constant that is fundamental in mainstream physics, and which is really fundamental because c = lP/tP.
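A quick numerical cross-check of the last two relations, e = (4πε₀αħc)^(1/2) and c = lP/tP (a minimal Python sketch; the CODATA values are typed in by hand, so treat the digits as illustrative):

import math

alpha = 7.2973525693e-3   # fine structure constant
hbar  = 1.054571817e-34   # reduced Planck constant, J*s
eps0  = 8.8541878128e-12  # vacuum permittivity, F/m
l_P   = 1.616255e-35      # Planck length, m
t_P   = 5.391247e-44      # Planck time, s

c = l_P / t_P
print(c)                                              # ~2.998e8 m/s
print(math.sqrt(4 * math.pi * eps0 * alpha * hbar * c))  # ~1.602e-19 C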
Christian G. Wolf
“….Sergey Shevchenko [SS quote]„…Electric force, the strength of which is determined by the really more fundamental fine structure constant α, which has no direct relation to FLE.“ [end quote]… unfortunately way wrong this way round …
Electron charge e and speed of light c are most fundamental - and surely not Fine Structure constant alpha….”
- about what electron charge e and the speed of light c are – see above; so, say, that
“It's Dirac who already stated: „… one of Planck constant, electron charge or speed of light cannot be fundamental“ - and he was right. ……”
- is true; Dirac was, of course, right in this trivial case: in the equation α = e²/4πε₀ħc, since α is a dimensionless constant, the three other constants are not all independent of each other.
Again, for more about what the Electric Force, electric charge, and fine structure constant are, and why α = e²/4πε₀ħc, etc., see the Shevchenko-Tokarevsky's 2007 initial model of Gravity and Electric Forces in
https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces,
Cheers
Sergey Shevchenko Sergey - not sure, of course - but from what you state, you have not even had a look at the iSpace model, right? Let alone the iSpace-IQ unit system - proving what I said:
Preprint iSpace - Quantization of Time in iSpace-IQ Unit-System by 1/...
Conference Paper iSpace - Exact Symbolic Equations for Important Physical Con...
Article New novel physical constants metric and fine structure const...
These three papers, the last one peer-reviewed, are a must-read, the first one indeed line by line (only about 30 min.), to fully grasp and understand why the new iSpace-IQ units are unavoidably a massive game changer.
Once you (anyone) have really read them, we can discuss the consequences: what follows from them necessarily, possible oversights, and in turn the relations to other models.
RG project as long as it lasts:
https://www.researchgate.net/project/iSpace-Exact-quantum-geometric-value-of-primary-constants-of-nature
Dear Christian,
“…Sergey - not sure of course - but from what you state you not even had a look on iSpace model, right? Let alone iSpace-IQ unit system - proving what i said:
Preprint iSpace - Quantization of Time in iSpace-IQ Unit-System by 1/6961 iSpace-Second.….”, etc.
- sorry, but as a rule I comment only on mainstream physics and don't comment on alternative approaches; so here I only note that Time – and any Space dimension – fundamentally cannot be quantized.
That is another thing: in the fundamentally continuous and infinite Matter's fundamentally absolute, fundamentally flat, and fundamentally “Cartesian” [5]4D spacetime with metrics (cτ,X,Y,Z,ct) [note that only the utmost universal “kinematical” dimensions are pointed out; really the metrics is at least [4+4+1]4D], the ultimate base of Matter – the Matter's aether – the (universally [5]4D) dense lattice of [5]4D binary reversible fundamental logical elements [FLE] is placed; the FLE “sizes” in every one of the at least 9 dimensions above are equal to the Planck length,
- and, since everything in Matter exists and happens only as some disturbances in the lattice, everything happens as if “quantized” in space and time, since it happens in “FLE-by-FLE” steps.
Cheers
An essential addition to the SS post of March 7 above: since the fundamental Nature Gravity and Electric Forces [see the Shevchenko-Tokarevsky's 2007 initial models of these Forces in https://www.researchgate.net/publication/365437307_The_informational_model_-_Gravity_and_Electric_Forces],
- and the fundamental Nature Nuclear force that acts between nucleons in atomic nuclei [see the SS&VT 2023 initial model of this Force in the paper “The Informational Model — Nuclear Force” in
https://www.researchgate.net/publication/369357747_The_informational_model_-Nuclear_Force],
- all act in accordance with one general model by the same scheme; and while the Electric Force-marked FLEs [which relate to the electric charge of a particle] occupy the part, NE, of the whole FLE-chain logical length in a particle's algorithm, N0, with NE = √α·N0, i.e. ≈8.5% of N0 [that is the experimentally observable part; really the Electric-marked FLEs occupy in the proton the part (5/3)√α·N0], where α is the fine structure constant,
- at that, the Nuclear Force-marked FLEs occupy the rest of the FLEs in N0: in the proton the part (1 − (5/3)√α)·N0, and in the neutron (1 − (5/3)√α)·N0,
- so the fine structure constant essentially determines the Nuclear Force strength in nuclei as well; see the numeric check below.
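The quoted numbers follow directly from α; a trivial Python check:

alpha = 7.2973525693e-3
print(alpha**0.5)          # ~0.0854 -> NE is ~8.5% of N0
print(5/3 * alpha**0.5)    # ~0.142 -> ~14.2% of N0 in the proton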
Cheers
Sergei Shevchenko and others
The task that has developed and been posed within a given framework must be solved within that framework. Only then can the framework be expanded for the successful solution of new problems that arise. If this rule is not followed, the researcher gets the opportunity to wander endlessly in the vast space of apparent possibilities. These possibilities are almost always fruitless, although sometimes it seems that the result has almost been achieved.
The Law of Gravity is formulated within the framework of Classical Mechanics. If the task set (understanding the nature of gravity) has not been solved, success should be sought in this direction. The solution lies in understanding the essence of the gravitational constant G, which has a classical dimension. Until the researcher has an understanding of the nature of gravity within the framework of Classical Mechanics, his efforts will be in vain. By involving electrical, magnetic, quantum and other components, a clear end result cannot be obtained. Apart from the salary, of course. Bernhard Riemann would have solved the problem (he was on the right track) 200 years ago if he had not enlisted Zoroastrianism as an assistant. Within the framework of the classics, the problem was later solved and published. The results open up a wide range of possibilities. Take an interest, if you have the desire; I do not want to give hints. These searches may also prove interesting.
Vladimir A. Lebedev Vladimir, forgive me, you're very wrong, and here's why:
"Probleme kann man niemals mit derselben Denkweise lösen, durch die sie entstanden sind." (Albert Einstein)
Christian G. Wolf
You do not see the difference between the quality of thinking (Einstein's aphorism is about this) and the given conditions of the problem (I wrote about this). Besides, Einstein was not a very good philosopher, otherwise he would not have supported the special theory of relativity. That is his difference from Poincaré.
“…The Law of Gravity is formulated within the framework of Classical Mechanics. If the task set (understanding the nature of gravity) has not been solved, success should be sought in this direction. The solution lies in understanding the essence of the gravitational constant G, which has a classical dimension. Until the researcher has an understanding of the nature of gravity within the framework of Classical Mechanics, his efforts will be in vain. By involving electrical, magnetic, quantum and other components, a clear end result cannot be obtained.…”, etc.
- the Gravity Force is a fundamental Nature force, and so a real explanation of what the Gravity Force is can exist only on a fundamental level, which is principally impossible in mainstream physics; for a rational consideration of any problem on this level it is evidently necessary to understand what the fundamental phenomena/notions are, first of all in this case “Matter”, “Consciousness”, “Space”, “Time”, “Energy”, “Information”,
- which in the mainstream are fundamentally completely transcendent/uncertain/irrational, and so, say, everything in Matter, i.e. the fundamental Forces, “particles”, “fields”, etc., is quite logically and inevitably completely transcendent/uncertain/irrational as well,
- so all that mainstream physics could/can do is study Matter experimentally and, further, if in some material systems some logical links in/between material objects/events/processes are experimentally discovered, formulate some mathematical constructions – “physical theories” – that are based on interpretations of the experimental data, really completely ad hoc postulated as Matter's laws, aimed only at fitting the theories to experiments.
Correspondingly, all theories – “classical”, “relativities”, and “QM/QFT” – are based very essentially not only on ad hoc but also on fundamentally wrong postulates; as, say, the really non-existent transformations of space/time/spacetime that are postulated in SR/GR, the postulated “virtual” fields, particles, and Force mediators in the Forces' theories, etc.
Again, really any really fundamental problem can be rationally and scientifically considered/explored only provided that the fundamental phenomena/notions above are really scientifically defined, which is possible, and is done, only in the framework of the 2007 Shevchenko-Tokarevsky's “The Information as Absolute” conception; for the recent version of the basic paper see
https://www.researchgate.net/publication/363645560_The_Information_as_Absolute_-_2022_ed
- and in framework of the SS&VT corresponding informational physical model
https://www.researchgate.net/publication/354418793_The_Informational_Conception_and_the_Base_of_Physics
that is based on the conception,
- including that in the model, on the fundamental level, the problems of what the fundamental Nature Gravity, Electric, and Nuclear forces are – including what the Forces' strength constants “G”, “e”, “α”, gN are, and why they are as they are – are practically for sure solved; for the links to the papers see the SS post above on page 9.
These are only initial – but basic – models of the Forces; however, only on the basis of these models can the more complicated Forces' theories be developed, which will really scientifically describe what happens in Matter on levels/scales less fundamental than the ultimately fundamental Planck scale.
Cheers
Sergey Shevchenko
Very wordy and has nothing to do with what was said about the framework of classical physics (or mechanics). It makes no sense to argue, because this problem has been solved within the given framework.
B. Riemann began to solve it with the right approach, but he did not finish the solution. The reason: he went beyond the classics. But now this classical problem has been solved and published. Do not engage in idle talk, but find the solution (it is here on ResearchGate) or “discover” the gravitational constant; even a student can do this if he has something in his head.
Since the thread
https://www.researchgate.net/post/Is_the_electron_mass_strictly_of_electromagnetic_electrodynamic_origin/349 , where this thread's question is also answered in SS posts, has lately been too vividly spammed by too evident trash, it looks worthwhile to remind here that the question “what is the physical origin of the fine structure constant and the definition of elemental charge?” is answered in this thread already - in the Shevchenko-Tokarevsky's 2007 initial model of Gravity and Electric Forces in
https://www.researchgate.net/publication/366536729_The_informational_model_-Gravity_and_Electric_Forces_Version_2
[or see the section 6. “Mediation of the fundamental forces in complex systems” in https://www.researchgate.net/publication/354418793_The_Informational_Conception_and_the_Base_of_Physics ], if briefly:
- particles are [as everything in Matter is] some specific disturbances in the ultimate base of Matter – primary elementary logical structures – (at least) [4+4+1]4D binary reversible fundamental logical elements [FLE], which compose the (at least) [4+4+1]4D dense lattice that is placed in the Matter's fundamentally absolute, fundamentally flat, and fundamentally “Cartesian” (at least) [4+4+1]4D spacetime with metrics (at least) (cτ,X,Y,Z,g,w,e,s,ct); the FLE “size” and “FLE binary flip time interval” are equal to the Planck length, lP, and the Planck time, tP;
- the disturbances “particles” are some closed-loop algorithms that cyclically and constantly run with frequency ω = E/ħ; if a particle having rest mass is at rest in the absolute 3D XYZ space, ω = m0c²/ħ; the “spatial length” of the algorithm is equal to c/ω = ħ/m0c = λ, the particle's Compton length, and the “logical length”, i.e. the number of FLEs in the algorithm, N0, is equal to N0 = λ/lP.
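As a numeric illustration of these definitions, for the electron (a small Python sketch; the constants are typed in by hand, and λ here is the reduced Compton length ħ/m0c, as in the formula above):

hbar = 1.054571817e-34    # J*s
c    = 299792458.0        # m/s
m_e  = 9.1093837015e-31   # electron mass, kg
l_P  = 1.616255e-35       # Planck length, m

omega = m_e * c**2 / hbar   # ~7.76e20 rad/s
lam   = hbar / (m_e * c)    # ~3.86e-13 m
N0    = lam / l_P           # ~2.4e22 FLEs in the algorithm
print(omega, lam, N0)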
The “charge of a fundamental Nature force” is written in the corresponding particle's algorithm in parts of N0, NF, where in the part NF the FLEs have the corresponding Force “mark”. When an algorithm runs, the FLEs in the NFs cause specific disturbances that propagate in the lattice, the FLEs in which are also marked by the Forces – the Forces' mediators, which are observed on the macroscale as the “Forces' fields”. Besides, the strength of a Force – and so a “charge” – can also be determined by the frequency ω.
At that
- every particle's algorithm has only 1 fixed G-marked FLE, NG = 1, so the gravitational charge, “gravitational mass”, is proportional to ω; say, the proton's Gravity is ~2000 times larger than the electron's Gravity;
- in electrically charged particles NE is relative, and so [practically] all particles have the same charge independently of ω, while NE = √α·N0, where α is just the fine structure constant, so that α = e²/4πε₀ħc;
- the part (N0 − NE) in the nucleons' algorithms in a nucleus is marked by the Nuclear Force, and so the strength of this Force is also relative, and is ~(1 − √α)/√α ≈ 10 times larger than the Electric Force strength; see the numerical check below.
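The ~10 in the last item checks out numerically:

alpha = 7.2973525693e-3
print((1 - alpha**0.5) / alpha**0.5)   # ~10.7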
For more, see the papers that are linked above; reading the SS posts in https://www.researchgate.net/post/No9_Is_the_spin_of_an_electron_really_spin/3 and
https://www.researchgate.net/post/H_denotes_the_constant_ratio_E_f_Ehf_is_it_possible_that_h_has_an_equation_both_without_E_and_f/2
is useful as well.
Cheers
Electric charge is a function of the moment of mass.
e = sqrt(α*10^7*mp*lp) = sqrt(α*10^7*me*λ) = sqrt(α*10^7*mpr*λpr) = sqrt(α*10^7*mn*λn) = sqrt (α*10^7*mτ*λτ) = sqrt (α*10^7*mµ*λµ),
where α is the fine structure constant; mp, lp are the Planck mass and length; me, mpr, mn, mτ, mµ are the masses of the electron, proton, neutron, tau, and muon; λ, λpr, λn, λτ, λµ are the reduced Compton wavelengths (over 2π) of the electron, proton, neutron, tau, and muon.
See article:
ELECTRIC CHARGE AS A FUNCTION OF THE MOMENT OF MASS. GRAVITATIONAL FORM OF COULOMB'S LAW.
https://hal.science/hal-01374611
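The particle-independence of all these products is immediate: for any particle the reduced Compton wavelength is λ = ħ/(mc), so m·λ = ħ/c throughout, and mp·lp = ħ/c as well. Assuming the article's 10^7 factor stands for 1/(µ0/4π) of the pre-2019 SI, a short Python check reproduces e:

import math
alpha = 7.2973525693e-3
hbar  = 1.054571817e-34   # J*s
c     = 299792458.0       # m/s

# m * lambda_bar = hbar/c for every particle, so a single check suffices:
print(math.sqrt(alpha * 1e7 * hbar / c))   # ~1.602e-19 C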
Elemental charges of [sub-]particles are NOT constant. They depend upon their mereology.
From a quantum gravity viewpoint: the ratio of the quantum gravitational energy density to the energy density of relic gravitons.