From what I understand, the Standard Model gauge symmetry group U(1) × SU(2)_L × SU(3) was originally pieced together from experimental observations.
What theories do we have that try to explain the origin of this symmetry group? If there are no such theories, how should we go about trying to find one that explains the Standard Model?
Two possibilities I can think of are:
1. Matter might be topological in nature and so perhaps the Standard Model symmetries emerge from the underlying topology of particles;
2. The symmetries of the Standard Model somehow drop out from space-time symmetries.
With regard to the second possibility, it seems that 'no-go' theorems (e.g. the Coleman–Mandula theorem) say that space-time and internal symmetries can only be combined in a trivial way (as a direct product). Are there (sensible) ways around these theorems?
I understand Niels' perplexity: he posed a physical question concerning the internal symmetries of the Standard Model. It is the custom today to steer every discussion of physics towards the origin of the universe, which, like the fate of the universe, is a metaphysical or religious question rather than a physical one. I have nothing against such discussions, but it must be clear that those questions do not concern physics; they concern metaphysics or religion. A distinction is appropriate: the object of physics is not to reach absolute truth, but to understand, within possible limits, the behaviour of nature with respect to observers.
Niels, thank you for the question. The simple answer would be: YES. But we do not know the fundamental origins of all of those symmetries. For example, the quark symmetries explain the hadron spectrum perfectly, but what are quarks really? The deeper questions are then: what is the "stuff" particles are really made of? Can space and time exist without matter? Topological electromagnetism has been my bet since 1991... My papers explain this a bit better and can, of course, be downloaded from ResearchGate!
Dear Stam, Mach would strongly disagree with you, and so would Hermann Weyl. To them spacetime is not merely the stage on which the particles act. There is an intimate relation between the two; in fact, one may see the totality of all interactions in the universe as the basis for space and time.
Internal symmetries certainly represent the most controversial aspect of the Standard Model. They derive from ad hoc conservation laws that are valid only for elementary particles and not for general physical systems. They raise many problems, for example the question of the decay of the muon and of the mesons. The two solutions that have been proposed are commendable, but they do not seem suitable for resolving the question, in particular with regard to the symmetries of space-time. Nor does the underlying topology of particles seem able to resolve a question that is basically physical, not geometrical-mathematical. We have to accept that physics is not a branch of mathematics, even if mathematics is often able to provide physics with important and useful models. A suitable topological model combined with a physical solution could therefore give a more satisfactory answer. The Non-Standard Model has identified in the concept of electrodynamic mass the physical solution for the behaviour of elementary particles. The "Theorem of Spin and Charge" and the "Principle of Decay", together with the concept of electrodynamic mass, represent other important aspects of the Non-Standard Model. In this view it would be interesting to investigate prospective topological models of elementary particles, considering that space and time represent the stage on which physical events happen.
I observe that the Non-Standard Model raises some interest, and we can find a way to proceed in this direction. I would like to submit the "Theorem of Spin and Charge" for your attention. In the Standard Model the characteristic values of spin are: 1 (for photons), 0 (for mesons) and 1/2 (for fermions), neglecting prospective gravitons. The Standard Model is unable to establish a relation between charge and spin, and is unable to give a convincing physical explanation for the value 1 for photons and, above all, for the value 0 for mesons. Those values were chosen in order to make the arithmetic come out. The Non-Standard Model instead proves the existence of a mathematical relation between charge and spin, which for free elementary particles is qs = ħQ/2e, where Q is the particle's charge and e is the electron's charge.
According to this relation, photons and all electromagnetic particles with zero electric charge have spin 0; fermions have spin +1/2 if the charge is +1 and spin -1/2 if the charge is -1. It is clear that in the NSM mesons behave like other fermions and therefore have spin +1/2 or -1/2. In the Non-Standard Model the meson anomaly is thus overcome, and above all a valid physical explanation and correlation are given for the values of charge and spin of particles.
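For concreteness, the rule claimed above can be tabulated in a few lines. To be clear, this only restates the author's proposed "Theorem of Spin and Charge" (qs = ħQ/2e), not established physics; the function name and unit convention are mine.

```python
HBAR = 1.0  # work in units of hbar, so spin comes out as a plain number

def nsm_spin(charge_in_units_of_e: int) -> float:
    """Spin assigned by the claimed NSM relation q_s = hbar*Q/(2e),
    i.e. s = Q/2 in units of hbar, for charge Q given in units of e."""
    return HBAR * charge_in_units_of_e / 2

# Claimed values: neutral particles -> 0, charge +1 -> +1/2, charge -1 -> -1/2
for q in (-1, 0, +1):
    print(q, nsm_spin(q))
```

This makes explicit that the rule assigns spin 0 to every neutral particle (including the photon) and spin ±1/2 to singly charged ones, which is where it departs from the Standard Model assignments.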
Thank you Martin and Daniele for your answers
@Martin
I think you are right: the answer requires us to consider both what particles are made of and what they look like (their topology). Likewise I think topological electromagnetism may offer some answers. It seems capable of explaining the quantisation and value of charge, and as we know it is also possible to understand the electromagnetic force in terms of EM fields only.
Various other topological models also provide some promising results such as the works of Jehle and Finkelstein and more recently the Helon model of Bilson-Thompson.
@Daniele
I am not sure what you mean exactly when you say that the underlying topology of particles does not seem able to resolve the question, which is basically physical rather than geometrical-mathematical.
Investigating prospective topological models of elementary particles would indeed be interesting. One interesting question that arises is how we should think about space and time. It seems likely that we would need to go beyond Minkowski spacetime. The topological Helon model has been embedded within spin networks and loop quantum gravity. This raises the possibility that particles (and their interactions) may be emergent from the underlying spacetime.
Dear Niels,
I mean that I am talking about physics, which is fundamentally the understanding of reality, in which mathematical (including topological) models can be very useful. In the Theory of Reference Frames (TR) I have criticized both the Minkowski spacetime and the Einstein spacetime, which are both based on imaginary metrics of spacetime. In TR the vacuum (empty space) is the primordial reality, in which time has no meaning. Only the appearance of mass in the physical vacuum generated physical time, and this process is described in TR by the relation dt' = m dt/m'. In TR physical processes happen in the space-mass-time domain. It seems to me that in postmodern physics there is much confusion about fundamental physics.
Dear Daniele,
Thank you for your reply. Yes, indeed there seems to be confusion on the fundamental physics. I myself am often confused :)
I find it interesting that you mention that the appropriate domain for processes is space-mass-time. This reminds me of the 2006 work of Das and Kong. There they deform the Poincaré algebra to a higher-dimensional Lie algebra. In one such deformation, the extra dimension that appears has units of proper time over mass.
Yes Niels,
I am glad that other physicists have viewpoints like mine. I would nevertheless like to point out that I move in the direction of simplifying mathematical models, not the reverse.
The theoretical origin of the standard models in cosmology and particle physics derives from the ontology of the Big Bang creation event, that is, the brane epoch preceding (not following) the quantum Big Bang of the 'singularity'. There exists an inherent supersymmetry in this 'inflationary epoch', which generates a quantum geometry from the unification of a sub-energy plenum preceding the classical thermodynamic spacetime geometry of Einstein and Planck, in a preconfiguration or 'blueprinting' of the nature of space itself. A massless cosmology, without the matter-antimatter constituent, yet exhibits curvature due to the dimensional generation of the branespace, which can be said to be algorithmic or information-data based. The concept of frequency as inverse time then enabled this brane space to transform itself into the linear Minkowski metric, in the manifesto of the lower-dimensional expansion into the brane-defined hyperspace of the de Broglie matter-wave hyperacceleration.
The inherent quantum geometry from the branes so becomes quantum entangled in the lower D as the higher D, in a holographically connected and intersecting superspace, holofractally unified in the quantum geometry defined by prespacetime algorithms, and is coupled to a simplified string formalism as boundary and initial conditions in a de Sitter cosmology encompassing the classical Minkowski-Friedmann spacetimes holographically and fractally in the Schwarzschild metrics.
The magnetic field intensity B is classically described by the Biot-Savart law:
B = μoqv/4πr² = μoi/4πr = μoqω/4πr = μoNef/2r
for a charge count q = Ne; angular velocity ω = v/r = 2πf; current i = dq/dt; and current element i·dl = dq·(dl/dt) = v·dq.
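As a purely algebraic sanity check (saying nothing about the physical interpretation), the four quoted forms do agree once v = ωr, ω = 2πf and q = Ne, provided the 'current' in the second form is read as i = qω, which is the identification the chain of equalities implicitly uses. A minimal sketch with illustrative values:

```python
import math

# Check numerically that the four quoted Biot-Savart forms coincide,
# given v = omega*r, omega = 2*pi*f, q = N*e, and i = q*omega
# (the reading implied by the quoted chain).  Values are illustrative only.
mu0 = 4e-7 * math.pi        # vacuum permeability [T*m/A]
N = 5
e = 1.602176634e-19         # elementary charge [C]
q = N * e
r = 0.01                    # orbit radius [m]
f = 1e6                     # revolution frequency [Hz]
omega = 2 * math.pi * f     # angular velocity [rad/s]
v = omega * r               # tangential speed [m/s]
i = q * omega               # 'current' as used in the quoted equalities

B1 = mu0 * q * v / (4 * math.pi * r**2)
B2 = mu0 * i / (4 * math.pi * r)
B3 = mu0 * q * omega / (4 * math.pi * r)
B4 = mu0 * N * e * f / (2 * r)
print(B1, B2, B3, B4)  # all four values coincide
```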
The Maxwell constant can then be written as an (approximate) finestructure:
μoεo = 1/c² = (120π/c)(1/120πc), to crystallise the 'free-space impedance' Zo = √(μo/εo) = 120π ≈ 377 Ohm
(Ω). This vacuum resistance Zo then defines a 'Unified Action Law' in a coupling of the electric permittivity component (εo) of inertial mass and the magnetic permeability component (μo) of gravitational mass in the Equivalence Principle of General Relativity.
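The numerical value of the free-space impedance is easy to verify. In the conventional (pre-2019) SI convention μo = 4π×10⁻⁷ exactly, so Zo = μo·c ≈ 376.73 Ω, close to the rounded 120π ≈ 376.99 Ω used above:

```python
import math

# Impedance of free space Z0 = sqrt(mu0/eps0), using mu0*eps0 = 1/c^2.
mu0 = 4e-7 * math.pi       # vacuum permeability [H/m] (conventional value)
c = 299_792_458.0          # speed of light [m/s]
eps0 = 1 / (mu0 * c**2)    # vacuum permittivity from mu0*eps0 = 1/c^2
Z0 = math.sqrt(mu0 / eps0)

print(Z0)                  # ~376.730 Ohm
print(120 * math.pi)       # ~376.991 Ohm (the rounded value in the text)
```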
A unified selfstate of the preinertial (string or brane) cosmology is thus obtained from the finestructures for the electric and gravitational interactions, coupling a so-defined electropolic mass to a magnetopolic mass respectively. The Planck mass is given from Unity 1 = 2πGmP²/hc, and the Planck charge derives from Alpha = 2πke²/hc, where k = 1/4πεo in the electromagnetic finestructure describing the probability of interaction between matter and light (about 1/137).
The important aspect of alpha relates to the inertia coupling of Planck-Charge to Planck-Mass as all inertial masses are associated with Coulombic charges as inertial electropoles; whilst the stringed form of the Planck-Mass remains massless as gravitational mass. It is the acceleration of electropoles coupled to inertial mass, which produces electromagnetic radiation (EMR); whilst the analogy of accelerating magnetopoles coupled to gravitational mass and emitting electromagnetic monopolic radiation (EMMR) remains hitherto undefined in the standard models of both cosmology and particle physics.
But the coupling between electropoles and magnetopoles occurs as dimensional intersection, say between a flat Minkowskian spacetime in 4D and a curved de Sitter spacetime in 5D (and which becomes topologically extended in 6-dimensional Calabi-Yau tori and 7-dimensional Joyce manifolds in M-Theory).
The formal coupling results in the 'bounce' of the Planck-Length in the pre-Big Bang scenario, and which manifests in the de Broglie inflaton-instanton.
The Planck length LP = √(hG/2πc³) 'oscillates' in its Planck energy mP = h/λPc = h/2πcLP to give √Alpha·LP = e/c² in the coupling of 'Stoney units' (suppressing Planck's constant h) to the 'Planck units' (suppressing the charge quantum e).
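In ordinary SI units the Planck length expression above, and the 1/√Alpha ≈ 11.7 factor quoted just below, evaluate as follows. (The identification √Alpha·LP = e/c² belongs to the author's rescaled 'Go' unit system and is not checked here.)

```python
import math

# Planck length L_P = sqrt(h*G/(2*pi*c^3)) = sqrt(hbar*G/c^3), SI units.
h = 6.62607015e-34         # Planck constant [J*s]
G = 6.67430e-11            # Newtonian constant [m^3/(kg*s^2)]
c = 299_792_458.0          # speed of light [m/s]
alpha = 7.2973525693e-3    # fine-structure constant

L_P = math.sqrt(h * G / (2 * math.pi * c**3))
print(L_P)                   # ~1.616e-35 m
print(1 / math.sqrt(alpha))  # ~11.706, the '11.7' displacement factor
```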
Subsequently, the Planck length is 'displaced' by a factor of about 11.7 = 1/√Alpha = √(h/60π)/e, using the Maxwellian finestructures and the unity condition kG = 1 for a dimensionless string coupling Go = 4πεo, describing the 'Action Law' for the vacuum impedance as Action = Charge², say via dimensional analysis:
Zo = √([Js²/C²m]/[C²/Jm]) = [Js]/[C²] = [Action/Charge²] in Ohms [Ω = V/I = Js/C²], proportional to [h/e²] as the 'higher-dimensional source' for the manifesting superconductivity of the lower dimensions in the Quantum Hall Effect (~e²/h), the conductance quantum (2e²/h) and the Josephson frequencies (~2e/h).
This derivation so indicates an electromagnetic cosmology based on string parameters as preceding the introduction of inertial mass (in the quantum Big Bang) and defines an intrinsic curvature within the higher dimensional (de Sitter) universe based on gravitational mass equivalents and their superconductive monopolic current flows.
A massless, but monopolically electromagnetic de Sitter universe would exhibit intrinsic curvature in gravitational mass equivalence in its property of closure under an encompassing static Schwarzschild metric and a Gravitational String-Constant Go=1/k=1/30c (as given in the Maxwellian finestructures in the string space).
In other words, the Big Bang manifested the inertial parameters and the matter content for a subsequent cosmoevolution, in the transformation of gravitational 'curvature energy' (here called gravita, as precursor for inertia) into inertial mass seedlings; both, however, are describable in black-hole physics and the Schwarzschild metrics.
The Gravitational Finestructure is then derived by replacing the Planck mass mP with a protonucleonic mass:
mc = √(hc/2πGo)·f(Alpha) = f(Alpha)·mP, where f(Alpha) = Alpha⁹.
The Gravitational finestructure, here named Omega, is further described in a fivefolded supersymmetry of the string hierarchies, the latter as indicated.
This pentagonal supersymmetry can be expressed in a number of ways, say in a one-to-one mapping of the Alpha finestructure constant as invariant X from the Euler Identity:
X + Y = XY = -1 = i² = exp(iπ).
One can write a Unification Polynomial: (1-X)(X)(1+X)(2+X) = 1, or X⁴ + 2X³ - X² - 2X + 1 = 0,
to find the coupling ratios: f(S)|f(E)|f(W)|f(G) = #|#³|#¹⁸|#⁵⁴ from the proportionality
#|#³|{[(#³)²]}³|({[(#³)²]}³)³ = Cuberoot(Alpha) : Alpha : Cuberoot(Omega) : Omega.
The Unification Polynomial then sets the ratios in the inversion properties under modular duality:
(1)[Strong short] | (X)[Electromagnetic long] | (X²)[Weak short] | (X³)[Gravitational long]
as 1|X|X²|X³ = (1-X)|(X)|(1+X)|(2+X).
Unity 1 maps as (1-X), transforming as f(S) in the equality (1-X) = X²; X maps as the invariant of f(E) in the equality (X) = (X); X² maps as (1+X), transforming as f(W) in the equality (1+X) = 1/X; and X³ maps as (2+X), transforming as f(G) in the equality (2+X) = 1/X² = 1/(1-X).
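As a purely algebraic check, independent of any physical interpretation: the quartic above factors as (X² + X - 1)², so its roots are the golden-section numbers X = (-1 ± √5)/2, each with multiplicity two, and the inversion identities in the mapping hold for any root of X² + X - 1 = 0 (i.e. any X with X² = 1 - X). A short verification:

```python
import math

# The 'Unification Polynomial' X^4 + 2X^3 - X^2 - 2X + 1 = (X^2 + X - 1)^2,
# so its roots satisfy X^2 = 1 - X.  For such X the quoted identities hold:
#   (1 - X) = X^2,   X*(1 + X) = 1,   X^2*(2 + X) = 1.
X = (math.sqrt(5) - 1) / 2  # the positive root, ~0.618 (golden section)

poly = X**4 + 2*X**3 - X**2 - 2*X + 1
print(abs(poly) < 1e-12)              # X is a root of the quartic
print(abs((1 - X) - X**2) < 1e-12)    # (1-X) = X^2
print(abs(X * (1 + X) - 1) < 1e-12)   # (1+X) = 1/X
print(abs(X**2 * (2 + X) - 1) < 1e-12)  # (2+X) = 1/X^2
```

This also makes precise the golden-mean connection invoked at the end of the post.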
The mathematical pentagonal supersymmetry from the above then indicates the physicalised T-duality of M-theory in the principle of mirror-symmetry and which manifests in the reflection properties of the heterotic string classes HO(32) and HE(64), described further in the following.
Defining f(S) = # = 1/f(G) and f(E) = #²·f(S) then describes a symmetry breaking between the 'strong S' interaction f(S) and the 'electromagnetic E' interaction f(E) under the unification couplings.
This couples under modular duality to f(S)·f(G) = 1 = #⁵⁵ in a factor #⁻⁵³ = f(S)/f(G) = {f(S)}² of the 'broken' symmetry between the long-range and short-range interactions.
SEWG=1=Strong-Electromagnetic-Weak-Gravitational as the unified supersymmetric identity then decouples in the manifestation of string-classes in the de Broglie 'matter wave' epoch termed inflation and preceding the Big Bang, the latter manifesting at Weyl-Time as a string-transformed Planck-Time as the heterotic HE(64) class.
As SEWG indicates the Planck string (class I, which is both open-ended and closed), the first transformation becomes the suppression of the nuclear interactions sEwG, describing the selfdual monopole (string class IIB, which is loop-closed in Dirichlet brane attachment across dimensions, say Kaluza-Klein R5 to Minkowski R4, or membrane space R11 to string space R10).
The monopole class so 'unifies' E with G via the gravitational finestructure, assuming not a Weylian fermionic nucleon but the bosonic monopole from the kGo = 1 initial-boundary condition GmM² = ke² for mM = ke = 30[ec] = mP√Alpha.
The Planck-monopole coupling so becomes mP/mM = mP/30[ec] = 1/√Alpha, with f(S) = f(E)/#² modulating f(G) = #²/f(E) = 1/# ↔ f(G){f(S)/f(G)} = # in the symmetry breaking f(S)/f(G) = 1/#⁵³ between short-range (nuclear, asymptotic) and long-range (inverse-square) interactions.
The short-range coupling becomes f(S)/f(W) = #/#¹⁸ = 1/#¹⁷ = Cuberoot(Alpha)/Alpha⁶, and the long-range coupling is Alpha/Omega = 1/Alpha¹⁷ = #³/#⁵⁴ = 1/#⁵¹ = 1/(#¹⁷)³.
The strong nuclear interaction coupling parameter so becomes about 0.2 as the cuberoot of alpha and as measured in the standard model of particle physics.
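The numerical part of that claim is easy to check; bear in mind, though, that the QCD coupling runs with energy scale, so "about 0.2" is only a rough low-energy ballpark, not a single measured constant:

```python
# Cube root of the fine-structure constant, compared with the ~0.2
# ballpark quoted for the strong coupling (order-of-magnitude check only).
alpha = 7.2973525693e-3    # fine-structure constant
print(alpha ** (1 / 3))    # ~0.194
```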
The monopole quasimass [ec] describes a monopolic source current ef, manifesting for a displacement λ = c/f. This is of course the GUT unification energy of the Dirac monopole at precisely [c³] eV, or 2.7×10¹⁶ GeV, and the upper limit for the cosmic-ray spectra as the physical manifestation of the string classes: {I, IIB, HO(32), IIA and HE(64), in order of modular duality transmutation}.
The transformation of the monopole string into the XL-boson string decouples gravity from sEwG in sEw.G in the heterotic superstring class HO(32). As this heterotic class is modular dual to the other heterotic class HE(64), it is here that the protonucleon mass is defined in the modular duality of the heterosis: Omega = Alpha¹⁸ = 2πGomc²/hc = (mc/mP)².
The HO(32) string bifurcates into a quarkian X-part and a leptonic L-part, so rendering the bosonic scalar spin as fermionic half-spin in the continuation of the 'breaking' of the supersymmetry of the Planckian unification. Its heterosis with the Weyl string then decouples the strong interaction at Weyl time for a Weyl mass mW, meaning at the time instanton of the end of inflation, or the Big Bang, in sEw.G becoming s.Ew.G.
The X-boson then transforms into a fermionic protonucleon tri-quark component (of energy ~10⁻²⁷ kg, or about 560 MeV), and the L-boson transforms into the protomuon (of energy about 111 MeV).
The last 'electroweak' decoupling then occurs at the Fermi expectation energy, about 1/365 seconds after the Big Bang, at a temperature of about 3.4×10¹⁵ K and a 'Higgs boson' energy of about 298 GeV.
A bosonic decoupling preceded the electroweak decoupling, about 2 nanoseconds into the cosmogenesis, at the Weyl temperature TWeyl = Tmax = EWeyl/k = 1.4×10²⁰ K, the maximum black-hole temperature maximised in the Hawking MT modulus and the Hawking-Gibbons formulation:
McriticalTmin = MPlanckTPlanck = (hc/2πGo)(c²/2k) = hc³/4πkGo, for Tmin = 1.4×10⁻²⁹ K and Boltzmann constant k.
The XL-boson mass is given in the quark component mX = #³mW/[ec] = Alpha·mW/mP = #³{mW/mP} ~ 1.9×10¹⁵ GeV, and the lepton component mL = Omega·[ec]/#² = #⁵²[ec/mW] ~ 111 MeV.
All inertial objects are massless as 'Strominger branes' or extremal boundary black-hole equivalents, and as such obey the static and basic Schwarzschild metric as gravita template for inertia. Once inertialised, the Kerr-Newman solutions described in the literature become applicable.
This also crystallises the Sarkar black-hole boundary as the 100 Mpc limit (RSarkar = (Mo/Mcritical)·RHubble = 0.028·RHubble ~ 237 million lightyears) for the cosmological principle, describing large-scale homogeneity and isotropy, on the supercluster scale as the direct 'descendants' of Daughter Black Holes from the Universal Mother Black Hole, describing the Hubble horizon as the de Sitter envelope for the Friedmann cosmology (see linked website references on de Sitter cosmology) for the oscillatory universe bounded in the Hubble nodes as a standing waveform.
The Biot-Savart law B = μoqv/4πr² = μoi/4πr = μoNef/2r = μoNeω/4πr, for angular velocity ω = v/r, transforms into B = constant·(e/c³)·g×ω
using the centripetal acceleration a = v²/r = rω² for g = GM/r² = (2GM/c²)(c²/2r²) = RSc²/2R², with the Schwarzschild radius RS = 2GM/c².
B = constant·(eω/rc)(v/c)² = μoNeω/4πr yields constant = μoNc/4π = 120πN/4π = 30N, with e = mM/30c, for
30N(eω/c³)(GM/R²) = 30N(mM/30c)ω(2GM/c²)/(2cR²) = NmM(ω/2c²R)(RS/R) = {M}ω/2c²R.
Subsequently, B = Mω/2c²R = NmM(RS/R){ω/2c²R}, giving a manifesting mass M finestructured in M = NmM(RS/R) for N = 2n in the superconductive 'Cooper pairings' for a charge count q = Ne = 2ne.
But any mass M has a Schwarzschild radius RS for N = (M/mM){R/RS} = (M/mM){Rc²/2GM} = {Rc²/2GmM} = {R/RM}, for a monopolic Schwarzschild radius RM = 2GmM/c² = 2G(30ec)/c² = 60ec/30c³ = 2e/c² = 2LP√Alpha = 2OLP.
Any mass M is quantised in the monopole mass mM = mP√Alpha in its Schwarzschild metric, where the characterising monopolic Schwarzschild radius represents the minimum metric displacement scale as the oscillation of the Planck length in the form 2LP√Alpha ~ LP/5.85.
This relates directly to the manifestation of the magnetopole in the lower dimensions, say in Minkowski spacetime in the coupling of inertia to Coulombic charges, that is the electropole and resulting in the creation of the mass-associated electromagnetic fields bounded in the c-invariance.
From the Planck-length oscillation or 'LP-bounce' OLP = LP√Alpha = e/c² in the higher (collapsed or enfolded) string dimensions, the electropole e = OLP·c² maps the magnetopole e* = 2Re·c² as 'inverse source energy' EWeyl = hfWeyl, as a function of the classical electron radius Re = ke²/mec² = RCompton·Alpha = RBohr·Alpha² = Alpha³/4πRRydberg = 10¹⁰{2πRW/360} = {e*/2e}·OLP.
The resulting reflection-mirror space of the M-Membrane space (in 11D) so manifests the 'higher D' magnetocharge 'e*' AS INERTIAL MASS in the monopolic current [ec], that is the electropolic Coulomb charge 'e'.
This M-space becomes then mathematically formulated in the gauge symmetry of the algebraic Lie group E8 and which generates the inertial parameters of the classical Big Bang in the Weylian limits and as the final Planck-String transformation.
The string-parametric Biot-Savart law then relates the angular momentum of any inertial object of mass M and angular velocity ω, self-inducing a magnetic flux intensity given by B = Mω/2Rc², where the magnetic flux relates inversely to the displacement R from the centre of rotation, as a leading-term approximation for applicable perturbation series.
This relates the inherent pentagonal supersymmetry in the cosmogenesis to the definition of the Euler identity in its finestructure X+Y=XY=-1, and a resulting quadratic with roots the Golden Mean and the Golden Ratio of the ancient omniscience of harmonics, inclusive of the five Platonic solids mapping the five superstring classes. Foundations and applications of superstring theory are also indicated in the below and serve as reference for the above.
https://www.researchgate.net/publication/282073922_Physical_Consciousness_coupled_to_the_Biomind_of_Universal_Life
Dear Tony,
To be honest, I do not see the significance of your (very long) post to my question. I read the first paragraph and was left confused.
Dear Niels;
Your question was about where the original symmetries of the standard models derive from. I answered by saying that those symmetries relate to an inherent quantum geometry within the conceptualization of space itself. This quantum geometry can then be modeled on superstrings and their generation from the dimensional mathematical foundations descriptive of the observed and measured properties of spacetime, following (not preceding) the standard models of Big Bang cosmology.
I am sorry you cannot see any relevance to your question in my reply.
The SM is conceived as a low-energy effective theory and does not explain its own structure. It looks almost unavoidable that a better theory is needed, and there are many contenders for such a framework. Having no experimental input, one can only speculate which one is the "true" one. To the best of my knowledge none of these theories offers an explanation for the internal SM symmetries. So you'll have to wait :)
Dear Ehud, I do agree with you, except for the waiting. We had better get on with finding a reason for this underlying structure; it has been a while since the latest real successes, and some of the problems do not seem to want to disappear...
I see an anonymous researcher downvoted my last comment. I should like to know what is wrong with my comment from his viewpoint, in order to have a useful discussion.
With regard to the Standard Model, it is well known that for its endorsers that model is perfect and complete. The SM has internal symmetries and structures: for example the isotopic symmetry, the structure of spin values for different particles, the anomalous properties of mesons, etc. The question is to establish whether it is acceptable. On this account I proposed the Non-Standard Model, in which first of all there is a neat distinction between massive electrodynamic particles and energetic electromagnetic particles. For massive electrodynamic particles the NSM defines a new physical property: electrodynamic mass. A very important characteristic of electrodynamic mass is that it changes with speed according to the relation m = mo(1 - v²/2c²). From that relation one can see that electrodynamic mass decreases with speed, unlike in SR, where mass increases with speed. I know other researchers propose different solutions, but it would be worthwhile to find common aspects among the alternative solutions.
Dear Martin,
What can we do instead of waiting? I, along with many hundreds of physicists, work on analyzing the LHC data, searching for hints of physics beyond the SM. Unfortunately, no significant deviation from the SM predictions has been detected. So we refine things and probe deeper.
Theorists continue to develop various models, but I suspect that without solid new experimental results they will have a hard time focusing on the right direction.
Daniele,
I'll risk down-voting by supporting most of what you said in your first post. But I think that the object of physics is to get to the bottom of things, namely the "truth". Can we do it? Only time will tell.
Dear Ehud,
I agree with you fundamentally, and thank you for supporting most of what I said in my first post. I would nevertheless like to clarify my viewpoint on the truth.
I think physicists are able to reach the "physical truth", which is based on a proven scientific method. I think a "metaphysical or absolute truth" also exists, and this truth is impenetrable to physicists only because it is impossible to apply the scientific method to it. I refer basically to two questions: the origin of the universe and the fate of the universe. In the Standard Model the Big Bang theory is accepted in order to explain the origin of the universe, and in that model it would be verified by the discovery of the Higgs boson. It is evident that the particle called the Higgs boson is only a high-mass, high-energy, strongly unstable particle, like so many particles known or unknown. Certainly many other particles, more significant than that one, will be discovered in the future. The Big Bang theory is only a theory, like so many theories. At CERN physicists say they are making experiments that simulate the state of the universe an instant after the Big Bang, but those experiments are just a virtual simulation and not physical reality. I believe in real experiments, and I do not believe in experiments based on simulations, animations and thought experiments, which can be useful in engineering only at the planning stage. The Standard Model is only a theory, and, inter alia, little verified in the laboratory. I would now like to raise a problem that I already raised in another question (for details see my profile):
What is the relative speed between two particle beams that move in opposite directions, each with a speed of 250,000 km/s with respect to the reference frame of the accelerator?
It does not seem to me that this speed has ever been measured; for its determination physicists still make use of a theoretical formula.
Hi Daniele,
I think you are a bit misinformed. Experiments at CERN are real ones and the LHC recreates the state of the universe some 10^-12 sec (or whatever) after the Big Bang. Obviously there are differences between such an experiment and the real thing (we do not recreate the Big Bang) but these differences are taken into account. Moreover, we tend to distrust the simulations that try to compute the outcome of LHC collisions and deduce the background in data driven methods.
Animations are used only when trying to explain the science to laymen.
There is also a confusion, as we use the term "standard model" (SM) both for particles and their mutual interactions and for the evolution of the universe since the Big Bang. These are two different theories. The former is absolutely independent of the latter, and the latter partially depends on the veracity of the former.
The SM of particles is a very well tested theory. Hundreds of its predictions were confirmed by real experiments in LEP, SLC, Tevatron, and now at the LHC. The discovery of the Higgs boson corroborates a major prediction of this theory.
The cosmological SM is based on observations and not on experiments. It is also a very successful theory that, no doubt, catches a large chunk of truth.
Being scientific theories, both are susceptible to change and to replacement by better theories. In the SM of particles we have a long list of arguments that indicate that this is an effective low-energy theory and will be replaced in the near future by a better one.
It is clear that the SM of particles cannot "understand" the creation of the universe. Attempts to unify it with Gravity allow one to theoretically study the universe at a time much smaller than that, and **speculate** about the creation. Only time will tell if eventually the creation can be explained by scientific methods.
Up to here we were discussing the border line between science and philosophy, and I tried to answer with logic. But then I saw your statement concerning two objects moving at ~80%c in opposite directions. Sorry for being so blunt, but I wondered: are you a physicist?
The battle about the veracity of relativity was over about 100 years ago. Good old Einstein got it right. Consequently Special Relativity lies at the bottom of both SMs that I discussed above. So the answer to your weird question is no: this relative speed was never measured directly, since such an experiment is hard to make. However, SR has been tested so many times that there is no way of altering the "theoretical formula" without destroying hundreds of calculations that have been corroborated by experiments. We don't need such an experiment, as we can deduce from other measurements that the outcome will be in agreement with SR.
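For completeness, the "theoretical formula" in question is the special-relativistic composition of velocities. For two beams each at 250,000 km/s in opposite directions it gives a closing speed below c:

```python
# Relativistic velocity addition for two beams, each moving at
# u = 250,000 km/s in opposite directions in the accelerator frame.
# SR gives w = 2u / (1 + u^2/c^2), which always stays below c.
c = 299_792.458        # speed of light [km/s]
u = 250_000.0          # speed of each beam [km/s]

w_galilean = 2 * u                 # naive Galilean sum: 500,000 km/s (> c)
w_sr = 2 * u / (1 + u**2 / c**2)   # special-relativistic result
print(w_sr)                        # just under 295,000 km/s, i.e. < c
```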
Dear Ehud,
Perhaps I am misinformed, but you are certainly confusing real experiments with simulated experiments. The confusion is in your words:
"Experiments at CERN are real ones and the LHC recreates the state of the universe some 10^-12 sec (or whatever) after the Big Bang. Obviously there are differences between such an experiment and the real thing (we do not recreate the Big Bang) but these differences are taken into account. Moreover, we tend to distrust the simulations that try to compute the outcome of LHC collisions and deduce the background in data driven methods".
Yes Ehud, I am a doctor in Electronics Engineering and not a doctor in Physics, but most of contemporary physics regards the physical behavior of the electron, and perhaps about that behavior I understand more than a physicist. For the rest, your comment is a dutiful defence of the theories of postmodern physics, in which I see a chink of light when you claim: "In the SM of particles we have a long list of arguments that indicate that this is an effective low-energy theory and will be replaced in the near future by a better one". For me the future is already now.
Warmest greetings.
Niels, as you suggested, the SM of particles is an empirical theory because it is based on the results of observations and experiments. The SM is not based on the underlying structure of particles; such a structure is needed before any theory of particles can have a theoretical basis. Deriving a particle is a mechanical problem as much as it is a mathematical one, if not more. Because of this, expertise from multiple disciplines seems to be required; with collaboration, we should be able to determine the fundamental make-up of mass, energy, and matter.
Without some kind of initial understanding of how a particle is constructed and how it works, it seems rather impossible to correctly interpret data from experiments and tests (the double slit or the LHC, for example). The collected data are currently interpreted in terms of a particle usually thought of as a point, a sphere, or a transformation from a wave. However, these configurations provide no information about the particle's make-up.
Before mass, energy, and matter can be profoundly understood, a mechanical model that is consistent with a mathematical model of a particle (an electron, for example) needs to be established. Enough experiments and tests have already been made to allow for this accomplishment. Let us look at a practical example to see how this may be done.
When Maxwell developed his work, he could see both the macroscopic mechanisms and the data resulting from these mechanisms. All of this was provided by Faraday.
In the case of the electron, only the data resulting from its mechanism can be measured; we do not know what the mechanism is because it cannot be viewed directly. Thus, in order to derive the electron, it has to be fitted with a mechanical model underlain by mathematical formulations such that the data (properties) given by the electron are consistent with measured values and observations. The underlying mathematical formulations can be derived using the work of Coulomb, Maxwell, and Lorentz, and can be shown to fit the mechanical model. From this mechanical model, it can then be seen how the properties of the electron arise mechanically as well as mathematically.
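For reference, the classical formulations credited above to Coulomb and Lorentz are, in their standard textbook form (stated here as general results, not as the specific fitting used in the linked derivation):

```latex
% Coulomb's law and the Lorentz force law (SI units)
\[
  F \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2},
  \qquad
  \mathbf{F} \;=\; q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)
\]
% together with Maxwell's equations, which govern the fields E and B.
```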
Unfortunately, there is no other way to do this for particles. To view this procedure for the electron, please see the link.
A particle was found to be a tiny cylindrical B-field. Each half of the B-field oscillates inward and outward along the longitudinal axis of the cylindrical field; during an inward oscillation movement, each half-field compresses and during an outward oscillation movement, each half-field decompresses. Hence, a particle can be viewed as a standing wave due to the oscillations and the tiny field can also be viewed as a particle when measured and observed.
Just briefly, as shown in the link, a particle such as an electron is a structural formation of photon fibers (resulting in the above-mentioned B-field) that can develop in an environment that has a dense mixture of them (for example, in laser/particle interactions). The interactions that enable this formation result from the electric and magnetic properties of photons, as given in Maxwell’s wave equations. The photon fibers can only merge into a group (not sum into single units) because they have perpendicular elements. Individually, photons are massless; however, in a group, such as the electron, the photon fibers are interacting with each other. Due to this, a measurable force is required to accelerate the group. The required force to cause a given acceleration is proportional to the sum of the energies of the photon fibers in the group. The proportionality constant “m” results from this.
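Whatever one makes of the fiber picture itself, the stated proportionality is at least consistent with standard relativity, where a bound system of massless constituents carries an invariant mass set by its total energy (a textbook result, not specific to this model):

```latex
\[
  m \;=\; \frac{1}{c^2}\sqrt{\Big(\sum_i E_i\Big)^{2}
          - \Big\lVert \sum_i \mathbf{p}_i\, c \Big\rVert^{2}}
  \;\longrightarrow\; \frac{1}{c^2}\sum_i E_i
  \quad \text{when } \textstyle\sum_i \mathbf{p}_i = 0
\]
% Individually massless photons can thus form a system with nonzero
% invariant mass in its zero-total-momentum frame.
```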
A plethora of hadron and lepton particles (and other particles) have been observed or inferred from collisions in particle accelerators. The dissociated and scattered photon fibers emanating from high-impact Bθ field (particle) collisions can appear as radiation or re-form into numerous uncommon hadron and lepton Bθ fields. These Bθ fields instantly form due to the vector summation of the scattered fibers (see the link for the vector summation mechanism) and decay equally rapidly because they have likely formed ad hoc from randomly oriented fibers ejected at high speed. Because the fibers may oppose each other and/or are desynchronized, they segregate and appear as radiation; or they form into an unstable Bθ field that may decay into a stable Bθ field such as a proton. However, multiple transitions to more than one type of Bθ field may occur before a stable Bθ field is obtained or the fibers completely segregate and escape as radiation.
Research Derivation of the Electron (Post #2 Updated)
Ehud, You mentioned in an earlier post: “I, as well as many hundreds physicists, work on analyzing the LHC data searching for hints for the physics beyond the SM.”
Here is a link to a recent study by Christian Klein-Böesing and Larry McLerran that is based on recent experimental results at the RHIC and LHC that seems relevant to your search: http://www.sciencedirect.com/science/article/pii/S0370269314003736
The title of the study is: “Geometrical scaling of direct-photon production in hadron collisions from RHIC to the LHC”. The study examines the relationship between photon production and hadronization of scattered particles that occur immediately after photon production, which occurs immediately upon high-energy particle collisions.
The SM of particles is an empirical theory because it is based only on the results of observations and experiments. The SM does not take into account the underlying dynamic structures of particles that give such results; these are needed before any theory of particles can have the theoretical basis that Niels is asking about.
Deriving a particle is a mechanical problem as much as a mathematical one, if not more. Without some initial understanding of how a particle is constructed and how it works, both mechanically and mathematically, it seems rather impossible to correctly interpret data from experiments and tests (e.g., the double slit or the LHC). The collected data are currently interpreted in terms of a particle usually thought of as an inert point, a tiny inert sphere, or a transformation from a wave. However, these configurations provide no information about the particle's make-up or its dynamics.
Attached is an article that examines the relationship between proposed dynamic models (e.g., for subatomic particles and the photon) and the recent study by Klein-Böesing and McLerran (a summary of that study is included in the article). The article shows how and why a multitude of unstable ad hoc particles are created upon high-energy impacts of colliding particles. Although this examination derives a classical view of such interactions, the mathematical framework of the SM remains generally applicable.
Article Hadronization of Scattered Particles in High-Energy Impacts
Ref. 1) In a recent article published by World Scientific, called "Introduction to Hydrodynamics" by Sangyong Jeon and Ulrich Heinz, the authors give a general and pedagogic view of the relativistic hydrodynamics currently used in the study of ultra-relativistic heavy-ion collisions. The dynamics of collective motion (hydrodynamics) is an integral part of the theoretical modeling of such events. The theory predicts strong elliptic transverse flow of the scattered particles upon collision of heavy ions (such as Au+Au), consistent with experimental results at the RHIC and LHC.
ISSUE: However, 'elliptic flow' is not predicted in the theory when the colliding particles are small, although such flow (or jet flow) does occur in such experiments. Regarding this, the authors' comment is in italics: "More recently, the systems created in the highest multiplicity proton-proton collisions and proton-nucleus collisions were also seen to exhibit strong collective behavior. This is deeply puzzling, as the size of the system ought to be too small to behave collectively. It is hoped that more thorough investigation of the possible origin of the collectivity in such small systems can illuminate the inner workings of the QGP formation greatly." A link to the article is given here. http://arxiv.org/pdf/1503.03931
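For context, the ideal relativistic hydrodynamics referred to in Ref. 1 rests on local conservation of the energy-momentum tensor (standard notation; metric signature (+,-,-,-)):

```latex
\[
  \partial_\mu T^{\mu\nu} = 0,
  \qquad
  T^{\mu\nu} = (\varepsilon + p)\,u^\mu u^\nu - p\, g^{\mu\nu}
\]
% epsilon: local energy density, p: pressure, u^mu: fluid four-velocity.
% Dissipative (viscous) terms are added to T^{mu nu} in the full treatment.
```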
Ref. 2) In another article in “Physical Review Letters”, called “Observation of Direct-Photon Collective Flow in Au+Au Collisions at √sNN = 200 GeV”, by A. Adare et al. (PHENIX Collaboration), the authors examine the second Fourier component v2 of the azimuthal anisotropy [transverse elliptical flow] with respect to the reaction plane measured for direct-photons at mid-rapidity and transverse momentum (pT ) of 1–13 GeV/c.
ISSUE: For thermal photons (pT < 4 GeV/c), the authors note that a positive direct-photon v2 is observed, comparable in magnitude to the pion v2 and consistent with early thermalization times and low viscosity, but much larger than current theories predict. A link to the article is given here. http://arxiv.org/pdf/1105.4126.pdf
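For readers unfamiliar with the notation in Ref. 2: v2 is the second coefficient in the standard Fourier decomposition of the azimuthal particle distribution about the reaction plane:

```latex
\[
  \frac{dN}{d\phi} \;\propto\;
  1 + \sum_{n \ge 1} 2\,v_n \cos\!\big[n\,(\phi - \Psi_{RP})\big],
  \qquad
  v_2 = \big\langle \cos\!\big[2\,(\phi - \Psi_{RP})\big] \big\rangle
\]
% phi: particle azimuthal angle, Psi_RP: reaction-plane angle.
% v_2 > 0 quantifies the elliptic (transverse) flow discussed above.
```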
Ref. 3) In a recent article in Elsevier's "Physics Letters B", called "Geometrical scaling of direct-photon production in hadron collisions from RHIC to the LHC" by Christian Klein-Böesing and Larry McLerran, the authors show that geometric scaling provides a good description of the energy dependence of photon production in heavy-ion collisions, including p + p and deuteron + Au scattering. Geometrical scaling, i.e., the scale invariance of the transverse expansion of particles, depends only on the saturation momentum of the parent particles in the longitudinal interaction (collision). In hydrodynamic expansion, this can only occur in the initial stage of the interaction.
ISSUE: The authors summarize their concern in their statement following in italics. “Thus, our observation indicates that direct-photon production occurs mainly before the scale breaking effects of particle masses and system-size becomes important. The former would be true if the system produces photons at an energy scale large compared to meson masses, which might be possible. The latter is more difficult, since flow measurements for photons demonstrate that they do have an azimuthal anisotropy [elliptic flow] with respect to the event reaction plane. This is conventionally associated with transverse expansion and it requires that the photons be produced at times where the size of the system actually is important, [i.e., emitted incidentally after the collision from quark and gluon plasma (QGP) that arises from such collision].” A link to the article is here. http://www.sciencedirect.com/science/article/pii/S0370269314003736
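Schematically, the geometric scaling invoked in Ref. 3 says that transverse-momentum spectra depend on pT only through the ratio pT/Qs, where Qs is the saturation momentum. The form below is the generic dimensional statement used in saturation physics, not necessarily the paper's exact convention:

```latex
\[
  \frac{dN}{d^2 p_T} \;=\; \frac{1}{Q_s^2}\, F\!\left(\frac{p_T}{Q_s}\right)
\]
% F is a universal (collision-energy-independent) scaling function;
% all energy dependence enters through Q_s alone.
```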
EXAMINATION: In the references above, we notice that photon production likely occurs immediately upon collision of the parent particles, based on geometric scaling, and not from the QGP, per Ref. 3. However, there is a discrepancy, because one would expect the photons to be emitted from the QGP, since the medium shows transverse elliptic flow.
On the other hand, the scattered photons, irrespective of any medium such as the QGP, may undergo transverse elliptic flow themselves if there is a sufficient quantity of them. This is consistent with the bulk dynamics of collective motion (hydrodynamics) shown to be necessary for elliptic flow in Ref. 1; further, it would alleviate the concern in Ref. 1 that strong collective behavior should not be possible when the colliding parent particles are small. On this view, it is the scattered photons that exhibit the collective behavior seen in experiments, rather than the scattered quarks and gluons, which are not present in sufficient quantity.
This concept also addresses the issue presented in Ref. 2. If photons are scattered directly from the collision, rather than incidentally from the QGP, then the transverse elliptic flow is carried by the photons, not the QGP. The quantity of such photons would then be significantly greater than the expected amount of incidental photons from the QGP, which in turn would yield a greater magnitude of the second Fourier component v2 of the azimuthal anisotropy, due to the stronger collective motion of the photons.
In current theory, high-energy particle collisions produce a QGP; hadronization of the QGP then occurs, giving mostly unstable hadrons; these hadrons decay; and from this, successive re-hadronization to other particles and/or emission of photons occurs. However, the results of the high-energy experiments and theories alluded to above may point to different dynamics. For example, the dynamic structures of hadrons may consist only of photon formations. Such photons are scattered upon collision of the hadrons; hadronization of the photon plasma then occurs instantly, giving new, mostly unstable ad hoc formations of photons (hadrons); these hadrons decay due to their random ad hoc origination; and from this, successive instant re-hadronization to other particles from the re-emitted photons occurs, and/or the re-emitted photons do not re-form and leave the interaction area. An examination of current theory and of the proposed dynamic models of hadrons, consistent with hydrodynamic theory and experimental results, is given in the attached link.
Article Hadronization of Scattered Particles in High-Energy Impacts