If they are elementary (structureless), why does nature start constructing composite objects so late, even though there would seem to be plenty of room to do so earlier, between the Planck scale and the hadronization scale? Could this be taken as an extra hint in favour of Grand Unified Theories, which predict that the QCD coupling becomes large only at the O(GeV) scale?
Thanks, but neither the rhetorical ("Why not?") nor the philosophical ("... the only elementary thing in the universe is mathematics ...") answers really work in this case. One possible answer is that quarks have to be considered together with leptons (because of the hypercharge anomalies of the Standard Model). If so, composite quarks would require the leptons to be composite as well. It is natural to think that this universal quark and lepton compositeness could only appear at the scale of Grand Unification, where quarks and leptons are equivalent (being members of the same multiplet). But that scale is actually of the order of the Planck scale (perhaps only a few orders of magnitude below it). So, to summarize: quark-lepton unification at a scale of the Planck order seems to disfavour the existence of composite structures anywhere from the Planck scale (10^19 GeV) down to the hadronization scale (1 GeV). In this energy interval Nature prefers unification to compositeness, so to speak. Of course, many other questions - why quarks and leptons are so light, how many quark-lepton families exist, why their flavours are so strictly conserved, and so on - are still left open.
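To make the anomaly point concrete (this is just the standard textbook bookkeeping, not anything new): with the hypercharge convention Q = T_3 + Y, the mixed SU(2)^2 x U(1)_Y anomaly of a single generation is proportional to the sum of Y over the left-handed doublets,

\[
\sum_{\text{doublets}} Y \;=\; 3\cdot\tfrac{1}{6} \;+\; \left(-\tfrac{1}{2}\right) \;=\; 0 ,
\]

where the factor 3 counts the quark colors. The cancellation works only between the quark and lepton doublets of one family, which is the sense in which giving the quarks substructure would drag the leptons along.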
In my model, arXiv:0908.0591, quarks indeed appear on an equal footing with leptons; their fields describe waves in a more fundamental, ether-like entity. Thus they are neither composite nor fundamental, in the same way that phonons in standard condensed-matter theory are neither composite nor fundamental.
Juansher, you only repeat the standard arguments, but do they (still) apply? Is it necessary that the mass of an elementary particle be near the Planck mass? Why? Why not? Bear in mind that in the Standard Model the quark masses are zero. I seem to recall that grand unification failed. Why should it succeed after all?
In a nutshell, your question is far too fundamental in comparison with our current knowledge. The theories do not work so badly, but that doesn't mean that the ideas we put behind them have any real relevance, that is, that their extrapolations have anything to do with reality.
You are too pessimistic, Claude. The situation is not so bad. The Standard Model works very well, and so does Grand Unification, especially if you keep its supersymmetric version in mind. Further, the mass of any particle is uncontrollably driven up to the Planck mass by radiative corrections, unless it is protected by some strict symmetry - say, chiral symmetry for fermions or supersymmetry for bosons. That is what we reliably know from quantum field theory. So, returning to my question, it looks as if there is an interesting dilemma here, Compositeness vs. Grand Unification, and Nature chooses unification. As to the idea that the SM fermions "are not composite, but also not fundamental", I like it, but I cannot see what we would understand better in quark-lepton physics if we accepted it. Anyway, when a question arises, it seems reasonable to look for an answer first within the well-established framework and only then outside it.
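As a reminder of the scaling behind this statement (the standard one-loop estimates with an ultraviolet cutoff Λ, written schematically and not tied to any particular model):

\[
\delta m^{2}_{\text{scalar}} \;\sim\; \frac{g^{2}}{16\pi^{2}}\,\Lambda^{2}
\quad\text{(no protecting symmetry)},
\qquad
\delta m_{\text{fermion}} \;\sim\; \frac{g^{2}}{16\pi^{2}}\; m_{\text{fermion}}\,\ln\frac{\Lambda}{m_{\text{fermion}}}
\quad\text{(protected by chiral symmetry)} .
\]

A fermion mass is thus only multiplicatively renormalized and stays small if it starts small, while an unprotected scalar mass is dragged up towards the cutoff (the Planck scale, if nothing intervenes), which is exactly what supersymmetry is invoked to cure for bosons.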
You may have misunderstood my answer. What I have proposed in arXiv:0908.0591 is not simply an idea but a particular model with the property that the fermions are quite similar to phonons, thus neither fundamental nor composite. And this model gives a lot: it predicts the number of fermions (all three generations), the gauge group, and all the fermion charges, and all this from simple first principles. The model would not work with two or four generations, two or four quark colors, or electroweak triplets, so it explains a lot of what we observe in the Standard Model.
In particular, it would exclude Grand Unification models.
It is rather a philosophical question with a long tradition, starting with Democritus' concept of the atom. We have been going deeper and deeper into matter over time, and we cannot be sure this process will ever end. At present leptons and quarks appear structureless, but this is a purely phenomenological statement. If string theory is the correct fundamental theory, then particles are analogues of phonons and the question of compositeness becomes meaningless.
The answer of Adam Jacholkowski points in the right direction. Even today, particles are not considered fundamental in quantum field theory. Fields are the fundamental entities; particles are just quantized excitations of the fields. Phonons are quantized excitations of the field of lattice vibrations, so in this sense they are neither more nor less fundamental than elementary particles. The questions of compositeness and of fundamentality are therefore not the same. Even a non-composite particle is not fundamental. So if leptons and quarks were composite, it would simply mean that there exists an additional lower layer of excitations of the fundamental field. It would not lead to "fundamental particles". That paradigm has been gone for some time already.
Supersymmetry, strings, grand unification (disproven by the lifetime of the proton) - these aren't theories but speculations (they rest on no experimental facts). At the Planck scale, anyway, the notion of a particle loses its sense. What are we speaking about then?
Let me answer Ilja first. Could you briefly describe here the "simple first principles" you mentioned, which underlie your model? And also why Grand Unification is excluded in this approach. This may be interesting for many people.
As to Adam's remark, I agree that this question goes back to the ancient Greeks on its philosophical side. But we are discussing physics now, and in particular the question of whether the compositeness process stops at quarks as we go up in energy. Actually, the last compositeness precedent was at the GeV scale (hadronization), and it has never happened again up to the present LHC energies (which are more than ten thousand times higher). The question is whether we may expect it at higher energies or not at all. In this connection, any reference to string theory is largely irrelevant, because we do not know at present which entities are generic for strings as their basic modes - quarks and leptons, or "preons" from which they could be composed.
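Just to make the numbers explicit (taking a nominal LHC collision energy of about 13 TeV and 1 GeV for the hadronization scale):

\[
\frac{E_{\text{LHC}}}{\Lambda_{\text{hadr}}} \;\sim\; \frac{1.3\times10^{4}\ \text{GeV}}{1\ \text{GeV}} \;\approx\; 10^{4},
\qquad
\frac{M_{\text{Planck}}}{\Lambda_{\text{hadr}}} \;\sim\; \frac{10^{19}\ \text{GeV}}{1\ \text{GeV}} \;=\; 10^{19} .
\]

So the region already explored without any sign of substructure covers roughly four of the nineteen orders of magnitude separating hadronization from the Planck scale.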
Well, philosophically the atom was indivisible, then the nucleus, then the nucleon. Just when we expect the quark not to be divisible, it will turn out to be. That is, predictions have always been foiled when they were speculative. Actually, we have no theory that can tell us precisely what should be composite and what should not, because our theories do not rest on experimental data of that nature. The Planck scale is just a collection of fundamental constants strung together so and so; we cannot even know whether it has any meaning, save that all our theories lose all meaning at that scale. In other words, the Planck scale represents the limit of our knowledge, so there is no way to decide what must happen up to there.
Just as the "physical electron" is an infraparticle and does not correspond to the non-interacting "elementary particle" or field excitation that appears in perturbative quantum field theory, quark fields merely serve as coordinatizations of the underlying theory, whose asymptotic states are protons (if stable) etc. An electron is dressed by the electromagnetic field, so is it elementary? What a quark is depends on the way you look at it. From a very naive point of view, one could argue that if there were subquarks, they would be confined to a very small region, implying a cancellation of high momentum and kinetic energy against binding energy, since otherwise they would have a large mass. This fine-tuning problem could then be related to a symmetry of the "subquark" theory. Such considerations, however, are dubious at best.
@Juansher Chkareuli: It is hard to explain in a few lines; see ilja-schmelzer.de/matter for a simple introduction. The model is a quite simple lattice of elementary cells with some other medium between them. The gauge fields, which describe different types of distortions of the lattice and of the material between the cells, follow from principles like preservation of the symplectic structure on phase space, Euclidean symmetry, the existence of a lattice realization, neutrality of the vacuum state, and anomaly freedom. Without anomaly freedom there would be the possibility of an additional axial boson acting only on the upper particles.
There are two types of gauge fields that appear in this approach. First, distortions of the medium between the cells, which may be nicely described by Wilson gauge fields and give the strong interaction. Then, distortions of the lattice itself, which distort the lattice Dirac equation and give the weak interaction. The EM field is a combination of the two, so this model already has some "unified" aspects.
But there is no place for the additional fields of the usual GUTs.
In my book, now out of print, quarks and leptons are the same, both being elementary particles. The model put forth in the book derived the masses of the up and down quarks to within observed values, predicted the slightly smaller-than-classical size of the proton, and predicted that the quark-gluon plasma would behave as a perfect liquid.
However, that is not as impressive as it sounds.
My work involves building simple physics models to introduce advanced physics concepts at the undergraduate level.
My book and papers derived from the basic research in it are on my Academia site, if you care to look. I really should upload them here sometime.
http://independent.academia.edu/CharlesLaster
The point is that while my work is not the Standard Model, being semi-classical, quarks are still elementary particles in it. It might be possible to construct a theory where they are not, like the sub-quark theory mentioned above, but I would have to be convinced.
If I remember correctly, there were efforts in the past to introduce preons as objects more elementary than quarks, but apparently this idea died by itself.
Several models have been proposed for preons, the hypothetical constituents of the known subatomic particles. One of them was conceived in 1979 by Haim Harari, then at the Stanford Linear Accelerator Center, and by Michael A. Shupe of the University of Illinois at Urbana-Champaign. The proposal includes two types of preons and their corresponding antiparticles. The two proposed preons are denoted "+" and "0"; the "+" preon carries electric charge +1/3, while the "0" preon has no electric charge. Each quark and lepton consists of three preons.
For a recent exposition of some basic aspects of composite theories of weak bosons, leptons, etc., see here: http://cds.cern.ch/record/1541686, by the person (Harald Fritzsch) who co-invented color. And here: http://cds.cern.ch/record/1510343?ln=en. The whole idea of preons, technicolor, etc. has been pursued for some time, but will clearly get new impetus if signatures of compositeness are discovered in the near future at the LHC.
Preons have been brought up as quark building blocks. So far there is no evidence at the LHC. The crux is the computed preon cross sections. Still, this model has been around since 1979, and it MAY be confirmed by the LHC in 2014.
I am afraid the LHC will restart only in 2015, and not immediately at full (design) power - most probably with 13 TeV collision energy. Then compositeness of quarks or of the fundamental bosons will certainly be one of the topics of the flagship experiments, ATLAS and CMS.
It is a matter of choice: if you are a Standard Model believer or worker, then this question is worth solving. If, on the other hand, you believe in or work on the simpler "chords" hypothesis, then it is not necessary to search for an answer, since the smallest part would always be a chord (or a string, if you prefer the English word).
Personally, I think that the universe is playing games with us, the arrogant scientists, by leaving us to build almost equivalent theories, in terms of their predictability, to explain it. I cannot prove my claim, but the history of science is on my side.
For the main question: the LHC at full capacity will give an answer.
Since we have mentioned CERN: what level of energy is required in order to search for superstrings?
Any direct test of superstring theory at the LHC is out of the question; the top LHC energy is of course far too low for that. But any sign of supersymmetry or of extra dimensions could be an interesting indication of its correctness. See for example:
http://superstringtheory.com/experm/exper4.html
The Standard Model does not explain why the proton and the electron have exactly the same electric charge - up to the sign, of course. So there must be something beyond the Standard Model that links the lepton and the hadron families. The subquark/preon model is a fascinating idea. It would be interesting to understand why it has not worked so far.
It might be that quarks are neither elementary nor composite. Sounds strange, I know. But there is an interesting feature of quarks that leads to this conclusion. (Physics is a lot of fun, so please do not criticise me for this crackpot idea.)
How many quarks are there inside a proton? The layman's answer is 3, but this is wrong. It is amazing that the number of quarks observed inside a proton depends on the reference frame of the observer. This is an experimental fact.
The faster the observer moves with respect to the proton, the more quarks (and antiquarks) he or she observes: 3, 5, 7, 9, etc.
How can it be that the number of proton constituents depends on the OBSERVER's reference frame?
The simplest (but very strange) way to explain this is to assume that quarks move faster than light inside the proton!
We know from special relativity that a time-like object can be at the same place at different times. Similarly, a space-like (faster-than-light) object can be at different places at the same time.
For simplicity, imagine a sinusoidal curve on a space-time (X,T) diagram: T = sin(X). This curved line corresponds to superluminal motion. The line T = 0.5 will cross the curve many times. That means a superluminal object can be at many places at the same time.
When the line goes "up" we interpret the object as a "particle"; when it goes "down" we interpret it as an "antiparticle". The "particle-antiparticle pairs" are "created" at T = -1 and "annihilated" at T = 1.
If we change the reference frame, we will see "more intersections" of the line T = const with the curve (or at least "more frequent" intersections, since T = sin(X) already produces an infinite number of intersections with T = 0.5; in this sense T = sin(X) is not a good example). We will interpret this as "more quarks and antiquarks" observed from the moving reference frame.
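For what it is worth, the purely geometric part of this counting argument (not the physics) can be checked numerically: take one superluminal worldline, boost it, and count how many times it crosses a surface of constant time in each frame. A minimal sketch in Python, with c = 1 and an arbitrarily chosen curve and boost velocity purely for illustration:

import numpy as np

# Toy worldline T(X) = 0.3*sin(X) in units c = 1: its slope |dT/dX| <= 0.3 < 1,
# so dX/dT > 1 everywhere, i.e. the motion is faster than light.
lam = np.linspace(0.0, 6.0 * np.pi, 200001)   # curve parameter
X = lam
T = 0.3 * np.sin(lam)

def crossings(time_coord, t0):
    # Count sign changes of (time_coord - t0) along the curve,
    # i.e. how often the worldline crosses the slice time = t0.
    s = np.sign(time_coord - t0)
    return int(np.sum(s[:-1] * s[1:] < 0))

# Rest frame: the slice T = 0.1 is crossed 6 times, so at that instant the
# single worldline looks like several "(anti)particles" at once.
print("rest frame:", crossings(T, 0.1))

# Boosted frame, v = 0.5: T' = gamma*(T - v*X). Along this particular curve
# T' happens to be monotone, so every slice T' = const is crossed exactly once.
v = 0.5
gamma = 1.0 / np.sqrt(1.0 - v**2)
Tb = gamma * (T - v * X)
print("boosted frame:", crossings(Tb, -4.0))

This only illustrates that the number of intersections of a spacelike worldline with a simultaneity surface is frame-dependent; it says nothing about whether such worldlines have anything to do with partons.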
So, my crackpot idea is as follows:
The proton consists of only one (!) superluminal particle, not of 3 (5, 7, 9, etc.) quarks and antiquarks. In the rest frame we observe this one particle at three different places at the same time (twice we interpret it as a quark, and once as an antiquark).
This explains why the number of quarks is dependent on the reference frame.
It also explains the "confinement" problem, i.e. why isolated quarks have never been observed.
And, finally, that means that quarks are neither elementary nor composite. Crackpottery? Maybe.
A very recent article is related to the subject of this thread. The conclusion of this 2016 HEP article by the PHENIX Collaboration at RHIC, "Recent PHENIX Results on Hard Probes and Direct Photon Production", states: "Theoretical models need to assume very early emission of direct photons from the interacting system [to] describe large excessive yields of direct photons. Large values of flow coefficients point to late emission of direct photons when temperature is lower but collective flow has time to develop. These two observations are difficult to reconcile within the same models and there is still no satisfactory description yet available."
The above issue is usually referred to as the "direct photon flow puzzle" in the QCD/HEP fields. Any physical model of the photon (and thus of the hadron) is incomplete until it can be integrated with physical models of the subatomic particles, since all of the many types of photons are emitted and absorbed by these particles. Therefore, photons must be constituents of the dynamic structure of each subatomic particle, and they must also be consistent with and generate the particles' properties, which include energy, mass, gravitational potential, a cylindrical B-field that is mobilized while a particle translates, the magnetic dipole field, Coulomb's electric force potential, and Lenz's law.
If it were possible to *completely* scatter all the photons making up the pair of projectiles at RHIC, and also to measure the individual energy of each such scattered photon before the photons re-combine to form hadrons at PHENIX, the collaboration should find that the sum of all the scattered photon energies equals the sum of the energies of the two colliding projectiles. This would enable an explanation of the "direct photon flow puzzle".
In addition to integrating the photon model with models of subatomic particles, the dynamic structures and models of subatomic particles must be consistent with the dynamic structure and model of the atom and its properties, which include the nuclear force, the stability of the atom under the electric force, and Pauli's exclusion principle.
In addition, the models for the photon, electron, nucleons, and nucleus have to be consistent with experimental results from high-energy particle accelerators, such as RHIC. For example, enhanced photon production, enhanced collective photon flow, and hadronization of a multitude of ad hoc particles occur in such high-energy particle collisions. Here is a link to a recent study on such phenomena, which is based on recent experimental results at RHIC and the LHC: https://www.researchgate.net/publication/298213589_Hadronization_of_Scattered_Particles_in_High-Energy_Impacts
The link also gives general descriptions of derived models for the photon, subatomic particles, and the atom that were referred to above. It also examines the inter-relationship between these models and the high-energy study.
As shown in the link, the dynamic structures of hadrons consist of photon formations. These photons are scattered upon collision of hadrons; hadronization of the photon plasma then occurs instantly, giving new, mostly unstable, ad hoc formations of photons (hadrons); these hadrons then decay because of their random ad hoc origination; and from this, successive instant re-hadronization of the re-emitted photons into other particles occurs, and/or the re-emitted photons do not re-form and leave the interaction area.
Hadronization of the photon plasma is similar to what occurs in laser-particle interactions. Photons can assemble into particles, such as the electron or a hadron, when there is a sufficient resonance, density, and mixture of them. For example, environments with a sufficiently resonant and dense mixture of photons are those that immediately follow laser-particle interactions and collisions of particles in particle accelerators. The interactions that enable particle formation result from the electric and magnetic properties of photons, as given by Maxwell's EM equations. See the link for a derivation of this process.