Can anyone suggest information or references to papers that attempt to evaluate the change in entropy of an individual biological organism during its life? I'm interested in ideas, methods, or data, from empirical estimates to mathematical models, covering anything from protozoa to the human level.
Jose,
Jeremy England's "A New Physics Theory of Life" is interesting, but it lacks the "infantry" in the "war of life"!
There's an interesting general-level discussion of this in Nick Lane's 'The Vital Question'.
Unfortunately, present-day physics does not cover processes like self-organization or the constructive role of temperature in a living system – not even a good definition of the latter has been agreed upon. The best starting point might be to digest Prigogine's subdynamics, which attempts to come to grips with entropy production at non-equilibrium, i.e. for so-called dissipative systems.
Dear all, perhaps entropy is not the most important quantity here. Perhaps something more suitable exists for describing dissipation and the accumulation of errors and damage in a system (including a biological one)? Entropy seemed to me a good candidate for (one of) the parameters that can be linked to destruction in a system. Here are a few conceptual considerations...
If we consider any system as a dissipative structure – which appears to be true (or, if you prefer, take it as a hypothesis) – it is possible to build a generalized unification with an internal classification of such structures, including their relation to entropy.
Let us assume, in accordance with the usual definition, that a system is a set of interrelated elements, distinguished from the environment and interacting with it as a whole, and that the integrity of the system means that, in some essential respect, the value of the relations among elements within the system is higher than the value of the relations between system elements and elements of external systems.
The essential aspect of the relations among elements in the system (regardless of any particular classification) then affects the system's energy exchange with its environment. With this logic it is easy to see that, for example, a car receives fuel from the external environment for reasons similar to those for which an organism gets food. Obviously, no structure can be considered in isolation from the surrounding and enveloping systems. This implies (in addition to the essential internal aspect) an "entanglement" of the system with its environment.
For any dissipative structure we can identify at least two major stages of life (which significantly affect entropy): a stage of synthesis (assembly) and a dissipative stage (memory, with progressive destruction). For non-living structures (mostly of inorganic or artificial origin) the assembly stage may be very short, sometimes even effectively instantaneous compared with the dissipation stage.
For organic structures with the quality of life, the relation between the stages may vary, depending on the dynamic configuration and the way of integration into the higher-order enveloping system. Furthermore, for such structures there is a plateau region during which neither the dissipation of the structure nor the overall change in its parameters is significant. This stage ends with so-called death, after which the processes accelerate sharply due to the loss of the internal advantage over the system's environment.
Depending on the level of self-organization, the critical point may lie beyond the point of period-doubling bifurcation (the splitting of unicellular organisms, and of some potentially immortal simple multicellular organisms with "negligible senescence").
For humans, the period during which the probability of death grows in geometric progression is very interesting; it corresponds, in effect, to a process with a characteristic doubling time ("An empirical observation with no satisfactory theoretical explanation is that mortality rate often increases exponentially with age. In humans in technologically developed societies, mortality rate after about age 35 doubles every 7-8 years [Finch C. E. Longevity, Senescence, and the Genome, 1994]"). Furthermore, for complex organisms such as humans, many parameters change during life in strong relationship with the two stages listed above (http://www.bbc.com/future/story/20150525-whats-the-prime-of-your-life).
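The doubling law quoted above is the Gompertz law of mortality. A minimal sketch of it, where the baseline rate `m35` is an illustrative constant, not fitted data:

```python
import math

def gompertz_mortality(age, m35=1e-3, doubling_years=8.0):
    """Mortality rate under the Gompertz law: past age 35 the rate
    grows exponentially with a fixed doubling time.  m35 is an
    illustrative baseline rate at age 35, not an empirical value."""
    g = math.log(2.0) / doubling_years  # Gompertz exponent
    return m35 * math.exp(g * (age - 35.0))

# With a doubling time of 8 years, the rate at 43 is twice that at 35,
# and the rate at 51 is four times that at 35.
ratio_8y = gompertz_mortality(43.0) / gompertz_mortality(35.0)
ratio_16y = gompertz_mortality(51.0) / gompertz_mortality(35.0)
```

The exponential form is exactly what makes the observed "doubling every 7-8 years" age-independent: the ratio depends only on the age gap, not on the starting age.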
Obviously, the "entangled" relationships between different structures (such as a human and a vehicle, for example) cannot be described within the classical approach via the basic interactions. In addition, the exchange and accumulation of information occurs even in biological systems consisting of protozoa (otherwise we would not have appeared in the course of evolution).
So, as it seems to me, it is possible to generalize Prigogine's concept of a dissipative structure.
The relationship of such a generalized concept of structure to Liouville's theorem is then no less interesting than its entropy. Perhaps there is something we are missing for structures perceived as stable, or understood differently within the framework of classical concepts (mass-related topics for particles, or anything else)?
I think the question – how does the entropy of an individual biological organism change during life? – is related to the question of why life exists.
From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity.
I recommend visiting the following page:
https://www.researchgate.net/post/Changes_in_the_overall_entropy_of_a_single_biological_organism_during_lifetime
The negentropy of a living organism can be expressed as [the entropy of the atoms required to compose a living organism when arranged in their chemical-equilibrium forms (CO2, H2O, N2, etc.) at ambient temperature] minus [the entropy of these same atoms arranged as a living organism]. The negentropy increases until a living organism reaches its prime, and then decreases. The negentropy involves the surroundings as well as the living organism itself, because the chemical reactions required to rearrange atoms from equilibrium forms such as CO2, H2O, N2, etc., into a living organism are endothermic, and thus entail energy extraction from the surroundings and hence a decrease in the entropy (increase in negentropy) of the surroundings, in addition to the negentropy within the organism itself. Of course, the negentropy required to produce any living organism is ultimately paid for by the far greater negentropy represented by the disequilibrium between the 5800 K solar disk and the 2.7 K cosmic background radiation.
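One way to put a rough number on this gap is to compare standard molar entropies of an organized biomolecule with those of its equilibrium combustion products. A sketch for glucose, using approximate textbook values at 298 K (the numbers are illustrative table values, not measurements of an organism):

```python
# Approximate standard molar entropies at 298 K in J/(mol*K),
# taken as illustrative textbook values.
S = {"glucose": 212.0, "O2": 205.2, "CO2": 213.8, "H2O_liquid": 69.9}

# Equilibrium side: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O(l).
# The entropy gap between products and reactants estimates how much
# lower-entropy the organized molecule is than its equilibrium forms.
S_products = 6 * S["CO2"] + 6 * S["H2O_liquid"]
S_reactants = S["glucose"] + 6 * S["O2"]
delta_S = S_products - S_reactants   # J/(mol*K), positive
```

The positive sign of `delta_S` illustrates the point of the post: assembling glucose from its equilibrium forms lowers entropy, i.e. stores negentropy, which must be compensated by an entropy export to the surroundings.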
Dear Vasiliy,
I asked my coworkers and this is the result.
1. The same link as Jose William gave
2. www.ldolfhin.org/mystery/chapter7.html (The Mystery of Life's Origin, Chapter 7: Thermodynamics of Living Systems)
Best
Peter
I do not think there is an answer to your question.
Entropy is not defined for a living system.
In any case, entropy here should be understood in the information-theoretic sense, rather than as the narrow thermodynamic quantity, which certainly also contributes.
From the standpoint of common sense and the logic of self-organization, neither the thermodynamic concept of entropy nor the concept of information entropy is suitable as a measure of the order (complexity) of dissipative systems. First, dissipative structures are periodic, which in entropy terms is already redundant information. Second, the most uniform thermodynamic state of a dynamical system corresponds to white noise, i.e. a maximally disordered chaotic state.
From the viewpoint of self-organization, the most rigidly ordered (and unreachable) state is one in which all the energy is accumulated in a single degree of freedom of the system. Stable dissipative structures in fact occupy an intermediate state, in accordance with an internal "Goldilocks" logic.
We need a fundamentally different criterion of dissipative-structure order (or complexity?).
Dear Jose, carbon atoms have a number of interesting features; for example, they interact unexpectedly actively even with neutrinos, regardless of their position in a living organism or in inanimate chemicals.
I suspect that it is practically impossible to replace the carbon atoms in living organisms with anything else in our universe.
PS: You forgot to include the correct link. Please provide it.
Sorry, here it is:
https://www.quantamagazine.org/20140122-a-new-physics-theory-of-life/
I still maintain that the Second Law is not applicable to any living system
for details see Information, Entropy, Life and the Universe
Arieh
G. Gladishev wrote many papers about the change of thermodynamic entropy over the life span. L. Hayflick also wrote several papers. M. Popovic wrote about thermodynamic and information change during life. Silva & Anammalai published one paper...
I could give you a more detailed answer if you specify what kind of entropy you mean (thermodynamic, Shannon, or residual).
Actually, both the first and the second law are applicable to living systems and to the universe alike. See Prigogine, Schrödinger, D. Layzer, L. Hayflick, G. Gladishev, L. Hansen, M. Popovic, Silva... many books and papers over the last 60 years.
Marko,
Shannon entropy is perhaps the closest... or the entropy of a dynamical system (metric entropy or topological entropy).
Aleš,
Thanks for the link; it is interesting. Obviously, a living organism maintaining a stable state (in fact, for the warm-blooded, a stable temperature) is unlikely to undergo significant fluctuations in thermodynamic entropy not associated with some activity. The part highlighted in red only confirms that entropy is of little use for evaluating the complexity of a system.
No type of entropy (thermodynamic, residual, or Shannon) can be negative. Entropy is by definition a non-negative quantity: S > 0, or S = 0 at absolute zero.
Von Neumann argued in 1940 that: “Whoever uses the term ‘entropy’ in a discussion always wins since no one knows what entropy really is, so in a debate one always has the advantage”
Vasily
Due to cell division and DNA replication, the amount of information (I) stored in DNA, RNA, proteins... increases during life. Thus the Shannon entropy (Š) also increases.
Aleš cited one crucial fact (the Hansen/Batley conclusion): "(Thermodynamic) entropy of any living organism per unit of weight does not change through its life."
However, the weight (mass) of any living organism increases during life as a consequence of growth and the accumulation of substances at constant temperature. Thus the thermodynamic entropy also increases.
This means that both the thermodynamic entropy and the Shannon entropy of a system (cell, organism) increase during life. Thus the total entropy of the system (cell, organism) also increases during life, in accordance with the second law.
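The claim that copy errors raise Shannon entropy can be illustrated on a toy sequence: mutations spread the symbol distribution, and the per-symbol entropy of that distribution grows. A minimal sketch (the sequence length and mutation count are arbitrary choices):

```python
import math
import random
from collections import Counter

def shannon_entropy(seq):
    """Per-symbol Shannon entropy (in bits) of the empirical
    symbol-frequency distribution of seq."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
genome = ["A"] * 1000                 # maximally redundant toy "genome"
h_before = shannon_entropy(genome)    # 0 bits: one repeated symbol

# Introduce 100 random point mutations (copy errors).
for i in random.sample(range(len(genome)), 100):
    genome[i] = random.choice("CGT")
h_after = shannon_entropy(genome)
# Mutations diversify the symbol distribution, so entropy increases.
```

Note this measures only the base-frequency distribution, a deliberate simplification; it says nothing about where the information sits in the sequence.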
This is a long story.
Dear Vasiliy,
You might be interested to know that I have developed a theory related to your question :=)
Following my investigations of the subject, the most general approach is that of "complex adaptive systems" (see my definition in a working paper on my RG page) subjected to external loading (or stressor agents).
Indeed, complex adaptive systems (with many positive and negative feedback loops of several orders between their components) evolve in three stages when steadily subjected to external loading factors: (1) a first stage of initial adaptation; (2) a second stage of steady evolution (which could be called their "true life"); (3) a third stage of exhaustion leading up to collapse.
The evolution with time (i.e. the ageing) can be seen this way for many systems:
Crevecoeur, G.U. A Model for the Integrity Assessment of Ageing Repairable Systems. IEEE Trans. Reliability, 42(1):148–155, 1993.
Crevecoeur, G.U. Reliability assessment of ageing operating systems. Eur. J. Mech. Eng., 39(4):219–228, 1994.
Junfeng Liu, Yi Wang. On Crevecoeur’s bathtub-shaped failure rate model. Computational Statistics and Data Analysis, 57: 645–660, 2013.
See link :
http://www.sciencedirect.com/science/article/pii/S0167947312003052
Crevecoeur, G.U. A mechanical system interpretation of the nonlinear kinetics observed in biological ageing. Bull. Soc. Roy. Sci. Liège, 69(6):311–338, 2000.
Crevecoeur, G.U. A system approach modelling of the three-stage nonlinear kinetics in biological ageing. Mech. Ageing Dev., 122(3):271–290, February 2001.
The curve usually has a shape roughly as follows: (1) power-law increase with time (concave from below), (2) linear increase, and (3) exponential increase (convex from below). This curve could be called the "evolution curve" or "ageing curve".
But this is also the typical shape of a creep curve. The same three stages are seen in the creep curve of metals (if the constant temperature is above a threshold of 0.3 to 0.5 times the melting temperature in K). The strain of a metal subjected to constant load (stress) and temperature evolves through a three-stage sequence: primary creep ("learning", adaptation), secondary "steady-state" creep, and tertiary creep (exhaustion) up to rupture. See the enclosed upper curve for creep in three stages, taken from the following link:
http://practicalmaintenance.net/wp-content/uploads/Typical-Creep-Curves.jpg
Why is this curve called the "creep curve", although, as mentioned before, it fits more general phenomena (ageing of mechanical devices, ageing of biological entities, ...)?
The answer is very simple: because it was first observed when performing creep tests. See da C. Andrade and many other authors since the 1910s (E.N. da C. Andrade. Proc. R. Soc. A, 894:1-12, 1910).
This early discovery of the shape of the creep curve is probably related to the fact that the three stages (power law, linear, exponential) are especially neat for metals, owing to specific characteristics of metals among 3-D bodies. Metals are very ordered, regular 3-D crystals and the only ones exhibiting the "metallic bond" (different from other bonds such as ionic, covalent, Van der Waals, ..., as you know): the peripheral electrons do not belong to a given atom but to many, which gives rather homogeneous and predictable behaviour at the macroscopic level. Moreover, during creep tests the test pieces are kept under carefully maintained laboratory conditions (constant stress/load, constant temperature, humidity, ...). Therefore the curve develops nicely through the three stages above.
However, as mentioned above, this behaviour is not unique to creep.
The « evolution or ageing curve » can be simply expressed by following equations :
E(t) = k · exp(α·t) · t^β    (1)
Or, after differentiation:
(1/E) · dE/dt = α + β/t    (2)
with E(t) an evolution marker for the system under study (e.g. strain for creep), α = 1/t_i ≥ 0 (the evolution stops after the first stage if α = 0), and 0 < β. For the expansion of the Universe, α >> β/t during the inflationary era and β/t >> α during the radiation- and matter-dominated eras (provided, of course, there are transition periods between the eras).
And we see that if we want to express the expansion of the Universe by a single equation for the Hubble parameter, equation (2) matches it.
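Equations (1) and (2) can be checked numerically. A sketch with illustrative parameter values (k, α, β below are arbitrary choices, not fitted to any system):

```python
import math

K, ALPHA, BETA = 1.0, 0.05, 0.5   # illustrative constants

def E(t):
    """Evolution marker, eq. (1): E(t) = k * exp(alpha*t) * t**beta."""
    return K * math.exp(ALPHA * t) * t ** BETA

def relative_rate(t):
    """Relative growth rate, eq. (2): (1/E) dE/dt = alpha + beta/t."""
    return ALPHA + BETA / t

# A central finite difference confirms that eq. (2) follows from eq. (1).
t, h = 10.0, 1e-6
numeric = (E(t + h) - E(t - h)) / (2.0 * h) / E(t)

# Small t: the power-law term beta/t dominates (first stage);
# large t: the rate settles towards the constant alpha (exponential stage).
early_rate, late_rate = relative_rate(0.1), relative_rate(1000.0)
```

The two limits of `relative_rate` are exactly the three-stage picture: a decaying β/t term early on, an almost constant rate in mid-life, and the exponential term taking over once α·t dominates.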
So, I’m not really astonished when I read another (old) RG question :
https://www.researchgate.net/post/Can_fracture_mechanics_predict_when_the_whole_Universe_will_fracture
and look at the figures shown with this question (together with the above-mentioned link to a creep curve) and given below =)
http://www.sciencedirect.com/science/article/pii/S0167947312003052
According to Prigogine, entropy is (approximately) constant in a living organism, Vasiliy.
Marko,
DNA, cells, and living organic matter are recurring, periodic information (I focused on this above), and with the period doubling during cell division they do not have a direct relation to Shannon entropy (if one involves only the DNA information layer, or some other information layer); besides, the system volume is not preserved in phase space. As you noticed, the mass can grow, or something else can.
First of all I am interested in topological entropy (Katok, A. & Hasselblatt, B. Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, 1997), because it is well connected with the complexity of a system. Roughly, its meaning is as follows: for an unknown starting point, how much information must be obtained per iteration to predict a large number of iterations within a small fixed error.
How easy is it to calculate its change, at least for a cell? That is a separate issue. It is clear that with each period doubling its value returns almost to its original state (if we consider only a single cell or a single organism, rather than the entire Universe as one system). In any case, for Laplace's demon it would not be difficult. The question apparently goes beyond biology and thermodynamics, even beyond the bounds of quantum mechanics.
https://books.google.com.ua/books?id=9nL7ZX8Djp4C&q=topological+entropy&hl=uk&source=gbs_word_cloud_r&cad=5#v=snippet&q=topological%20entropy&f=false
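As a concrete illustration of the notion, topological entropy can be estimated numerically for a simple chaotic map. The sketch below counts distinct symbolic itineraries of the logistic map at r = 4, whose topological entropy is known to be log 2 (the word length and sample size are arbitrary choices):

```python
import math
import random

def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def topological_entropy_estimate(n=12, samples=20000, seed=1):
    """Crude estimate: record the length-n symbolic itinerary
    (0 if x < 1/2 else 1) of many random orbits and take
    log(#distinct words)/n.  For r = 4 essentially all 2**n words
    occur, so the estimate approaches log 2."""
    random.seed(seed)
    words = set()
    for _ in range(samples):
        x = random.random()
        word = []
        for _ in range(n):
            word.append(0 if x < 0.5 else 1)
            x = logistic(x)
        words.add(tuple(word))
    return math.log(len(words)) / n

h_top = topological_entropy_estimate()   # should be close to log(2)
```

This counting-of-words definition is exactly the "how much information per iteration" reading given above: the admissible words grow like exp(h·n), and h is the growth rate.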
> Eugene,
Could you give a reference (+ pages if in a book) for this? I don't find it in "I. Prigogine. Introduction to Thermodynamics of Irreversible Processes. Interscience Publishers, New York, 2nd edition, 1961" (nor in "I. Prigogine and J.M. Wiame. Biologie et thermodynamique des phénomènes irréversibles. Experientia, II/11:451-453, November 1946").
Dear Guibert, thanks for the information about your research. It is really relevant here.
I think it will not be difficult to verify that Von Neumann was right. For this reason, all approaches to entropy require consideration and analysis.
Aleš, I know about that thread, but have little desire to participate in the discussion. I think Guibert's position there is close enough to mine; that is obvious from my comments.
To assert that the theory of dynamical systems has nothing to do with the origin of life is practically equivalent to asserting that mechanics has no involvement in this process.
Dear Guibert,
I enjoyed reading your approach to the question put forward. I will not make a contribution but will instead add a question here. Can we not see 'entropy' – to be more specific, 'configurational entropy' – as an underlying principle for the 'biological diversity' that we observe? I personally do not like the conventional definition of 'entropy' as 'disorder'. Instead, if we see it as 'diversification' in the system during any change, perhaps we can adopt the concept more easily.
> Guibert
Why do cells divide and remain of constant size? Because when the size of a cell exceeds some limit, the rate of entropy production (proportional to volume) becomes greater than the rate of its escape (proportional to surface area), and the balance cannot be sustained. The entropy of a cell is constant during the normal life of the cell.
By pointing to Prigogine's name I meant his theorems. I don't remember the exact references, but you can find them in any textbook on biophysics.
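Eugene's volume/surface argument can be made concrete with a back-of-the-envelope sketch for a spherical cell (the rate constants `C_PROD` and `C_ESC` are purely illustrative):

```python
import math

C_PROD = 1.0   # entropy production per unit volume (illustrative)
C_ESC = 3.0    # entropy escape per unit surface area (illustrative)

def entropy_rates(r):
    """Production scales with volume, escape with surface area."""
    production = C_PROD * (4.0 / 3.0) * math.pi * r ** 3
    escape = C_ESC * 4.0 * math.pi * r ** 2
    return production, escape

# Balance: C_PROD*(4/3)*pi*r^3 = C_ESC*4*pi*r^2  =>  r_crit = 3*C_ESC/C_PROD.
r_crit = 3.0 * C_ESC / C_PROD
prod_at_crit, esc_at_crit = entropy_rates(r_crit)
prod_big, esc_big = entropy_rates(2.0 * r_crit)
# Beyond r_crit production outruns escape; dividing restores a
# favourable surface-to-volume ratio.
```

Because volume grows as r^3 while surface grows as r^2, the balance can only hold below a critical radius, which is the quantitative content of the argument that cells must divide to stay viable.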
Dear Ali,
Yes indeed, I would also rather go in the direction of "configurational entropy" reflecting "diversification in the system during any change", as you put it. I have little time at present, but will try to give a more elaborate view on this issue in another post.
But, as Marko and Vasiliy reminded us by quoting Von Neumann: "... no one knows what entropy really is ...". This makes it very difficult to build something coherent that is applicable to biological diversity and not in contradiction with acquired knowledge.
Dear Eugene,
Thank you for your explanation. I think there is still a lot of confusion about the evolution of entropy in biological systems.
What you are referring to is contained, among others, in a 1984 article by Robert Balmer:
Robert T. Balmer. An entropy model for biological systems entropy. Chemical Engineering Communications. Vol. 31, 1-6 (1984), p. 145-154
Here is the abstract (underlining mine):
“The similarity between the entropy balance equation of the Second Law of Thermodynamics and the classical balance equations of chemical reaction kinetics or allometric biological growth models is used to construct an entropy model for biological systems. In this model the entropy transport rate is assumed to be proportional to surface area and the entropy production rate is assumed to be proportional to system volume. It is also hypothesized that, at any time (age), the entropy transport and production rates depend directly upon the instantaneous value of the system total entropy. The resulting entropy rate balance can then be solved uniquely for the total system entropy as a function of time (age).
Also, it is postulated that all living systems are characterized by a continuously, decreasing total entropy level, and that biological death occurs at some minimum total entropy value. The time required to reach this minimum is the lifespan of the biological system.”
First, this model is based on assumptions – see the words "assumed", "hypothesized", "postulated". Furthermore, as you also mention in your post, it speaks of "rates": the entropy transport rate and the entropy production rate. What Prigogine showed is indeed that for open systems near equilibrium the entropy production rate evolves towards a minimum. And what is reflected in this article is that there is a balance between thermodynamic entropy rates. Although this is a theoretical assumption made to allow computations, it is in line with my view of the steady state (second or stationary stage) corresponding to the true life (80-90%) of, say, a cell. But it is not the total life from the instant of birth to the instant of death.
Moreover, there are other entropies at stake, due to the acquisition of information, the accumulation of defects, etc. I am therefore strongly opposed to the final postulate that "all living systems are characterized by a continuously decreasing total entropy level, and that biological death occurs at some minimum total entropy value". On the contrary, collapse or biological death occurs after a third stage in which the accumulation of defects has become preponderant and the steady state cannot be held any longer (with a corresponding increase in entropy).
As you put it yourself, the balance is only a rough one, i.e. it holds within a range allowing sustainability: "...greater than some limit ... balance can not be sustained...".
You then write that "Entropy of cell is constant during normal life of cell". This is not a direct consequence of the balance of thermodynamic entropy rates and is probably not entirely true. The entropy could simply increase at a slow rate, within an allowable range, during most of the life because, as Prigogine showed, the rate of production of thermodynamic entropy tends towards a minimum.
This is even clearer from an earlier article by Balmer:
Robert T. Balmer. Entropy and aging in biological systems. Chemical Engineering Communications. Vol. 17, 1-6 (1982), p. 171-181
Here is the abstract :
“The Second Law of Thermodynamics (which is often referred to as “times arrow” because it dictates the direction of increasing chronological time), should have a direct bearing on the aging process in biological systems. Basic aging related phenomena are discussed from the point of view of an entropy rate balance, and the known effects of body temperature, metabolic rate and food consumption rate on lifespan are considered. The problems of quantifying the rate of internal entropy production of complex systems is investigated using the fundamentals of non-equilibrium thermodynamics. Entropy rate balance data over the lifespan of the annual fish Nothobranchius guentheri are used to illustrate basic aging phenomena”.
The fundamentals of non-equilibrium thermodynamics (Prigogine) are used in an attempt to quantify the rate of internal entropy production of complex systems. What those fundamentals say is that the rate of internal entropy production tends towards a minimum.
Finally, I would like to redirect you to an earlier post by Aleš Kralj: “Indeed. As Marko pointed out, there are different "entropies". Shannon entropy deals with functional information in organisms. This is the one in the DNA/RNA. If you want entertain calculations of Kolmogorov complexities of a living being, these are tools for you. Shannon entropy of a living being increases through their lifetimes. This is mainly due to copy errors (and other mutations).”
With time, copy errors (and other mutations) occur in the DNA/RNA, to which harmful influences from environmental stressor agents can be added, also impairing the integrity of these informational molecules (UV radiation, pollution, chemicals, ...). So Shannon entropy increases over the lifetime of an organism.
To my knowledge this is not taken into account in any quantitative model based on a balance between thermodynamic entropy rates.
The presence of at least two oppositely directed processes is the only reason for balance. This is important to understand not only in relation to entropy. It is a global principle – it can be called a principle of self-organization – and in fact it covers all processes (from nuclear physics and thermodynamic equilibrium to a system counteracting the adaptive landscape, the market, etc.).
Some fresh ideas on the subject...
http://physicsworld.com/cws/article/news/2016/oct/18/consciousness-is-tied-to-entropy-say-researchers
Dear Guibert,
What I understand from your earlier post is that the emergence of life itself corresponds to an era in the evolution of the universe where the entropy is increasing exponentially (in analogy with creep, the tertiary creep range). Should we not then expect life forms to adopt a similar strategy? Again, as I understand your comments – and yes, you do refer to the ideas of others – life forms go through a decreasing rate of entropy production, and death comes at some stage where the entropy output is minimal. This may be understandable if we consider all life forms as one single system, because then death seems to increase at least the configurational entropy of the system. However, what happens over the life span of an individual living organism does not seem to be in line with the existing trend of the universe.
I am curious to read your comments on this if, needless to say, what I said bears any meaning at all. Thanks.
Dear Guibert,
thank you for your informationally rich comment. I only want to add that thermodynamic entropy and Shannon entropy are not the same. It is a strong assumption that they are, and it is not adopted by everybody. The notion of information cannot be strictly defined, especially its meaning. Problems remain.
Regards,
Eugene.
Dear Ali,
Thank you for your interesting question. It seems that you have already thought a lot about the subject of this thread. For clarity, I'll try to answer your question in three parts.
>> What I understand from your earlier post emergence of life itself corresponds to an era during the evolution of the universe where the entropy is increasing exponentially (in analogy with creep, the tertiary creep range).
To start with the answer, a remark: the curves shown in my first post (for the evolution of the Universe and for creep) are schematic. Just notice that the three stages are given about the same importance in these curves. This is convenient for fixing ideas. However, the curves are seldom like this in practice. The first stage is usually much smaller and the third stage does not accelerate so drastically. But the three stages are still there: a first stage with curvature concave from below, a second stage like a straight line, and a third stage with curvature convex from below. I give you two examples: an actual creep curve ("creepcurve.png") and a typical computed one ("globalageing01.png").
Because of the three stages, many models have been developed that separate them and focus on the second stage. The first stage is often neglected, and the third stage is considered equivalent to the end of life. Only the second, linear stage and the beginning of the third stage are considered of interest, as reflecting the "true life". All these models then refer to specific characteristics of the kind of systems under scope.
For instance, one relates the creep strain rate during the secondary stage to the imposed stress and to a strain-hardening coefficient proper to the investigated material. There are also models based on microscopic characteristics of the materials.
For biological entities, one will, for example, model the survival curve in relation to the entity under scope. The mortality curve (mortality rates per 100,000 men as a function of age) also allows populations to be compared (in space and in time) across the three stages: (1) infant death rates, (2) the minimum mortality rate during most of life, and (3) increasing rates for the elderly.
Similarly, in the reliability analysis of mechanical devices, the bathtub curve is divided into three stages: (1) infant illnesses, (2) actual useful life, and (3) end of life. One focuses on the second stage, where the failure rate is considered to be constant at a low level. A typical bathtub curve is attached ("bathtubcurve01.png"). Note that the mortality rates during the second stage of the mortality curve are also seen as minimal and quasi-constant. This would reflect the entropy-rate balance during the "true life", as mentioned in a former post by Eugene.
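The bathtub shape just described can be sketched as a hazard function with three additive terms: a decaying burn-in term, a constant random-failure floor, and a growing wear-out term. This is a common textbook decomposition, not Crevecoeur's specific model, and all parameters are illustrative:

```python
import math

def bathtub_hazard(t, a=0.5, b=0.02, c=1e-4):
    """Toy bathtub failure rate.
    a*exp(-t):    infant failures dying out (stage 1),
    b:            constant useful-life rate (stage 2),
    c*exp(t/10):  wear-out taking over late in life (stage 3)."""
    return a * math.exp(-t) + b + c * math.exp(t / 10.0)

# The curve is high early, flat in mid-life, and rises again late.
h_early = bathtub_hazard(0.1)
h_mid = bathtub_hazard(30.0)
h_late = bathtub_hazard(100.0)
```

The mid-life value is dominated by the constant floor `b`, which is the quantitative analogue of the "minimal and quasi-constant" mortality rates during the second stage.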
Note that the mortality curve in biology, the bathtub curve in reliability analysis, and the strain-rate curve in creep have similar shapes. Yet each is considered proper to its own field.
Finally, in cosmology, the standard model (Λ-CDM) roughly distinguishes three stages in the evolution of the scale factor: (1) the start and first moments of the expansion (up to about 380,000 years = emission of the Cosmic Microwave Background radiation = end of the radiation-dominated era), (2) deceleration from baryonic and dark matter (matter-dominated era), and (3) acceleration from dark energy. Our three stages are back.
This long introduction was needed to answer the first sentence of your question.
Indeed, it is risky to say that the entropy is increasing exponentially. It is the expansion of the Universe (represented by the scale factor) that is accelerating, not the entropy. From models of the entropy evolution of the Universe one would probably say that its entropy is increasing (linearly, in my view), but I doubt that it accelerates. Thus there is probably no link with the emergence of life (if I understand your question correctly). Similarly, in the case of creep, the entropy does not increase exponentially during the tertiary stage: only the strain does.
Now, to answer the timing part, I must quickly recall two basic equations of my model (already given in a former post):
E(t) = k · exp(α·t) · t^β    (1)
with 0 ≤ α and 0 < β as defined in the former post.
Dear Guibert,
I am truly indebted to you for devoting such an effort to answering my remarks. It was clear enough (don't think that I read it only once; I am no genius) that the emergence of life did not exactly match the onset of the tertiary stage during the evolution of the universe.
I will now kindly ask whether you deliberately left the second part of my comment unanswered, or whether the answer is contained in your last detailed reply. You are certainly under no obligation to educate me, but I will take the liberty of repeating the request from my earlier comment. Okay, the entropy is not increasing exponentially, but it is increasing nevertheless. So how come life forms adopted strategies whereby their entropy output decreases until it becomes minimal at death? Or is this my misinterpretation? As I dared to suggest in my earlier comment, can we regard all living things as one system, to circumvent this problem (the problem of decreasing entropy output over the lifespan of an individual organism)? Admittedly, this may only be a problem if my understanding of the matter is skewed. I just cannot come to terms with decreasing entropy output when everything seems to degrade as organisms age. In my understanding, entropy is, regardless of the subject field or matter, the stolen or scavenged portion of the energy, diverted into things that were not the aim at all. Thank you for bearing with me.
Dear Ali,
This is my answer to the second and third parts of your question.
>> Should we not expect then, life forms adopting a similar strategy? Again as I understand your comments, and yes you do refer to the ideas of others, life forms go through a decreasing rate of entropy production. And death comes at some stage where entropy output is minimum.
Answer :
As I developed in the former post, according to me, life didn’t emerge at a moment corresponding to an exponential acceleration of the entropy of the Universe, nor during the tertiary stage of the expansion of the Universe (an obvious exponential increase as seen on the curve after the secondary stage). On the basis of computations, I think that the Universe was still in its secondary stage when life emerged around 3.9 Gyears ago, and it is still in its secondary stage now. The fact that we have detected that the expansion is accelerating is only because the Universe has passed the inflexion point (about 7.12 to 7.56 Gyears after the Big Bang), but the part of the curve joining the inflexion point to the point at present time, and passing through the point corresponding to the emergence of life, would still look like a straight line.
A small remark at this stage of the answer: I only speak of “how” complex adaptive systems evolve and age when they are “put into action” under constraints. They follow a 3-stage curve similar to a creep curve. About “why” they are put into action I usually don’t know. Of course, when I drive a new car or run a new engine for the first time, I know that I am at the origin of their “life”. I am the “why”. I also know what happens at the beginning when I start a creep test on a sample: I am running it. But concerning living entities it already becomes more complicated. Yes, science knows or progressively discovers the ins and outs of the origin of bacteria, of the foetuses of mammals, etc. But “why” they are alive is another question.
The “why” and “how” of life on Earth is discussed here:
https://www.researchgate.net/post/How_was_life_originated/13
So about the “why” of the moment of emergence of life on Earth, I have no answer. I only can say that it should not be due to elements which don’t match with the 3-stage curve. And that the environmental conditions should have started to be propitious on Earth.
Finally, concerning the “why” and “how” of the origin of the Universe, I have no answer, only speculations !
Now you put it correctly that “life forms go through a decreasing rate of entropy production”. But this is not only for life forms. This is also for the “life” of mechanical devices, components subject to creep, etc.
That “death comes at some stage where entropy output is minimum” is not false but could be stated more exactly. The rate of production of entropy reaches a minimum at the instability time ti. Natural death can start as soon as this time. Afterwards, things become probabilistic: the process can continue to follow the same curves, or damage can accumulate. After a while, which can nevertheless mean 10-20% of life or even more, the integrity of the system is lost and the system collapses (death). During this phase, which is in the tertiary stage, the rate of production of entropy increases strongly because of the accumulation of defects. So the entropy output at death is no longer minimal in most cases.
I attach a fictitious example of what I mean (“ageingcurve2.png”): the measured and calculated curves correspond to data for a submarine diesel engine, but the time on the abscissa has been rescaled to match human life. The purpose is didactic, in order to compare the curves with the different times used to describe human ageing. So one has: ti = instability time; tμ = life expectancy; tr = lifespan of an individual; tm = longevity.
Thus death at ti or at tm is rare (very rare, or even not yet observed for tm, though theoretically possible…) and most people die in between. However, the times tμ and tr will slip to the right with increasing quality of life and care of the elderly. The same holds for ti for given populations with improving health at a young age.
------------------
>> This may be understandable if we consider all life forms as one single system. Because then death seems to increase at least the configurational entropy of the system. However, what happens over the life span of an individual living organism does not seem to be in line with the existing trend of the universe.
Answer :
So I think that what happens over the life span of an individual living organism is in line with the existing trend of the universe.
Also, we could consider all life forms as one single system. Some hints on this are given in following working paper :
https://www.researchgate.net/publication/305730824_Quantitative_Evolution_of_Species
About death seeming to increase at least the configurational entropy of the system, we first have to define precisely the configurational entropy of the system and the meaning of the words “increase” and “at least” in this context.
I agree with Aleš on the idea that “…entropy is not a major player in thermodynamic of life. It is a consequence of processes thereof not the source of explanation.”
Dear Ali,
I just read your last post and I think that I have answered the second and third parts of your question in line with your expectation. If not, please don’t hesitate to ask again !
In reaction to your last comments, we must agree on the wording. What is decreasing is the rate of entropy production, i.e. the rate at which entropy is produced. The entropy itself is always increasing, even if more slowly. I understood your wording “decreasing entropy output” as meaning “decreasing rate of entropy production”. If that is not the case - as your following sentence seems to suggest (“when everything seems to degrade as organisms age”) - then please adjust.
Everything degrades as organisms age, but not at a constant rate. Due to adaptations and positive exchanges with the environment, the rate of degradation diminishes up to the instability time ti. Then the rate is prone to increase again. Schematically, entropy again increases quicker and quicker due to the accumulation of defects. There is a snowball effect, and collapse of the system (death) occurs sooner or later.
Once more as Aleš puts it : “…entropy ….. is a consequence of processes ….. not the source of explanation.”
OK, Guibert, entropy is a consequence, not the source... It is evident.
Dear Ales and others,
any cell membrane is a Maxwell's demon in some sense.
Regards,
Eugene.
Concerning the edge "system-environment (enveloping system)", I wanted to write more yesterday but did not have time to outline a sketch. Now I am on the road; I will do it when I have finished the sketch, if possible. It is directly related to the key idea of the theory of self-organized criticality. This applies not only to cells.
Incidentally, in an automodel (self-similar) process of self-organization, the "source-consequence" relationships are not so simple; they cannot be clearly differentiated.
Dear Guibert,
I cannot thank you enough for your lucid and elegant answers to my questions. Personally, I have gained a lot. I have indeed found my answers in your detailed comments. I also would like to thank all other followers of the question, which, I think, is a worthy question indeed.
You are right in the point you made that "entropy is a consequence, not the source". However, following the entropy changes is still the correct path to understanding the direction of the changes (in other words, to predict or to understand what is or was possible); I am not trying to teach anyone what entropy is, this is just a figure of speech. I just wanted to say this as a final comment on my side (not as an opposition to your approach).
On a different front, may I kindly ask whether you can suggest any reading, if it exists, on social entropy (or the thermodynamics of societies as systems)?
Best regards.
Indeed, the number of cybernetic organizational configurations available to complex adaptive systems - based on combinations of multiple positive and negative feedback loops of several orders at each increment of time, as well as on their macroscopic evolution in time - is limited in the space of configurations. And I am not sure that the use of "material creep functions" to describe the global behaviour is just metaphorical in all analyzed cases of systems! I would rather say that creep is a particular case among other cases, maybe an experimentally more obvious one.
Dear Aleš,
Yes, I agree on the risk of superficial similarities, like referring to nonlinear dynamics and dissipative systems far from equilibrium as a direction in the search for the origin of life. I fully understand your fear that people might be fooled into odd beliefs by such similarities.
Concerning similarities with the creep curve, I would just add that one can easily make simple computer simulations using e.g. a system of equations as given below (combining first-order positive and negative feedback loops and an equation for the bi). One then finds curves for y(t) and z(t) as shown. The curve for y(t) is a creep-like curve.
To be fully clear, the curve for ξi(t) is the same as the one for bi(t): two different records in my databank! Apologies for the mismatch!
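Since the actual system of equations was given in an attached image, the system below is only an assumed stand-in in the same spirit: a saturating first-order negative feedback loop (z, a "hardening" variable) combined with a first-order positive feedback loop (the d·y "damage" term). The names and coefficients are mine, not those of the original post:

```python
# Hypothetical stand-in for the system in the attached figure (the original
# equations are not reproduced here): z is a saturating first-order negative
# feedback loop ("hardening") and the d*y term is a first-order positive
# feedback loop ("damage"). All names and coefficients are assumptions.
dt, t_end = 0.01, 200.0
a, z_inf, c, d = 1.0, 2.0, 0.1, 0.005
y, z, t = 0.0, 0.2, 0.0
rates = []
while t < t_end:
    r = c / z + d * y            # instantaneous rate dy/dt
    rates.append(r)
    y += dt * r                  # explicit Euler step for y
    z += dt * a * (z_inf - z)    # z relaxes towards z_inf (negative loop)
    t += dt
# the rate falls (primary stage), stays low (secondary), rises again (tertiary)
```

Euler integration of this toy system yields a creep-like y(t): a high initial rate, a long quasi-steady stretch, then re-acceleration once the positive loop dominates.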
Dear Kralj,
I would not dispute that there is no law that prevents a previously untested combination from being tried. Nature itself tries all the time, even combinations that have no chance of survival. Atoms do not have brains. They do not jump in one direction alone when they are in search of a new position. But in the end we human beings need to devise some logic, or laws if you like, in order to understand the total final (?) change, and to have some command. Otherwise we would have to surrender to the hands of those claiming divine intervention. So I would still stick to thermodynamics.
I am sorry to see that there are no suggestions for reading on the applicability of thermodynamics to the humanities (social changes). I have always thought it would be quite entertaining, to say the least. And perhaps comforting too, in the face of all the, at the least disagreeable, changes Homo sapiens has forcibly exerted, and continues to exert, upon others and on this planet.
Dear Aleš,
Thank you for your question. The physical quantities I wish to assign to these patterns are : (1) the entropy (configurational entropy) related to the internal organization of the system ; (2) the stress or applied load (in a broad sense) ; (3) the temperature. And, of course (4) the available energy to allow for things to happen, to allow for evolution and life.
Now I need to elaborate this a little bit further here to be better understandable (but all details are in my articles, book and working papers on RG).
My proposed approach is indeed a generalized view in the sense that common features are searched in several kinds of systems showing similar macroscopic behaviour.
The 3-stage curve given by E(t) (or y(t) in the former post) is seen as a macroscopic envelope curve resulting from the combination of all positive and negative feedback loops of several orders operating inside the complex adaptive system (CAS) at all levels at each increment of time.
These depend on :
- The internal organization (in the broad sense) of the CAS : Ω (t) (it is fixed in the beginning but it can slightly change during operation without impairing the integrity of the CAS)
- The constraints on the CAS : everything that threatens its integrity as a complex system. This corresponds to loading or stress (in the broad sense) : Σ (t) (it is usually variable in nature but it can be fixed e.g. for experimental purposes)
- The temperature : T (t) (it is usually variable in nature and in function of local conditions inside the system but it can be fixed e.g. for experimental purposes)
In addition, one has the available energy for the system to operate : Є(t) (it is usually variable in nature but still lying around an average value corresponding to the energy necessary for the CAS to function and it can be fixed e.g. for experimental purposes). In creep, in function of the microscopical structure, there are several energies which can play a role in the adaptations : activation energies, stacking fault energy, … For cars, engines etc., one has the energy provided by the fuel. For biological entities, one has the chemical metabolic energy in different forms : ATP molecules, stocked energy in mitochondria (in cells), …. In cosmology, one has the energy density of matter (baryonic and dark) and dark energy which play a role.
Now, a few descriptions to clarify the links between the concepts used and CAS.
1) Feedback loops are a common feature in nature, from sociology and social contacts down to fundamental physics. By answering your question I am creating a feedback loop with you (I know you are familiar with all these things, but I think it useful to repeat them here for any potentially less informed reader). Also in fundamental physics (Quantum Mechanics, Quantum Field Theories, …) there are feedback loops. When two electrons exchange a photon, there is a feedback loop between them, but it is modelled as an interaction, not as a feedback loop (using equations for feedback loops). When there is a collapse of the wave function, it is due to a feedback loop between the particle and the experimental device measuring its behaviour. This is at least a usual qualitative interpretation, as this kind of interaction is not actually taken into account in the models. Feedback loops are also known to be a common feature in biology, see e.g.:
https://www.google.be/search?q=%22feedback+loops+in+biology%22&sa=N&biw=1536&bih=770&tbm=isch&tbo=u&source=univ&ved=0ahUKEwij4MiywvXPAhVDExoKHZNmBM44ChCwBAga
https://www.albert.io/blog/positive-negative-feedback-loops-biology/
2) Negative feedback loops correspond to adaptations related to Ω(t). The CAS enjoys an initial basic internal organization, although additional skills and organizational features can enrich it and make Ω(t) vary in the course of time. The feedback loops are physical and chemical reactions allowing the system to maintain its integrity, e.g. integrity as a metal in creep (thanks to restorative effects: dislocations are removed by forming vacancies, …), homeostasis, homeothermy and similar features in biology, reliable operation for cars, engines, etc. In general, the capacity to operate as a whole. This is done in function of the energy and matter (degraded by chemical reactions, …) available (in open systems, there are flows of energy and matter between the CAS and its environment). Negative feedback loops are also information driven: it is because something happens that the negative feedback loop is put into operation. I give the example of the lactose operon in biology in a working paper here:
https://www.researchgate.net/publication/305082366_First_order_negative_feedback_loop_in_relationship_with_lactose_operon
There is an entropy (configurational entropy) associated to Ω(t).
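In the same spirit as the lactose operon example, a first-order negative feedback loop can be sketched in a few lines. The rate constants below are made-up illustrations, not values from the working paper:

```python
# Toy sketch of a first-order negative feedback loop in the spirit of the
# lactose operon example: the substrate S induces production of the enzyme
# Enz that consumes it, so the loop switches itself off once S is exhausted.
# Rate constants are illustrative assumptions, not taken from the paper.
dt = 0.01
S, Enz = 1.0, 0.0
for _ in range(5000):            # integrate up to t = 50
    dS = -2.0 * Enz * S          # substrate consumed by the enzyme
    dEnz = 1.5 * S               # enzyme production induced by S
    S += dt * dS
    Enz += dt * dEnz
# S has been driven to ~0 and enzyme production has stopped with it
```

This is the information-driven character mentioned above: the loop only runs because the substrate is present, and it shuts itself down by removing its own trigger.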
3) Positive feedback loops correspond e.g. to errors which are sources of further errors (because of impairment of information, signals, …), etc., but not only.
4) Constraints Σ(t) appear because the CAS is put into operation. For a metal, to be subject to creep means to cope with an imposed load or stress (at a given temperature and under given environmental conditions). For a car, to ride means to be constrained in several ways by the driver, the environmental conditions, the quality of fuel and oil, …. For a biological entity, to live means to steadily face external (and sometimes also internal) stressor agents which induce internal stresses. Even the Universe appears to be subject to an internal pressure as a constraint.
5) The temperature T and/or its variations can have a positive or negative influence on the constraints and on the resulting adaptations within the CAS.
There is also a kinetic aspect: adaptations to the constraints must be brought about in due time. So errors or small changes to Ω(t) can occur. They can be harmful or harmless, but usually the integrity is maintained for a time long enough to allow one to speak of a « true life ». Some changes can even be useful and improve Ω(t).
Then we have (schematically) :
At the beginning of operation, the rate of production of defects/errors (RPO) is higher than the rate of production of adaptations (RPA) but diminishes as the efficiency of the adaptations increases: RPO > RPA and RPO ↓ (1ary stage – « learning »)
During « true life », there is a balance between the rate of production of defects/errors and the rate of adaptations, which allows the integrity of the CAS to be maintained: RPO ≈ RPA (2ary or steady state)
At more advanced age, the rate of production of defects/errors is no longer sufficiently compensated by the rate of adaptations: RPO > RPA and RPO ↑; the rate of production of defects/errors increases up to collapse or death of the CAS (3ary stage – progressively accelerating loss of integrity).
Now, for the kinds of CAS considered, there is often at least one observational parameter which reflects the global evolution/ageing of the system. For instance, in the case of creep of metals, it is the strain. For mechanical devices, it will be the cumulated number of damages. For biological entities, it is the increase of « odd features », e.g. the intracellular accumulation of human peripheral nerve myelin by monocytes/macrophages, or bad results in memory tests, the losses of capacities with age, or also, in a statistical way, mortality curves, …. Selye’s General Adaptation Syndrome can be analyzed using the approach too, see:
https://www.researchgate.net/publication/305609884_A_system_approach_to_the_General_Adaptation_Syndrome
For the Universe, it appears that the « observational parameter » would just be its expansion as given by the evolution of the scale factor and as resulting from the balance of gravitation (baryonic and dark matter) and dark energy.
Now, I further give some examples on how specific cases can be modelled :
Creep curves can be fitted with E(t). The parameter « k » can often be expressed as :
k = κ·exp(−Q/RT)·σ^b, with « Q » an activation energy and « σ » the applied stress. The following equation is then easily deduced:
ln ti = (1/β)·(ln(E(ti)/κ) + Q/RT) − (b/β)·ln σ
which can then be used to build [stress – instability time] bi-logarithmic diagrams for different temperatures, or compared to existing [stress – rupture time] bi-logarithmic diagrams for these temperatures, etc. Extrapolations to actual cases in plants can then be made using these diagrams.
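The deduced relation follows from inverting E(ti) ≈ k·ti^β (the exp(α·t) factor neglected near ti). A numerical consistency check, in which every parameter value is an illustrative assumption rather than real creep data:

```python
import math

# Consistency check of the deduced bi-logarithmic relation, assuming
# E(t) ~ k * t**beta near ti and k = kappa * exp(-Q/(R*T)) * sigma**b.
# Every numerical value below is an illustrative assumption.
kappa, Q, R, T = 7.0e7, 3.0e5, 8.314, 813.0   # Q in J/mol, T = 540 degC in K
sigma, b, beta = 100.0, 4.0, 0.4              # hypothetical stress exponent b
E_ti = 0.05                                   # assumed strain at instability

k = kappa * math.exp(-Q / (R * T)) * sigma**b
ti_direct = (E_ti / k) ** (1.0 / beta)        # direct inversion of E(ti) = k*ti**beta
ln_ti = (1.0 / beta) * (math.log(E_ti / kappa) + Q / (R * T)) \
        - (b / beta) * math.log(sigma)        # the relation quoted above
# both routes give the same instability time
```

With these assumed numbers, higher σ or higher T shortens ti, as the bi-logarithmic diagrams require.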
In biology, similar experiments can be performed, e.g. to compare the lives of different types of cells under given stresses at various temperatures. Accelerated cellular senescence can be tested under subcytotoxic stresses at different temperatures. Other uses of the model include analyzing capacity losses (CL) with age, or stress syndromes, using equations of the type CL(t) ≈ exp(−c·(dE/dt)), with « c » a constant specific to the CAS, as deduced from tests or measurements.
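As a small numerical illustration of the CL(t) relation (all constants assumed, not measured): since dE/dt is bathtub-shaped under Eq. (1), the quantity exp(−c·dE/dt) is extremal during the steady stage, whatever physical reading one attaches to it.

```python
import math

# Evaluating CL(t) ~ exp(-c * dE/dt) along Eq. (1). The constants
# (k, alpha, beta, c) are assumptions for the sketch only.
k, alpha, beta, c = 1.0, 0.05, 0.3, 5.0

def dE_dt(t):
    # derivative of E(t) = k * exp(alpha*t) * t**beta
    return k * math.exp(alpha * t) * t**beta * (alpha + beta / t)

def CL(t):
    return math.exp(-c * dE_dt(t))

# CL peaks in the steady stage because dE/dt is minimal there
```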
Mortality curves in biology and bathtub curves in reliability analysis of mechanical devices can be fitted using dE/dt.
Finally, in cosmology, the relationship of the parameters α and β with the energy densities of radiation, matter and dark energy can be analyzed. As an anecdotal remark, I had already considered the possibility of an accelerated expansion of the Universe in an article of 1997. Here is the reference :
G. U. Crevecoeur. A system approach to the Einstein-de Sitter model of expansion of the universe. Bull. Soc. Roy. Sci. Liège, 66(6) :417–433, 1997.
In summary, the precise constraints, organizations (entropies), energies and temperatures depend on the CAS under scope. Until now, I have always encountered this proposed general pattern of evolution/ageing as a general characteristic of CAS evolving under constraints (at a given environmental temperature). To my knowledge and experience, the proposed model allows one to design experimental protocols and to fit experimental results, for instance on t2 or ti. It also allows one to use measurements for a quantitative assessment of the influence of functional parameters specific to the CAS (on t2 or ti, for instance).
Dear Aleš,
Thank you for your hints and directions for further research, especially on information and (informational) thermodynamics (although the model has no direct need of it to give results). There is clearly a path for interesting developments, for instance about the ins and outs of negative feedback loops in the processes.
You are a deep thinker, aren't you? (a habit from research, innovation, patenting, thought experiments, …?).
Concerning point 3 and the two last paragraphs, I find your thesis very interesting and have read your recent article put on RG. I must confess that it is partly a new world for me, though one looking full of promise! I cannot yet grasp everything in depth. I have found your description of chemical evolution in Earth silicate melts as generating a new form of life fascinating since the first time you wrote about it in RG threads. Note that I learnt a lot of chemistry a long time ago but never felt like a chemist deep at heart :=(.
So I have started reading Jarzynski to better understand all this new stuff.
Now, for the time being, it is hard for me to accept that creep would prove to be related to the processes of life as regards configurational thermodynamic entropy and 'accumulation of knowledge'. Although expressions like “local islands of reduced configurational entropy will emerge. This may be viewed as a self-organising process with a cybernetic background. Information is produced and stored there” don’t hurt me at all! On the other hand, this would give further enlightenment on why analogies are found between creep and biological behaviours.
Let's check your last paragraph in more detail. An expert examining material that failed under creep after a long period (say a 2.25Cr1Mo steel after 200 000 – 300 000 hours at 540 °C) would indeed identify the creep failure mode by analysing the material remains. There would be some alignments of microholes and resulting microcracks at grain boundaries in the microstructure (perhaps the equivalent of what you call “filament-like microstructure”). However, as a physical metallurgist, I wouldn’t call this chemical evolution, nor speak of “information addition”. Rather a physical “metallic evolution”, as there are no chemical reactions involved, just bonds between atoms which are broken and replaced by others. The end microstructure is just the result of dislocations sliding and climbing in the crystalline structure (crystal planes move due to the stress), accumulation of dislocations and restoration by vacancy formation (nanostructure). The vacancies then diffuse to the grain boundaries and merge to form microholes at energy wells (microstructure). All physical stuff thus, not chemical. However, you might indeed call this “thermal motion”. And it is a self-organising process. But I wouldn’t speak of a “noisy search” nor of a gain of information in the process. Nevertheless, the idea of pre-life conditions existing in creep is interesting and worth storing in memory. Finally, note that when a 2.25Cr1Mo steel is subjected to accelerated creep tests at a higher temperature (say 650 °C), rupture occurs in a dozen or a few dozen hours (instead of 200 000 or 300 000 hours) depending on the applied stress. Then you don’t see anything microstructural in the remains showing creep rupture: the metal looks "as new". But vacancies are probably already there in the nanostructure (I didn’t check …).
About the Universe, we need to be modest and patient indeed. Isn’t it thought to be 13.8 Gyrs old (to compare to a cosmologist’s life) ?
Dear Aleš,
Excellent ! Thank you for the clarification.
Indeed “substantially changed temperature of the creep experiment influences residual structure after the failure”.
So, I wouldn’t recommend to anybody to bet with you :=)
We understand this by the fact that there are different temperature thresholds for the activation energies corresponding to the processes involved. So the prevailing process is different at 1400 °C, 1000 °C, 600 °C and 300 °C.
And I very much like your hint on “bottom-up” vs. “top-down” approaches !
Dear Ali,
This question of Vasiliy is a really interesting one. It fosters free thinking in several directions. So you asked in a post whether there would exist documentation on the thermodynamics of societies as systems.
In a former post, Aleš proposed literature on thermodynamics applied to economy. I also felt triggered to check for articles on thermodynamics of societies.
As a result, I attach some readings found on the Internet.
I then also quickly checked the issue in the frame of a 3-stage evolution approach.
The first result is that we have very poor information for modelling: what would the constraints (loading, stresses, …) be, and how could they be expressed quantitatively in a way reflecting several different kinds of societies? What could the key permanent points look like for the description of the internal organization of a society seen as a system? What precisely would be the available energy for the society to evolve? How could one make observational and/or experimental protocols involving the start and end of a society? …?
In spite of this, a society can probably be seen as a complex adaptive system (CAS). And there are plenty of examples of past and recent “societies” which (1) formed, then (2) throve and finally (3) collapsed !
But even at this very basic stage of definition of CAS and life in relationship with a society, one encounters big difficulties.
I remember having taken part in a discussion on RG connected, among other things, with the topic “complex adaptive systems”. I was soon put aside by the other participants, as my definition of CAS appeared to be incompatible with theirs. This was because, for them, the designation “complex adaptive system” could only be given to human or ecological groups and systems. A “voluntary” factor was involved. So when I came up with cars, cells and creeping components as examples of CAS, it was considered to be nonsense. Fortunately, after hard discussion, we came to a compromise. But unluckily the discussion suddenly ended.
I give you the link to this RG question because the people there looked very much involved in “social groups” and reading their high quality posts could be valuable in connection with your question :
https://www.researchgate.net/post/What_stories_of_how_complex_adaptive_systems_work_have_you_encountered
Ah, wonderful... Guibert, I will read all these and then perhaps I can make some comments. Thank you so much...
Best wishes and regards
Sorry to interfere in the latest discussion, as promised... Perhaps to some, what is written below will seem to stray from the topic of entropy. I need a skeleton that can hold everything up, organics and human intelligence included.
When it comes to the exchange of information and periodic structures (especially fractals or fractons), questions of symmetry and its violation are relevant. This relates primarily to the construction of an internal model of the environment by a dynamical system (in particular, by a living organism). Balance on the system-environment edge (the percolation critical surface) should be ensured through equilibrium in the bidirectional processes between the object (in: perturbation provoked by the environment landscape) and its model (out: system response and adaptation of the landscape).
This is the simplest key idea of the theory of self-organized criticality. In my opinion, it is simply impossible to be blind to the manifestations of this ubiquitous process in the surrounding world. By the way, the preconditions for its development existed before Prigogine:
"Early Scheme for a circular Feedback Circle" from Theoretische Biologie 1920 - https://en.wikipedia.org/wiki/File:Uexk%C3%BClls_schema_original.jpg
Small circular Feedback Pictograms between the Text - https://en.wikipedia.org/wiki/File:Uexk%C3%BClls_schemas_small.jpg
Schematic view of a cycle as an early biocyberneticist - https://en.wikipedia.org/wiki/File:Uexk%C3%BCll_wirkkreis.jpg
We will return to the cycles in the third part.
Part 1: Model. Symmetry. Dynamical Inversion.
The issue of symmetry in the process of evolution has been on my mind for a long time (some of my notes):
https://plus.google.com/+%D0%92%D0%B0%D1%81%D0%B8%D0%BB%D0%B8%D0%B9%D0%9A%D0%BE%D0%BC%D0%B0%D1%80%D0%BE%D0%B2/posts/3WSF4MUYJeM
Bifurcation on the "axis" of symmetry leads to the formation of two identical objects; life is a glaring example. Everything is so interconnected and striking in its beauty, like a living image of standing waves on the surface of a vibrating tank of liquid.
After finding all the items, it will be possible to determine which chain of bifurcations leads to DNA in the form in which it exists in the world. This is no short story.
I wonder how deterministic this path is. Could there be at least one alternative configuration of the complex interactions between proteins, RNA and DNA? What are the probabilities of a particular path? Or is the answer straightforward, like the number Pi obtained by throwing needles?
https://plus.google.com/+%D0%92%D0%B0%D1%81%D0%B8%D0%BB%D0%B8%D0%B9%D0%9A%D0%BE%D0%BC%D0%B0%D1%80%D0%BE%D0%B2/posts/MgWCU5Cjx5f
Absolutely all high-level attractions (from the interest in inorganic structures to the general processes in economics, etc.) are directly related either to the organization level of living matter or to the organization level of human-order intelligence.
Highly organized organics is a necessary party to, or the catalyst of, all processes in which there are high-level attractions.
A banal idea, but for some reason it is not easy to realize. I have thought about it several times while trying to comprehend the concept of attraction.
S.D.: Maybe this is due to the fact that the living, in contrast to the inanimate, seeks to increase its degree of symmetry?
These attractions are the synergy product of the appropriate level of organization. This is at least associated with cause-effect relationships. For example, in order to perform work (the movement of a cargo), any artificial device must be manufactured and provided with energy; man is here the cause of the process, and nothing would have happened without him.
Is it all the result of the four basic attractions, or is the process more complicated? Many factors speak in favour of the second option, but sometimes there are doubts.
The multiverse concept implies a scale at which every attraction can be localized, and therefore it also tips the scales in favour of the second option.
In fact, not every attraction can be localized on a given scale. There will always be some attraction which is connected to the outside of the open system.
Which attractions are decomposable into simple (additive) components and which are not? (an odd association with the prime numbers)
A system that has the property of integrity cannot be fully decomposed into such components.
In any case, no detector has yet been invented that could decompose, for example, the attraction between the two sexes of a life form into the basic attractions.
In addition, there is an interesting point in the present case: the strong and weak interactions cannot be localized within each individual, yet their cumulative result leads to a long-range interaction between individuals! This also seems related to the non-locality of processes and to entanglement.
To S.D.: By the way, regarding the symmetry-entropy relation, there is work relevant to your question ("Before the Big Bang: towards a Monster Moonshine"); a related discussion also exists on RG. One caveat: it concerns a closed system.
On the question of entanglement in relation to organic life, I was reminded of the uncanny valley: https://en.wikipedia.org/wiki/Uncanny_valley
https://plus.google.com/+%D0%92%D0%B0%D1%81%D0%B8%D0%BB%D0%B8%D0%B9%D0%9A%D0%BE%D0%BC%D0%B0%D1%80%D0%BE%D0%B2/posts/bS1s3hBRFF9
A little more about increasing symmetry (in organic life).
The formation of an internal model of the environment [an inversion of the dynamical system] is, in fact, the direct embodiment of this process. This symmetry is not confined within the system: the interface passes through the "surface" that distinguishes the part from the whole.
In the course of evolution, the internal model tends toward the external system it simulates.
Given that this process is scale-invariant, increasing symmetry can be seen everywhere, both within a selected subsystem and in its external parts, for all coherently evolving structures (subsystems).
It is logical to assume that "spontaneous" symmetry breaking is not a departure from the "right" optimal state but, on the contrary, an alignment of the system toward a state of [coherent] equilibrium [i.e., symmetry] at another scale. [That is the "shedding of cells," as in the BTW sandpile model.]
I am not the only one this bothers:
https://www.researchgate.net/publication/286932976_Before_the_Big_Bang_towards_a_Monster_Moonshine
https://www.researchgate.net/publication/303940948_String-Based_Borsuk-Ulam_Theorem
Part 2: Formal knowledge. Gödel.
If we consider the brain's cumulative knowledge as a system and draw a dichotomy along the boundary of semantics, discarding concrete information, what we obtain at the output is a so-called formal system of interrelations.
To make logical operations on this system possible, it must be closed off by a terminal axiom-cap. Otherwise catastrophe is imminent, provoked by self-reference, which drives the algorithmic process into the territory of the halting and decision problems. The brain responds to this situation in the only way it can: defense mechanisms that eliminate the overload, triggered by frustration, are activated.
Thus everything at odds with the axiomatic foundation of the formal system (in this case, the complete system of human beliefs) is discarded.
After the historically most popular cap concept, this terminal axiom can be called "the will of the creator".
From a computational point of view, everything lacking information is sent to a null address. This establishes a parity, eliminating entropy/chaos from the formal system. A similar technique is used, for example, in the numerical solution of the Navier-Stokes equations, where a free boundary condition for the pressure is retained at a single arbitrary vertex of the domain boundary (to "vent the steam" of the residual that would otherwise overconstrain the rigid linear-algebra system). Many such examples can be found in the theory of algorithms, wherever a system is wired together by strict algebra or logic. Incidentally, even the writers of the Matrix trilogy gracefully exploited this universal escape hatch: by sending Neo to the Source, they spared themselves any further explanation to the viewer of how events had actually unfolded before Neo appeared on stage.
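The "vent the steam" trick for the pressure system can be sketched in a few lines. This is only an illustration of the general idea, not code from any particular solver: a toy 1-D Poisson matrix with pure Neumann (zero-gradient) boundary conditions is singular, because the solution is defined only up to an additive constant, and pinning the value at one arbitrary node restores unique solvability. All numbers below are made up for illustration.

```python
import numpy as np

# Toy 1-D Poisson problem with pure Neumann boundary conditions.
# The matrix is rank-deficient (constant vectors lie in its null space),
# mirroring the pressure system of an incompressible flow solver.
n = 5
A = np.zeros((n, n))
for i in range(1, n - 1):                     # interior: second differences
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0], A[0, 1] = -1.0, 1.0                  # Neumann end conditions
A[-1, -2], A[-1, -1] = 1.0, -1.0

b = np.array([0.0, 1.0, -2.0, 1.0, 0.0])      # compatible rhs (sums to 0)

rank_before = np.linalg.matrix_rank(A)        # n - 1: rank-deficient

# Pin node 0: overwrite its equation with p[0] = 0.
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 0.0

p = np.linalg.solve(A, b)                     # now uniquely solvable
```

The single pinned node is exactly the "vent" described above: one deliberately relaxed equation absorbs the degree of freedom that the rest of the rigid system cannot fix.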
So for myself I put an equals sign: "FAITH = ENTROPY". With the comments above, I hope the logic of this statement is clearer. After all, as information about physical reality accumulates, the "placeholder" for (faith in) the uncertainty of the system shrinks. In the system of knowledge, the room left for entropy in its classical sense, as a measure of missing information, decreases in the process of cognition.
In fact, the process is more complicated and stepwise, because of the capsizing (system inversion) that accompanies a paradigm shift, when hypotheses become axioms of a new belief system (a new level of faith). In addition, the umwelt is constantly growing; Gödel's theorem contributes to this (https://www.researchgate.net/project/Zero-Energy-Universe-Scenario-ZEUS).
The difference between the Turing machine and quantum computing helps in realizing this (by analogy with the Red and White Queens that appeared in the "Walk through..." notes, for those who follow what I am saying).
Part 3: Circular orbits. Uncertainty principle. Algorithmic approach to systems.
In contrast to the abstract objects of string theory, I am now more interested in circular orbits of dynamical systems. For them it is possible to trace an interesting connection with the uncertainty principle of quantum mechanics, the dissipative memory of a system, and more. Just recently I drew attention to this, and to the problem of the concept of a point in mathematics versus reality, in the discussion of the article "A simple definition of Time" by Demetris T. Christopoulos.
In connection with the regularities noticed earlier (https://www.researchgate.net/publication/308066071_Preconditions_of_space-time_dimensionality), my main question is:
Does every structure that has the quality of a system (i.e., some internal advantage over its linkages with the external environment, which provides the stability of the structure's internal configuration) possess a unique circular orbit? If so, the most important key to understanding and deciphering our phase portrait lies here.
Obviously, a circular orbit cannot be reduced to a point. Any circular orbit (except a potentially infinite one) is bounded in (phase) space and has non-zero (spatial) volume.
The first circular orbit should hypothetically correspond to the Universe as a system. Hence the angular momentum (rotator) of a black hole must be correlated with the speed of light, which is effectively confirmed by all observations available today (as far as we can evaluate it from indirect observations). This is to be expected for a scale-invariant process.
Hypothetically, the circular orbit allows us to relate the system's energy (concentrated in its internal degrees of freedom) to the internal space of the Universe (that is, the so-called vacuum).
Any circular orbit of a dynamical system is at least a bistable configuration. In coherence with the first circular orbit, this is where the Nyquist-Shannon sampling limit for quantum interactions arises.
The volume of an attractor's circular orbit is directly related to the uncertainty principle (spectral width × pulse duration). This is especially important for oscillatory orbits near the sampling limit, which are the object of study of quantum mechanics.
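The spectral-width × pulse-duration product invoked here can be checked numerically. A minimal sketch (grid parameters are illustrative): for a Gaussian pulse, the RMS duration of |g(t)|² times the RMS bandwidth of |G(f)|² lands at the lower bound 1/(4π), which is why the Gaussian is called the minimum-uncertainty pulse.

```python
import numpy as np

# Numerical check of the time-bandwidth (uncertainty) product
# sigma_t * sigma_f for a Gaussian pulse.  Grid sizes are illustrative.
n, dt, sigma = 4096, 0.01, 0.5
t = (np.arange(n) - n // 2) * dt
g = np.exp(-t**2 / (2 * sigma**2))          # Gaussian field envelope

def rms_width(x, w):
    """RMS width of the non-negative distribution w over the axis x."""
    w = w / w.sum()
    mean = (x * w).sum()
    return np.sqrt(((x - mean) ** 2 * w).sum())

# Spectrum on the matching frequency axis (cycles per unit time).
G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g)))
f = np.fft.fftshift(np.fft.fftfreq(n, d=dt))

dt_rms = rms_width(t, np.abs(g) ** 2)       # RMS pulse duration
df_rms = rms_width(f, np.abs(G) ** 2)       # RMS spectral width

product = dt_rms * df_rms
# For a Gaussian the product sits at the lower bound 1/(4*pi) ~ 0.0796;
# any other pulse shape gives a strictly larger value.
```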
Algorithmically, a stable circular orbit corresponds to an autonomous state machine. That automatically implies the impossibility of exiting the cycle (the halting problem) and the long-term preservation of the dissipative structure covered by the cycle (long-term memory; this is the nature of all kinds of 1/f-like processes).
Decoherence of a cycle in a holistic system probably leads to its destruction (falling out of coherent evolution). This can occur under influences from outside or inside. The idea is highly scalable and applies equally to the processes of thinking and to cybernetics as a whole. These processes must receive a mechanistic explanation if a complete model of reality is to be constructed. I think the circular orbits of dynamical systems may allow us to bring together everything from quantum mechanics to human intelligence and gravity. There is no other way.
In fact, the bidirectional process at the edge of criticality, for any arbitrary orbit, is an abstract analogue of a Turing machine head. A part of the dynamical structure at any scale, singled out in this manner (possessing the quality of a system, relatively isolated within its circular orbit), is an abstract computer.
All of this still needs to be thought through very carefully.
https://drive.google.com/drive/u/0/folders/0B-QoJvaNS5VHS21VZ0ljSTdDcDA
Preprint Before the Big Bang: towards a Monster Moonshine
Article String-Based Borsuk-Ulam Theorem
Technical Report Preconditions of space-time dimensionality
While there is much here to think about, a curious aside: a kind of definition of entropy from the "Liber viginti quattuor philosophorum":
XXIII. Deus est qui sola ignorantia mente cognoscitur. ("God is that which is known by the mind only through ignorance.")
By the way, there is some truth in it. The manuscript is generally interesting for the analysis of archetypes; it has repeatedly made me think about conceptual matters.
Dear Vasiliy,
“Balance on the system-environment edge (percolation critical surface) should be ensured through equilibrium in bidirectional processes between the object (in: provoked perturbation by environment landscape) and its model (out: system response & adaptation of landscape).”
I'm not very familiar with this ecological approach, and certainly not an expert in it!
However, in the same order of ideas, you might be interested in the following readings (unless you already know them?):
1) Predator-Prey model :
http://www.scholarpedia.org/article/Predator-prey_model
2) Maturana and Varela on Autopoiesis :
Maturana, H.R. and Varela, F.J. (1980) Autopoiesis and Cognition - The Realization of the Living. D. Reidel Publishing Company:
[Maturana_Varela_Autopoiesis_and_Cognition_1980.pdf]
3) All books of Fritjof Capra :
https://en.wikipedia.org/wiki/The_Tao_of_Physics
https://en.wikipedia.org/wiki/The_Turning_Point_(book)
http://www.juwing.sp.ru//Capra/CONTENTS.htm
https://www.youtube.com/watch?v=TLiRXM2oZ_U
https://en.wikipedia.org/wiki/The_Hidden_Connections
https://searchworks.stanford.edu/view/10474020
[On_The systems view of life.pdf]
Colin, why is this analogy needed? Almost any function can be interpolated by a piecewise-smooth function; otherwise we could not even use numerical methods on computers. Numerical simulation on discrete computers always passes through linear algebra.
I mean a function that is smooth at least on piecewise intervals, although this is not essential; I still do not understand the point of the question.
Biological populations can change abruptly only by accident, when the landscape changes drastically for the population. In the models of interest, no such external factors are involved (from a meteorite or an epidemic to a large group of predators arriving from outside), unless, of course, the virus causing the epidemic is itself a predator in the model. So the rate of change is tied to the current populations: at every moment it is bounded (as a function of the populations). Such a dynamical system always has a smooth trend if extraneous (faster) factors are not taken into account, and it also shows signs of long memory, which manifest as an oscillatory process. I see no sense in the analogy between a "function graphed as a saw tooth" and the predator-prey model.
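The smoothness claim is easy to verify for the classic Lotka-Volterra predator-prey system (the model behind the Scholarpedia link earlier in the thread). A minimal sketch with purely illustrative parameters: the right-hand side is a smooth function of the current populations, so the trajectory oscillates without jumps.

```python
import numpy as np

# Classic Lotka-Volterra predator-prey equations:
#   dx/dt = alpha*x - beta*x*y      (prey)
#   dy/dt = delta*x*y - gamma*y     (predator)
# The rate of change at every instant is a smooth, bounded function of
# the current populations, so there are no saw-tooth jumps.
alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.6

def rhs(state):
    x, y = state
    return np.array([alpha * x - beta * x * y,
                     delta * x * y - gamma * y])

def rk4_step(state, h):
    # One classical Runge-Kutta 4 step.
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, steps = 0.01, 5000
traj = np.empty((steps + 1, 2))
traj[0] = (4.0, 2.0)                 # initial prey, predator
for i in range(steps):
    traj[i + 1] = rk4_step(traj[i], h)

# Both populations stay positive and oscillate around the fixed point
# (gamma/delta, alpha/beta) = (3, 2): long-memory cycles, no jumps.
```

The oscillation is the "long memory" sign mentioned above: the system keeps cycling around its fixed point indefinitely, since the model conserves the quantity δx − γ·ln x + βy − α·ln y.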
Another link to a (possibly) interesting overview of the concept of entropy:
Article Entropy - A Guide for the Perplexed
Some research in light of the discussion: "Equal fitness paradigm explained by a trade-off between generation time and energy production rate" ( https://doi.org/10.1038/s41559-017-0430-1 )
Entropy: A concept that is not a physical quantity
https://www.researchgate.net/publication/230554936_Entropy_A_concept_that_is_not_a_physical_quantity
Comparison of New and Old Thermodynamics
1. Logic of the Second Law of Thermodynamics: Subjectivism, Logical Jump, Interdisciplinary Argumentation.
2. The new thermodynamics pursues universality and rests on two theoretical cornerstones:
2.1. Boltzmann formula: ρ = A·exp(−Mgh/RT). Isotope centrifugal-separation experiments show that it is applicable to gases and liquids.
2.2. Hydrostatic equilibrium: applicable to gases and liquids.
3. The second and third acoustic virial coefficients of R143a derived from the new thermodynamics agree with the experimental results.
3.1. The derived third acoustic virial coefficient agrees with the experimental data, which shows that the theory remains correct even at the critical density.
4. See Appendix Pictures and Documents for details.
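For reference, point 2.1 above is the barometric form of the Boltzmann distribution, ρ(h) = ρ₀·exp(−Mgh/RT), for an isothermal column. A quick sanity check with textbook constants for dry air (the values below are standard numbers, not taken from the post):

```python
import math

# Sanity check of the barometric formula rho(h) = rho0 * exp(-M*g*h/(R*T))
# for an isothermal column of dry air.  All constants are standard
# textbook values, not taken from the post itself.
M = 0.02896    # kg/mol, molar mass of dry air
g = 9.81       # m/s^2, gravitational acceleration
R = 8.314      # J/(mol*K), gas constant
T = 288.0      # K (isothermal assumption)
rho0 = 1.225   # kg/m^3, sea-level density

def rho(h):
    """Air density at altitude h (meters) under the isothermal model."""
    return rho0 * math.exp(-M * g * h / (R * T))

# Scale height: density drops by a factor of e every H meters (~8.4 km).
H = R * T / (M * g)
```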
1. The second law of thermodynamics is incorrect; see the figure below for details.
2. The system is isothermal and exchanges heat with a large heat reservoir to keep its temperature constant; only volume changes are discussed.
3. The problem here is that the actual system is in equilibrium, while the second law of thermodynamics judges it to be out of equilibrium.