Newton introduced differential equations to physics some 300 years ago. Later Maxwell added his own set. We also have the Navier-Stokes equations and, of course, the Schrödinger equation. All of these were big steps in science, no doubt. But I feel uneasy when I see, for example in thermodynamics,
differentiation with respect to the (discrete!) number of particles. That's a clear abuse of a beautiful and well-established mathematical concept, yet nobody complains or even raises the question. Our world seems discrete (look at STM images if nineteenth-century Dalton's law doesn't convince you), so perhaps we need some other mathematical tool(s) to describe it correctly? Maybe graph theory?
One of my math professors used to dream about a universal differential equation that would describe everything in the universe. This notion is of course utterly ridiculous, as modern theories give us computable limits on predictability. But the Leibniz ideas die slowly.
Math is, in my opinion, best viewed as a modeling language that enables us to make models of reality, which in turn allow us to extrapolate and make predictions in the physical world. The models have to be simple enough to make a mathematical solution possible, and this in turn sets limits on how far the model can extrapolate known physical results.
Whether a differential equation is the best model depends on the real-world phenomenon it is going to model. It will describe reality with varying accuracy, but never fully; it may or may not exactly describe the simplified picture of reality that was made. Sometimes other models are better descriptions of reality. For large ensembles of particles, such as gases, differential equations are quite good. The three-body problem, on the other hand, is a classical example of a seemingly simple problem that is very difficult to solve.
Surely, reality is not the same thing as its mathematical model, which is usually simplified and only has to capture as many of the observed features as possible. In this spirit I can live (not quite comfortably, but nevertheless) with approximating an elementary particle's mass by a Gaussian distribution, whose support extends all the way to minus infinity. But differentiating with respect to natural numbers, no matter how big they are, is a different thing. It is even worse than what the great mathematician had to say about computer-generated random numbers:
Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.
(-) John von Neumann, 1951
Many other "big names" were amazed at how well mathematics describes our world.
Today I would add: the above seems to apply to abused mathematics, too. Now, this is strange and, strictly speaking, unacceptable.
A good question and a good answer invite a contribution. I once read that Carl Runge held the idea that difference equations are more deeply rooted in physics than differential equations, and that the proper understanding of the latter is in terms of fine-grained difference equations. Unfortunately I lost the reference. My own thinking moved in the same direction and was reinforced by the observation that there is a slight variation of the simplistic first-order explicit Euler method which is second-order, reversible, and surprisingly robust. Let me take the occasion to write it down:
ODE: x'(t) = f(x(t), t), x(0) = x0. Solution: initialization v := f(x0, 0), h := dt/2; general step:
t += h; x += h*v; v = 2*f(x,t) - v; x += h*v; t += h. I call this the asynchronous leapfrog method; it is described in the attached link and, with many applications, on my web page www.ulrichmutze.de. For all practical purposes the associated time-stepping scheme (a discrete dynamical system, to have a nice word) is equivalent to the ODE (also to PDEs via the 'method of lines'). As Andreas already suggested: discrete math with variable scale is the modern form of continuous math. We are then much closer to Euler, Gauss (and Runge?) than was considered proper during my university education. Unfortunately it seems not to be possible to correct my mistyped link in place, so I add the correct form here:
http://www.ma.utexas.edu/mp_arc/c/08/08-197.pdf
http://www.ma.utexas.edu/mp_arc/c/08-197.pdf
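For concreteness, here is a minimal Python sketch of the stepping scheme exactly as written above; the test problem x' = -x and all names are my own illustration, not taken from the linked paper:

```python
import math

def asynchronous_leapfrog(f, x0, t0, t_end, dt):
    """Integrate x'(t) = f(x(t), t) with the scheme described above:
    initialize v := f(x0, t0), h := dt/2, then per step
        t += h;  x += h*v;  v = 2*f(x, t) - v;  x += h*v;  t += h
    """
    h = dt / 2.0
    t, x = t0, x0
    v = f(x, t)                       # auxiliary slope variable
    for _ in range(round((t_end - t0) / dt)):
        t += h
        x += h * v
        v = 2.0 * f(x, t) - v         # reflect v through the new slope
        x += h * v
        t += h
    return x

# Sanity check on x' = -x, x(0) = 1, whose exact solution is exp(-t):
x_num = asynchronous_leapfrog(lambda x, t: -x, 1.0, 0.0, 1.0, 0.001)
print(abs(x_num - math.exp(-1.0)))    # error well below 1e-5 (second order)
```

Halving dt should reduce the error by roughly a factor of four, which is an easy way to check the claimed second-order behavior.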
The famous Russian mathematician Vladimir Arnold gave the following example. Consider the equation dy/dt = -y; its solutions form the family y(t; C) = C exp(-t). Mathematicians know that two different solutions y(t; C1) and y(t; C2) never intersect. Nevertheless, for large t all these solutions coincide, since it is impossible to fit an atom between the two curves. Differential equations are just an approximation, because in the real world we always deal with finite differences. Differential equations merely give a convenient way to manipulate these finite differences.
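Arnold's point is easy to make quantitative. In the sketch below (the constants C1, C2 are chosen arbitrarily by me), the gap between two mathematically distinct solutions of dy/dt = -y falls far below the size of an atom (~1e-10, if y is read in meters) well before t = 50:

```python
import math

# Two distinct solutions y(t; C) = C * exp(-t) of dy/dt = -y:
C1, C2 = 1.0, 2.0
for t in (0.0, 10.0, 25.0, 50.0):
    gap = (C2 - C1) * math.exp(-t)    # vertical distance between the curves
    print(f"t = {t:4.0f}: gap = {gap:.3e}")

# At t = 50 the gap is ~1.9e-22 -- far below atomic size (~1e-10 m),
# so the mathematically distinct curves are physically indistinguishable.
```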
OK, I agree that finite difference methods are able, at least in principle, to approximate the true solutions of differential equations arbitrarily well, assuming exact calculations with no rounding errors. But this fact doesn't imply that the reverse is also true. More precisely: Nature acts by its own "time steps", not necessarily all equal to each other, but somehow fixed for a given process happening only once. I don't see a good reason why a smooth differential equation should reconstruct such a situation "arbitrarily well". Think, for example, of classical Brownian motion (a single, well-defined trajectory), treated with full mechanical rigor, without resorting to a statistical description.
@Marek: 1. Where can I find von Neumann's nice ideas concerning computer-generated random numbers? With all the congruence methods I have seen, I would say the sin is a lack of fantasy. But there are good and natural methods too. See the one in the link.
2. In your next contribution (which unfortunately I'm unable to see at present) you say that you can't make sense of the 'reverse situation'. Although you give an explanation, I don't understand the open issue. Would you mind reformulating your thought?
http://demonstrations.wolfram.com/TestingAnIntuitiveRandomGeneratorWithTheTaskOfEmptyingACube/
@Ulrich: 1. Unfortunately I am not able to point you to von Neumann's papers, sorry. Computer-generated random numbers are a subject in their own right, not for a short exchange of ideas. Suffice it to say that almost any computer routine is fine when you need some 1000 random numbers. When you need more (>10^8), congruence methods start to produce periodic sequences, thus NOT random. Well, if the period is 10^620 then we are happy, but this is not possible when we are working with 32-bit integers (integers are essential for "portability" of such software between different machines, operating systems, etc.)
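The period problem is easy to demonstrate with a toy linear congruential generator. The parameters below are my own choice, scaled down to a 16-bit state so the full cycle is visible in a fraction of a second; real 32-bit generators behave the same way, just with periods around 2^31-2^32:

```python
def lcg(seed, a=69069, c=1, m=2**16):
    """Toy linear congruential generator: x -> (a*x + c) mod m.
    These (a, c, m) satisfy the Hull-Dobell conditions, so the
    generator visits all m states before repeating."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Count steps until the state first repeats -- that is the period.
gen = lcg(12345)
first = next(gen)
period = 0
for x in gen:
    period += 1
    if x == first:
        break
print(period)   # 65536: the state returns after exactly m steps
```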
2. Maybe another example will be more transparent to you. Think about (simple, one-step) radioactive decay: here we solve (analytically, not numerically!) the ordinary differential equation:
dN/dt = - const*N,
where N is an *integer*. Are you happy with this situation? Can you justify it without distorting the meaning of N, the number of (unstable) nuclei still present in the sample?
@Marek: 1. If you are in practical need for a random generator with period > 10^620, let me know and I'll send you one (is C code OK?).
2. This is a self-made difficulty with which I would not be happy either. N is the expectation value of an integer-valued random variable and, as such, real-valued. If N is small, ultimately < 1, this becomes very clear. Only in the initial state (t = 0) may it be reasonable to assume N integer-valued. If N is large (as it probably is in most applications), however, it makes no real difference whether N is treated as an integer or not. Dealing with a related problem, Feynman writes (Quantum Mechanics and Path Integrals, Dover Second edition 2005, p. 94): 'The physicist cannot understand the mathematician's care in solving an idealized physical problem. The physicist knows the real problem is much more complicated. It has already been simplified by intuition, which discards the unimportant and often approximates the remainder.'
@Gunvant: Is it because we (at least the vast majority of us) are quite familiar with DEs, just used to them? Or our friends are, and are kind enough to give us a piece of advice when problems arise? If that's the only justification of your rather categorical statement, then it is a purely subjective opinion with no solid mathematical grounds. All is fine with the Maxwell equations, but as my counterexample, Brownian motion, shows, DEs are applicable *only* between successive collisions. The total time of the collisions is (ideally) zero, yet a Brownian trajectory is even qualitatively a different thing than, say, Earth's orbit. Therefore I would append to your statement a single word like "often" or "sometimes", whichever you like more.
Besides: our exchange of ideas is reality. Could you please present a differential equation describing it? Maybe some other DE describing exhaustively any chessboard-based game?
@Ulrich: When you replace the integer N with its expectation value, you are describing a completely different experiment, one involving many samples, not just one. This is exactly what I meant when I said I don't want to "distort" the original meaning of N. It reminds me of an old statistical joke: my dog and I have 6 legs altogether, so you may expect that I have, on average, 3 legs. Even if formally (statistically) correct, does this number make sense when applied to any individual case, that is to you (or me, if you prefer)?
As to the random number generator: thank you very much for your kind offer, but I'm happy with those I already have and, even more importantly, I stopped using them. I recently switched to fully deterministic interval computations, even for statistical purposes (yes!).
I don't object to using DEs whenever they are applicable. My problem is rather that DEs are often "pushed to the limits" in the hope of correct results, when they clearly do not apply, as in radioactive decay, for example. And, quite often, the results are indeed amazingly good. But isn't that only accidental? Probably. Otherwise I would be very rich, since DEs would be able to predict stock market events, at least approximately (in the Feynman spirit).
@Marek: I disagree loudly (do you hear me?). Radioactive decay is known to be a stochastic process. So nobody in his right mind should interpret the decay equation for N as describing (actually predicting) the fate of a single sample. Instead, we embed the single sample in a (virtual) ensemble on which a stochastic process (this time in the technical, formalized sense of the word) takes place. This process makes N a random variable with an expectation value given by your differential equation. This is the best statistics can do about the fate of a single radioactive sample. Interestingly, David Bohm's quantum mechanics allows one to build models in which radioactive decay is deterministic, so that you get a genuine differential equation for N(t). Here, again, N is not integer-valued, since the decay fragment leaves the nucleus in a continuous motion, which opens the possibility that a nucleus is half disintegrated, or only 1.37 %.
One approach to radioactive decay is to view it with the help of Poisson processes. Then indeed the number N(t) of particles at time t is an integer-valued random variable. From physical experiments one obtains an estimate of the intensity of the radioactive decay. The expected number of particles then satisfies a differential equation which describes the behavior on average. By the strong law of large numbers, a sufficiently high number of experiments yields this average number of particles at time t.
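To make this concrete, here is a small sketch (constants and names are mine) comparing the integer-valued stochastic N(t) with the real-valued ODE mean N0*exp(-lam*t). In the one-step Poisson-process description, each nucleus survives to time t independently with probability exp(-lam*t), so N(t) is exactly Binomial(N0, exp(-lam*t)):

```python
import math
import random

def decayed_sample(n0, lam, t, rng):
    """One realization of radioactive decay: each of the n0 nuclei
    survives to time t independently with probability exp(-lam*t),
    so the returned N(t) is Binomial(n0, exp(-lam*t)) -- an integer."""
    p = math.exp(-lam * t)
    return sum(1 for _ in range(n0) if rng.random() < p)

rng = random.Random(1)
n0, lam, t = 1000, 0.5, 2.0
samples = [decayed_sample(n0, lam, t, rng) for _ in range(2000)]
mean_n = sum(samples) / len(samples)

# The ODE dN/dt = -lam*N predicts the ensemble average, not any single run:
print(mean_n, n0 * math.exp(-lam * t))   # both close to ~367.9
print(min(samples), max(samples))        # individual runs: integers scattered around it
```

This illustrates both sides of the exchange above: every single realization is an integer, while the differential equation tracks the (real-valued) expectation across the ensemble.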
Modern mathematical finance heavily exploits this link between stochastic processes and their (conditional) expectations by means of the Feynman-Kac theorem. Of course, the statistical estimation is far from trivial.
So far, yes, as there is no other option to describe continuous phenomena. We can, however, investigate the accuracy and error estimation of differential equations to the extent required by the context.
Yes, to a certain extent and with acceptable accuracy, differential equations are the best way to describe real-world scenarios, from laser modulation phenomena to electromagnetic propagation characteristics, obviously with some inaccuracies.
Yes, to the maximum extent. The latest developments in the theory of fractional differential equations make them all the more suitable for studying real-world problems.
Reality is dynamic, and differential equations are used to express dynamic motion. Many final formulas of thermodynamics are written as non-differential equations, but to obtain them you have to use differential equations. Do not confuse the two.
When mathematical modeling is used to describe physical, biological or chemical phenomena, one of the most common results is either a differential equation or a system of differential equations, together with appropriate boundary and initial conditions. Hence we can say that DEs can represent reality; there may be other tools to represent reality, but DEs are one of those tools.
Yes, I agree with the question as asked. Differential equations admit infinitely many solutions, including the general solution, so they act as a tool for describing reality, giving ready solutions for difficult problems in physics, science and engineering; differential equations play a vital role. N V NAGENDRAM
Obviously there is general agreement on the primary question. I would say that differential equations may well be called a proper tool, but not the 'basic, primary, canonical, ...' tool. The latter, no doubt, are the variational principles from which the differential equations can easily be derived. Having arrived at this derived level, there is in most cases a system of integral equations which can be considered equally natural or unnatural as the system of differential equations. Sommerfeld held this view with respect to Maxwell's equations. So, we see there are options.
Hi, I am interested in the mechanics of heterogeneous (structured, hierarchical, etc.) media. Two approaches are used to describe the dynamics: one based on the continuum formalism (in the form of PDEs) and the other based on so-called element dynamics (I, personally, work within the first one). And I have heard from my colleagues that the second approach sometimes describes phenomena which cannot be interpreted, in principle, within the continuum approach. Maybe one of them is the behavior of a compacton in the model of pre-stressed (Hertzian) spheres.
Fundamentally there are two ways to describe a natural process: either in terms of a differential equation or as an integral equation (plus a possible third, a mixture of the two). For example, the conservation laws that govern fluid dynamics are derived and written down in integral form before they are cast as differential equations.
Differential equations arise because you want to describe how one physical property changes with regard to another, typically say how the velocity changes with respect to time or how the strength of the magnetic field changes in space. This change then comes about due to something which drives this change, in Newtonian classical mechanics some sort of force. The standard paradigm says that any change occurs only locally so that we need a differential equation which relates the local change in one property with respect to the local change in another. This is why differential equations are so prominent. In some cases non-local effects produce local changes as well, in this case integral equations are often used. Remarkably, nature, at least to some degree, works in this way so that it seems that local changes produce other local changes. This is the strength of reductionism in science.
Your question asks about differentiating where we are talking about something discrete. Naturally you cannot write down a differential equation for an entity which is purely discrete; the variable must be continuous. If it is possible to somehow redefine a discrete variable in a continuous sense, then a differential equation makes sense.
In addition, the world does appear discrete in many ways but to some degree these discrete appearances may be derived from a continuous process. In other instances where we cannot do this, the discrete process which actually takes place, e.g. the motion of atoms within a fluid, may be approximated by a continuous process as long as we do not look too closely. This means not looking at the length scale at which the discreteness becomes obvious.
Hmmm. We are used to differential equations, we like them very much, and we are going to use them for every problem, up to more or less obvious abuse. Yet discrete "things" are inherently different from continuous ones. Does this mean that the famous Traveling Salesman Problem (TSP) will one day be formulated as a differential equation? Nowadays various network-related ideas are intensively investigated, leading to NP-hard combinatorial descriptions. I don't think DEs will let us answer questions like: given all the published papers by all authors from a certain institute, can we say who is the best scientist there? Or the rising star? DEs don't look helpful for such problems, and we need completely different thinking, a different approach and different tools, maybe not invented yet. To me, the situation resembles the times when chaotic phenomena were simply non-existent even in prominent minds. Well, those were finally harnessed, to a great extent, with ... DEs. Anyway, the real progress was made only many years after the creation of the Einstein-Smoluchowski equation, namely when fractals and the Lorenz strange attractor were brought to life.
Let us see. We have domains (space and/or time) that can be continuous, discrete or mixed. Similarly, we have systems described by differential equations and others described by difference equations. I must emphasize that this approach underlies most of the important realizations of our daily life and is suitable for describing a lot of natural phenomena. In the last 20 years the two have been studied together under the name "dynamic equations on time scales (or measure chains)". On the other hand, the integro-differential equations used to model some long-memory systems have been generalized by fractional differential equations. As Prof. Nishimoto claims, fractional calculus will be the calculus of the 21st century. Fractional equations are a useful tool for creating models of long-range processes with power-law autocorrelations. From the spectral point of view they are suitable for dealing with signals whose spectra do not increase/decrease by multiples of 20 dB/decade. For example, the music of an orchestra changes at around 6 dB/decade. The same happens with other important signals: ECG, EEG, internet traffic, and so on. To finish, let me mention some systems that deserve some work in finding good models: birds and insects living in great colonies.
It seems to me that a lot of people are tied to continuous time/space. This can limit the perspective. The world of Signals and Systems is very wide.
It seems to me the problem is rather complex. Differential equations dominate physics from classical to quantum. First of all they pose a problem in the "continuum interpretation". May I suggest a look at my latest publication, which appeared in NeuroQuantology a few days ago, entitled "What is the reason to use Clifford algebra in quantum cognition? 'It from qubit' ...."? The other basic reason is determinism. As first outlined by M. Zak and by J. P. Zbilut, and in a number of papers published by me and Zbilut (visit the site www.saistmp.com to find some of these papers, and in particular read "On the possibility that we think in a quantum probabilistic manner", in the special issue of NeuroQuantology dedicated to some recent results of Prof. Elio Conte), in differential equations we impose the Lipschitz condition from the outside, and thus a unique solution, and thus determinism. Zak discussed in detail many examples in physics, and Zbilut and I discovered more and more cases in which the Lipschitz condition is violated. Then there is the problem of deterministic chaos, and of the new form of chaos arising when the Lipschitz condition is violated.
So: is determinism one basic rule of Nature? Cordially, Elio Conte
Calculus is continuous in derivatives and integrals. Let's not confuse integration with Simpson's rule.
That was a century ago. Read a good book on Signals and Systems, a good book on calculus on time scales, a good book on discrete systems, etc. Simpson's rule is one of many discretization techniques (and not the most interesting or useful one).
I understand your (and the mainstream) argument; I guess my fall-back is group/ring theory, and Lie algebras in particular. I am keeping an open mind and I may change my opinion in the future. Quantum mechanics is discrete, and calculus works just fine in QM. However, for now I remain a purist in this sense and hold to the idea that calculus is continuous.
Regarding the previous comments, calculus is of course mainly continuous, and largely linear as well, in most cases applicable to real-world problems. Exactly how calculus is defined seems to be a somewhat arbitrary matter that is fluid over time. But regarding the original question of whether differential equations are the best models of reality, I have always believed that discrete systems on the smaller, statistical scale were definitely out of bounds for classical calculus and differential equations, and therefore better handled by other methods. The Feynman-Kac theorem that Thorsten Schmidt mentioned previously is very interesting because it is an example that extends the utility of differential equations into the domain of stochastic processes. Thanks for that pointer; it made me look into that field, of which I had been too ignorant. It is certainly useful for many more areas of science than stock market prediction (Feynman sum-over-histories arguments, obviously). I found the Wikipedia introduction and links very interesting: http://en.wikipedia.org/wiki/Feynman%E2%80%93Kac_formula
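As a minimal illustration of the Feynman-Kac link (a toy example of my own, not from the references above): for the heat equation u_t = (1/2) u_xx with initial data u(0, x) = g(x), the formula gives u(t, x) = E[g(x + W_t)] with W a Brownian motion. For g(x) = x^2 this expectation is exactly x^2 + t, so a Monte Carlo estimate can be checked against the closed form:

```python
import math
import random

def heat_solution_mc(g, x, t, n_samples, rng):
    """Feynman-Kac / Monte Carlo solution of u_t = 0.5*u_xx, u(0,.) = g:
    u(t, x) = E[g(x + W_t)], where W_t ~ Normal(0, t)."""
    s = math.sqrt(t)
    return sum(g(x + s * rng.gauss(0.0, 1.0)) for _ in range(n_samples)) / n_samples

rng = random.Random(0)
u_mc = heat_solution_mc(lambda y: y * y, 1.0, 0.5, 200_000, rng)
u_exact = 1.0**2 + 0.5      # for g(x) = x^2, u(t, x) = x^2 + t exactly
print(u_mc, u_exact)        # Monte Carlo estimate close to 1.5
```

The same averaging idea, with a discounting term added, underlies the pricing formulas of mathematical finance mentioned earlier.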
A somewhat philosophical discussion, but not lacking relevance to the poster's question:
http://en.wikipedia.org/wiki/Map-territory_relation
But it is still important to remember that a continuous model of a process over time can only predict a real-world process so far into the future. Then statistical methods are needed again.
Marek, we have proposed such a fundamentally new scientific language (for an introduction see http://www.cs.unb.ca/~goldfarb/BOOK.pdf).
But it is much more radical than you or other physicists have anticipated. ;--)
Lev,
Your BOOK, though far from final form, looks intriguing at least. Maybe you are on the right track. I will need some time to read it, together with your other papers. It seems we both share a similar spiritual anxiety about this subject. The difference is that I am only worried, while you are trying to be constructive. Great thanks for your input!
Marek,
Trained as a (pure) mathematician in the Soviet Union and starting my PhD in pattern recognition in Canada, I had to face the representational issue relatively early in my professional life. Gradually, it came as a *great* surprise to me that modern mathematics cannot offer anything for *structural object representation*. After that, it took more than two decades to come to this new formal language ETS we have proposed.
Lev,
no wonder you are far ahead of me. I'm only an experimental physicist (in magnetism) with strong inclinations toward math, which brings a kind of order to our world. This is why my soul is uneasy when I see obvious overuse of otherwise great mathematical concepts.
Marek,
But it has not been easy going, to put it mildly. Almost daily I have had to (and still do) check my sanity. ;--)
Treating radioactive decay by a traditional model remains a model, a rather macroscopic one: a bridge between reality and our manner of structuring it. A model is an approximation. Models give us important information if well formulated. They also have another important feature: they enable us to obtain important information about structures that are, of course, forbidden to our direct observation. Take a marked radioisotope and assume that we intend to study its kinetics in man. We may build a compartment model. This is a great advantage: we may measure only its disappearance from the blood pool and still obtain important information about its kinetics in the bone, soft tissues, elimination and so on. However, like any model it will suffer from drastic approximations that, case by case, will be accepted or not at the final level of the experimentation, depending on the intrinsic features of the experiment itself. Theories too, in some sense, are models and thus suffer from drastic approximations, but that is another matter; there we have basic foundations, more articulated around basic principles and features of our reality. Regarding radioactive decay, we have quantum mechanics and, in this particular case, the long list of unstable systems. Theory enables us to estimate with accuracy the probability for the system to decay or to remain undecayed, but some founding principles are necessary. A quantum system needs to be observed in order to ascertain whether it is in the decayed or undecayed state; before direct observation it is in a potential superposition of these alternatives. The picture is changed. We no longer have a rough representative model; we have a "model" of reality based on fundamental principles.
Quantum mechanics rejects a naive vision of realism, one that refuses the suspension of judgement and does not make external reality depend on human observation. And in fact in this way we discover, for example, that under suitable conditions of observation a kind of Zeno paradox may happen (see, for example, the list of my publications on this site). Reality is more complex and responds to basic founding principles. Take an example. Ammonia has the structure of a triangular pyramid, with the nitrogen negatively charged and the hydrogens positively charged. As a consequence, we often say that ammonia shows an electric dipole moment pointing toward the apex of the pyramid. However, to be precise we should state that it starts out from the above asymmetrical state, where it will not remain for very long. By quantum rules the nitrogen can turn the pyramid inside out very rapidly, and this inversion process occurs at a frequency of about 3x10^10 per second. The truly "stationary" state is actually an equal superposition of the "asymmetrical pyramid" and its inverse, leading to an average symmetrical state. This is another advantage of using theory: the basic reason for the repeated inversion of ammonia is that the state of the system, if it is to be stationary, must always have the same symmetry as the laws of motion that govern it. This is more advanced knowledge, and it arises from the theory. Not a model!
Diffy Q's are certainly not THE proper tool to describe reality, but they certainly are A useful tool to do so except (of course) when they are not.
The beauty and delight of science is that we are never fully in possession of reality, and we will certainly perish as a species in the next billion years without ever having kenned the entire story and reduced it to one elegant description from the subatomic to the cosmos, or even to N sub-descriptions.
It's interesting that Marek uses an STM image as a "real world" reference for which mathematical tools must be fashioned. There is no STM "image" as such; it is entirely constructed, and the boundaries between objects are created by algorithms running in the visual display software, operating on an x,y,z data set itself created by another software module when the scan took place. Understand that it is the marketing department, not the physicists, who chose the parameters to obtain esthetically pleasing images. On the whole not much damage is done, unless one is deceived into thinking the displayed atom is actually the same as the scanned atom.
To make it clear: we are blind men, able to use tunnel currents to define a map of tunnel values to which we assign atomic boundaries, from which we make, entirely by arbitrary rules, a pleasant image that confirms our model and theory.
Like Dark Energy and Dark Matter, we may well be looking for subjects which are merely the result of our naming. A Dark Matter particle, or a genuinely measurable Dark Energy, is at least as likely to be just a result of the real properties of space/time deformation (gravity) as to be discrete and measurable stuff. Mathematics may fool us as well as reveal new measurable elements of reality.
I agree quite completely. DEs represent a very important mathematical tool for describing the time evolution of systems. The Maxwell equations are among the most excellent and complete formulations of physics; diffusion equations, the Schrödinger equation ... just to consider some simple examples. There is only one limit: we impose determinism from the outside when admitting the Lipschitz condition, while there are several systems in Nature that violate the Lipschitz condition. This is the only reservation; of course DEs are excellent instruments. We also have to remember the discrete versions. It is true that they do not have wide use in physics currently, but I also see their great relevance.
Vic,
thank you for your very original answer. You've almost scared me. Yes, the STM images are to some extent "artificial" objects, but, as I'm told by biophysicists, our own vision is in fact discrete. Indeed, it is our brain which produces continuous shapes from those pixels. Can we trust such an image? We have to: try driving a car while neglecting the traffic lights and see what happens. We KNOW our brain sometimes fools us, creating "impossible figures" or otherwise false impressions. I hope mathematics is different; well-done math should never fool us.
Yes, yes ... we should all agree. DEs have a great role. The problems often arise when obtaining solutions.
Perhaps I should formulate my question in a different way. Differential equations have many advantages, especially where they are strictly and obviously applicable. Their origin may be traced back to Newton, who tried to produce a model of the mechanical world and, in a sense, invented this unquestionably beautiful and effective tool. But there are phenomena which may be conveniently viewed as sets of interacting, or at least connected, objects. Graph theory seems much more appropriate for cases like power grids, river systems, terrorist organizations, business cooperation, ecosystems and many more. Surely, the "flux" between any two connected objects may often be described by a DE, but DEs alone are unable to describe such a system exhaustively as a whole. This is to say that maybe our today's vision of the world is excessively dominated by DEs? Other mathematical tools seem necessary and required too.
you are right. You pose a very difficult problem. It is difficult to answer also because, as you certainly know, we are able to give analytical solutions of DE only in a limited number of cases. I think that this is also a reason. Of course, if you acknowledge an existing DE describing the time evolution of a given process, you assume at the basis that this process is regulated by a detailed law and, in addition, you postulate the continuous time evolution that I discuss in detail in my recent article in NeuroQuantology. What is the reason to use Clifford algebra? That is on my list. You certainly know about the continuous developments regarding the time regime also in the framework of fractal and chaotic theories. I do not know; my impression is that any attempt to dismiss DE may force us into a rather too phenomenological regime of description. Still, we have to establish the level of application: quantum or classical? In my paper recently published in Advanced Studies in Theoretical Physics (see the list of my publications) we find that the classical diffusion equation and the Schrödinger equation have a common algebraic origin but drastically bifurcate at some point. So we have to discriminate with accuracy the level of reality of which we speak. I am convinced of one thing: if we speak about quantum reality, there is every possibility that the Schrödinger equation is correct, but we are forced to admit a new profile of reality, a picture in which human cognition, semantic performances and logic have a contextual role in the dynamics of matter.
Mathematics is the language in which nature speaks. But there are limitations to the validity of mathematics. Strange but true; maybe it is a fact yet to be uncovered.
So, likewise, differential equations are no doubt well-established tools, but there is still a lot to research.
@Marek: As far as I see, what you desire is currently under work at many sites. Discrete formulations of dynamical laws in computational electrodynamics, lattice quantum chromodynamics and molecular dynamics are no longer fixed to the dull main theme of numerical analysis, namely to imitate the continuum; they have found discrete formulations that could stand on their own as natural laws. There are many creative ways to take differential equations as laws for the evolution of graphs. When Euler discussed differential equations via his polygon method, he pioneered this view. As far as I remember, Carl Runge (of Runge-Kutta) considered his numerical scheme as more indicative of the real content of a differential equation than the definitions of his more narrow-minded mathematical colleagues. The very flexible scheme that attaches categories to the nodes and functors to the edges of a graph allows one to model differential manifolds and also genuinely discrete structures. The main problem today seems to be that so many approaches are under work that it is hard to distill methods for widespread use out of them. Don't lose hope!
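Euler's polygon method, mentioned above, can itself be read as a discrete evolution law rather than a mere imitation of the continuum. A minimal sketch (the test equation y' = y and the step counts are chosen only for illustration):

```python
import math

def euler_polygon(f, y0, t0, t1, n):
    """Euler's polygon method: follow the tangent for n straight steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # one straight segment of the polygon
        t += h
    return y

# y' = y, y(0) = 1 on [0, 1]; the exact value is e.
coarse = euler_polygon(lambda t, y: y, 1.0, 0.0, 1.0, 100)
fine = euler_polygon(lambda t, y: y, 1.0, 0.0, 1.0, 10000)
print(coarse, fine, math.e)  # the polygon converges to e as steps increase
```

Read one way, the polygon approximates the DE; read the other way, as Runge reportedly preferred, the discrete stepping rule is the content and the DE is its idealized limit.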
Dear Ulrich Mutze, you say: "As far as I see, what you desire is under work presently at many sites." Sorry, I do not understand. Maybe this comment is directed to me? Please explain in detail and let me know!
Dear Elio, as indicated my comment refers to Marek's previous contribution ending in "Other mathematical tools seem necessary and required too." (What I referred to was the "desire" to have these "required tools" available.)
Ulrich, thank you. You are certainly right. Quantum mechanics started, as you know, in 1927, and even today the physicists who hold the future of this discipline in great consideration continue to debate the most appropriate mathematical tools to describe quantum reality. I realized what we could call a bare-bones skeleton of quantum mechanics using only Clifford algebra, just to give an example. Continuous advances are certainly required, but my impression is that DE represent in any case a basic tool. Our aim is always to describe the time evolution of systems, and our paradigmatic and conceptual structure regarding determinism, causality and indetermination seems to pertain to our actual reality. Consequently, my modest opinion is that DE will continue to represent a basic reference in our theoretical elaborations.
Elio, your question is interesting... For centuries we have tried to describe nature and its phenomena using mathematical models. Recently, some phenomena (e.g., diffusion-limited aggregation, among others) have been modeled using fractal geometry. As our analysis tools improved, the representative models improved too.
No doubt you are right. The way forward is precisely that of mathematical models, looking in particular at complexity and including chaos and fractal geometry. The same thing is happening in medicine: some of my students use books entitled Fractal Physiology. This is no small advance.
Dear Elio, fractal geometry represents a way to model natural shapes (e.g., the L-systems used by Aristid Lindenmayer to model trees and algae). Dioguardi used a fractal approach in biopsy. These are only attempts at reducing the complexity present in nature.
Johan Gielis proposed his superformula as a generalization of the superellipse for representing natural shapes (information at: http://paulbourke.net/geometry/supershape/ ). Mathematics is a tool for understanding nature, its shapes and its phenomena.
You are right. Fractal as well as chaotic regimes are ubiquitous in Nature. Think of biomedical signals: non-linear control mechanisms induce dynamics of this kind. The R-R signal, the ECG as well as the EEG follow such dynamics. Think also of the importance at the diagnostic level: DFA, for example, is a very important method in this field. It is also valuable under the predictive profile, for example in congestive heart failure; we also have prediction in cases of ventricular fibrillation and, in neurological cases, prediction of epileptic seizures. In the regulation of cardiac rhythm we are experimenting with this kind of control signal. In brain entrainment, fractal performance is already in use in some cases of psychological disorders, and for electrical stimulation there are studies in progress using chaotic signals. The field of applications is very large.
Hello,
Actually I work with differential equations (DE) and I have many years of experience in solving DEs analytically and numerically. Maybe the things that I will share here seem trivial, but please let me try.
There is no contradiction here: a DE is a limit case of some kind of balance, strangely similar to an accounting balance (probably that is just the way we think). When you construct a DE you must begin with the balance. For example, in the case of the propagation equation, it is the balance between the energy that comes into a little volume and the energy that goes out of it, assuming for simplicity that the phase plays no important role. Then, if the number of photons is large, you can use some averaged variables, like the intensity or the power, or others. You can also suppose some average distribution and "homogeneous" conditions. Then you can write your balance in a short form that we call a DE. Some of these short forms are very well studied, and so we say that we can solve the equation analytically. There is some strange numerical "magic" in this, I guess; probably it is the result of our way of seeing the world.
But every attempt to solve this "shortened form of a balance" numerically inevitably leads you back to discrete mathematics: you must convert the shortened form back to its original form. If you do not, your solution will probably fail.
Thus, if the number of photons (as in the example before) is small, you can count these objects one by one. In this case the use of the "abbreviated form" leads you to an ordinary balance, not a DE, because of the small number of objects. A small number of objects does not allow the limit to be taken and, as a consequence, does not allow the use of derivatives. For example, if I have 100000 euros, 1 euro is not a huge amount for me; but if I have only 3 euros, then 1 euro is very important money. We know that "lim" here is only a statistical or averaged notion, and statistics, like any approach, has its own limitations.
Thus there is no contradiction here, only a question of the correctness of the use.
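Stanislav's point can be checked directly. The sketch below (the rates and counts are invented for illustration) simulates photon loss one object at a time: each photon independently decays with probability k*dt in each small time step. For a large population the count tracks the DE solution N0*exp(-kt) closely; for 3 photons only integer jumps remain and the "derivative" is meaningless.

```python
import math, random

random.seed(0)  # fixed seed so the run is reproducible

def discrete_decay(n0, k, t, dt=0.01):
    """Count surviving photons one by one; no derivative is ever taken."""
    n = n0
    for _ in range(int(t / dt)):
        # each of the n photons independently decays with probability k*dt
        n -= sum(1 for _ in range(n) if random.random() < k * dt)
    return n

k, t = 1.0, 1.0
n_large = discrete_decay(20000, k, t)
n_small = discrete_decay(3, k, t)
expected = 20000 * math.exp(-k * t)   # the DE prediction N0*exp(-k*t)
print(n_large, expected)  # close in relative terms for a large population
print(n_small)            # an integer between 0 and 3; the limit is meaningless here
```

The same "balance" generates both descriptions; only the population size decides whether the averaged short form, the DE, is a correct use of the tool.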
Excellent, Stanislav!
Your description clearly illustrates the actual process used by those approaching new problems.
You also clearly illustrate a choice made by an informed observer: a physical insight into observed real behavior, and the choice of a particular mathematical tool among many, with the goal of achieving some level of predictive match to that phenomenon.
Mathematics is a beautiful human construction but nature does not follow mathematics, rather men follow nature and approximate some of what they observe and measure with mathematics.
I do not agree. Please read carefully the proofs of Gödel's theorems; please read Cox's theorem. We have had, and still have, excellent mathematicians who study precisely the relation of mathematics to physics and to our thinking. One of these excellent mathematicians says: "All the mental entities that are verified experimentally are called mathematics."
Dear Dr. Joseph Uphoff, you seem to be wandering out of the area of Dr. Marek's question. He asks about situations (and gave an example of one) in which mathematics, or at least conventional mathematics, seems not to work properly.
Does a frontier exist between the continuous and the discrete? How can we know where it is? What should we do when we are close to that frontier?
Do I understand you correctly, Dr. Marek? You are actually asking a very interesting philosophical question.
There are many examples of this in physics. You, Dr. Joseph, are talking about quantum physics, so I will try to bring a little humor to the conversation. Here goes one simple, well-known quantum mechanics joke:
What is a photon, really?
If E = h*nu, then the photon has only one frequency, which implies that the photon must be infinite in space. What does the speed of light mean in this case? Is the photon omnipresent? Is teleportation possible?
On the other hand, in many books photons are described as bunches or pulses of electromagnetic radiation (in space). The Fourier transform then tells us that the photon must have a spectral width. In this case E = h*nu seems absurd: what value of nu is quantum mechanics talking about? If the photon has a spectral width, there are many values of nu.
Thus, which mathematics must we apply, quantum mechanics or the Maxwell approach? That is the question. Is quantum mechanics real? What does h really mean? Maybe classical electrodynamics describes the real situation?
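The tension in the joke is just the Fourier uncertainty relation, and it can be checked numerically. A sketch (the pulse shape, the grid and the units are invented for the example): a Gaussian field envelope whose intensity has RMS duration sigma_t has an RMS spectral intensity width sigma_nu with sigma_t * sigma_nu = 1/(4*pi), so a pulse localized in time necessarily spreads over many values of nu.

```python
import numpy as np

# A Gaussian field envelope E(t) = exp(-t^2 / 2) in arbitrary units.
t = np.linspace(-50.0, 50.0, 8192)
dt = t[1] - t[0]
E = np.exp(-t**2 / 2)

# Spectrum via FFT; fftshift puts zero frequency in the middle.
spec = np.fft.fftshift(np.fft.fft(E))
nu = np.fft.fftshift(np.fft.fftfreq(len(t), dt))

def rms_width(x, w):
    """RMS width of the intensity distribution |w|^2 over coordinate x."""
    w = np.abs(w)**2
    w = w / w.sum()
    mean = (x * w).sum()
    return np.sqrt(((x - mean)**2 * w).sum())

product = rms_width(t, E) * rms_width(nu, spec)
print(product, 1 / (4 * np.pi))  # the Gaussian saturates sigma_t * sigma_nu = 1/(4*pi)
```

Any finite pulse therefore carries a band of frequencies; E = h*nu can only refer to one Fourier component, not to the pulse as a whole.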
I am sure that you, Dr. Joseph, know the answer to this joke.
Actually, my tricky question is not only a simple mental game. It is a key to the correct understanding of light-matter interaction, and also a key to understanding how we should write the equations correctly.
My point is: mathematics is a virtual tool; it is the first virtual tool manufactured by mankind. If the tool does not work properly, the worker is at fault, not the tool. The tool cannot invent or create anything; it is not alive and has no desires. The tool only obeys the worker. If the tool does not work properly, the worker must improve the tool, or improve his own skills.
But anyway, in my joke, what must the worker improve: the tool or his own skills?
Best regards.
Sorry, there is something that I am not understanding well; would you help me to understand too? Quantum mechanics runs on three foundations: quantization, indeterminism and quantum interference. Discrete eigenvalues appear in the Schrödinger equation, which possibly is correct. Do we not have discrete versions of DE? Please excuse me, I am unable to focus the problem correctly, and your question seems very important.
No, I need to understand better; there is something in the whole question that still escapes my understanding.
Stanislav, you got it. You seem to feel perfectly the spirit of my question.
I still do not understand in depth. I apologize.
It from qubit was the title of a paper by David Deutsch (Deutsch, 2002) written for the celebrations of John Wheeler's 90th birthday. Let us consider Deutsch's illuminating words on John Wheeler's "Really Big Questions": the one on which the most progress has been made is "It from Bit?", that is, does information play a significant role at the foundations of physics? It is perhaps less ambitious than some of the other Questions, such as "How Come Existence?", because it does not necessarily require a metaphysical answer.
If we regard a flight, for example, as consisting of a literally infinite number of infinitesimal steps, what exactly is the effect of such a step? Since there is no such thing as a real number infinitesimally greater than another, the continuum is and remains a very natural idea, but we cannot characterise the effect of this infinitesimal operation as the transformation of one real number into another, and so we cannot characterise it as an elementary computation performed on what we are trying to regard as information. For this sort of reason, It from Bit would be a non-starter in classical physics. In quantum theory, it is continuous observables that do not fit naturally into the formalism.
According to the previous rules, as said, each Boolean observable of Q changes continuously with time and yet, because of the central relations mentioned before, retains its fixed pair of eigenvalues, which are the only two possible outcomes of measuring it. Although this means that the classical information storage capacity of a qubit is exactly one bit, there is no elementary entity in nature corresponding to a bit. Therefore, Deutsch's conclusion is that it is qubits that occur in nature. Bits, Boolean variables and classical computation are all emergent or approximate properties of qubits, manifested mainly when they undergo decoherence.
Stanislav, I too sometimes ask myself what a photon really is, and the considerations that in your eyes constitute a joke come up in this forum whenever the foundations of quantum mechanics are discussed. Since your professional life seems to keep close company with photons, you are probably the right person to answer the following question: do you know a theory which describes the propagation of photons (down to single ones) through optical instruments (particularly imaging ones) and thus gives quantum-optical explanations for diffraction and resolving power? What do you think about the Bialynicki-Birula wave-function description of photons? (The last question connects to the main question, since that wave function obeys a deterministic hyperbolic differential equation for evolution in time, and thus describes by these means what many see as the manifestation of discreteness par excellence: photons.)
For example, in classical physics... with the Lipschitz condition added or not?
Dear Ulrich,
Actually you are right: I live with photons, and I know how one can describe the propagation of electromagnetic fields through optical instruments and solid bodies. Actually, this is my job, and it works well. I will answer your question, but some days later; right now I have huge problems with a pulsed laser that I must design. It is extremely tricky and I am very busy. I am sorry for the delay.
Best regards.
Dear All
I feel happy when I read all these nice answers; each one considered the question according to his point of view and then gave his answer.
Some answers seem to be pure physics, others more like philosophy, and all branches of science can come in to orient the discussion toward each domain.
I think all the answers together can be combined to give an answer to Dr. Marek,
which is simply: yes, differential equations can describe reality
(of course this depends on the definition of reality),
because differential equations are the tools that can describe the motions of particles, all particles: classical (big bodies, wind, planets, stars, galaxies, the big bang, etc.) or quantum (electrons, photons, ...), as well as probabilities, numbers, discrete and continuous phenomena, the motion of everything that depends on time.
On the other hand, if our feelings, or the power of life (spirit), are considered
part of Dr. Marek's reality, I think we need to search for another tool to
measure happiness, sadness, love, ... and of course it is not differential equations.
best
I. Kaddoura
In psychology there are differential-equation models describing love between two persons. Still I ask you: including the Lipschitz condition or not? It is not such a trivial question.
Dear Elio,
I always considered assumptions such as the Lipschitz condition as belonging to the kind of problems to which R. Feynman refers when he writes in 'Quantum Mechanics and Path Integrals', p. 94: "The physicist cannot understand the mathematician's care in solving an idealized physical problem. The physicist knows the real problem is much more complicated. It has already been simplified by intuition, which discards the unimportant and often approximates the remainder." Do you have a specific problem in mind where the fulfillment of the Lipschitz condition plays an essential role and makes a physical difference? I would be interested to see it.
Dear Stanislav,
thank you very much for keeping my question in mind. I am not in a hurry with the problem, but I would very much like to hear your opinion some day. I also wish you good luck in your struggle with the laser design problem.
Really, I have wanted to be part of this important thread (I think all of Marek's questions are important) for a long time; I was waiting to find the time to write a thoughtful answer. But Issam's answer made me jump in and share my thoughts with you.
Issam, I think your answer is also a philosophical one.
Marek: I asked myself such questions many times, long ago. I believe that we are far from the right mathematical tool that can exactly model reality. Einstein also tried to find it; he wasn't convinced by the PDE tool (the Schrödinger equation) that introduced probabilistic solutions.
Not only do I agree with you, I am also quite convinced that our world is indeed discrete. Continuity and discreteness are just a matter of perception: we invented them, as we invented negative quantities. Continuous space turns discrete when we change the scale (your STM example).
However, PDE are, to my understanding, the only tool that can describe reality, I mean our changing world!
Pardon me, but how can one use graph theory to model our dynamic world? Would it take us back to difference equations and finite elements?
Kind regards!
Mostafa
Ulrich, in my modest opinion the problem is very serious.
It is a celebrated statement that classical dynamics describes systems as fully deterministic, and it is still a paradigm that Nature exhibits determinism at the macroscopic level of description pertaining to classical physics. The governing equations of classical dynamics derive from Lagrange's equations, from variational principles or from Newton's laws of motion.
However, we must be careful in admitting determinism so readily. Determinism does not arise from this contextual physical framework alone; it is our abstract mathematical level of reasoning, and the tendency of our thinking to couple our statements with ordinary experience, that lead directly to determinism. Starting from this framework, we often do not stress sufficiently, or we imprudently pass over in silence, that in order to satisfy the requirement of determinism we do not use only the physics mentioned above: we add a posteriori a further relevant restriction. We force the governing equations of classical dynamics, together with given initial conditions, to coexist with an ad hoc mathematical restriction. In order to obtain that our systems actually exhibit the claimed determinism, we impose on this whole theoretical edifice, from the outside, that the differential equations describing a physical system must satisfy the so-called Lipschitz condition, with the basic consequence that all the derivatives we introduce at the mathematical and physical level must be bounded.
We are used to admitting a reality that goes in a certain conceptual direction, as derived from our macroscopic experience. Really, there is no other way for determinism to arise: we admit it by an ad hoc assumption. It seems to respond more to a kind of wishful thinking than to anything else, and in fact we pay dearly for this tendency. Let me give only one example; it is simple but very convincing. Take a particle in one-dimensional motion decelerated by a friction force F(v) = -kv, with m and v the mass and velocity of the particle. Invoking the ad hoc assumption that F must satisfy the Lipschitz restriction, we obtain the solution currently given in textbooks: v = v0 exp(-kt). We thus unrealistically accept that v goes to zero only as t goes to infinity: the velocity of the particle vanishes only after an infinite time. This is an abstraction in contrast with everything we actually observe in experience; yet this is what we accept, and it has become consolidated as the classical-physics account of such a system. Really, matters do not go this way. We pay dearly for the assumed Lipschitz condition, accepting that the particle approaches the equilibrium v = 0 after an infinite time while in reality it reaches this physical condition in a finite time. Such physics describes an unreal situation.
Let us assume instead that the law of motion is F(v) = -kv - k1*v^alpha, with exponent alpha between 0 and 1. The two equations are very similar except in a small neighbourhood of the equilibrium point v = 0, with the fundamental difference that this time the Lipschitz condition is violated at that point. The small but substantial difference from our usual way of solving this problem produces enormous differences on the conceptual plane and on the plane of Newtonian dynamical results. First of all, using the equation with alpha we now find, correctly, that the time to reach the equilibrium v = 0 is finite, as it must be. Lipschitz violation is a simple mathematical elaboration, and of course there are systems in Nature that actually violate this condition. Please have a look at our papers, for example "On the possibility to think in a quantum probabilistic manner" or "On a simple case of possible non-deterministic chaos" in Chaos, Solitons and Fractals, and at a number of our other publications on this matter.
So the problem certainly is "are differential equations...", but we should add: and what is the proper manner in which the Lipschitz condition must be taken into consideration? It is clear that, depending on how we solve this problem, we have a totally different vision of reality. I repeat: non-deterministic chaos.
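A numerical check of the friction example above (the constants k, k1, alpha and v0 are chosen arbitrarily for illustration): integrating both laws from the same starting velocity, the Lipschitz case v' = -kv only decays exponentially, while the non-Lipschitz case v' = -kv - k1*v^(1/2) reaches v = 0 at a finite time.

```python
def stopping_time(k, k1, alpha, v0, dt=1e-4, t_max=50.0):
    """Euler-integrate v' = -k*v - k1*v**alpha; return (final v, time v hit 0)."""
    v, t = v0, 0.0
    while t < t_max:
        v += dt * (-k * v - k1 * v**alpha)
        t += dt
        if v <= 0.0:
            return 0.0, t      # equilibrium reached in finite time
    return v, None             # still moving at t_max

v_lip, t_lip = stopping_time(k=1.0, k1=0.0, alpha=1.0, v0=1.0)   # pure -k*v
v_non, t_non = stopping_time(k=1.0, k1=1.0, alpha=0.5, v0=1.0)   # Lipschitz violated at v=0
print(v_lip, t_lip)  # v is tiny but still positive; it never actually reaches 0
print(v_non, t_non)  # v reaches 0 at a finite time, near 2*ln(2) for these constants
```

For k = k1 = 1 and alpha = 1/2 the stopping time can also be found in closed form, t = 2*ln(1 + sqrt(v0)), which the integration reproduces.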
Elio
Your reasoning is very curious and interesting. I have only one answer to give you: try substituting the integer-order derivatives, which do not impose causality, with fractional derivatives, which impose causality with nothing extra.
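Manuel's suggestion can be illustrated with the Grünwald-Letnikov construction, a standard discrete definition of the fractional derivative (the function f(t) = t and the order 1/2 below are chosen only for the example). The sum uses only past samples f(t - k*h), which is the sense in which the scheme is causal, and for f(t) = t it converges to the known value 2*sqrt(t/pi).

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at time t.
    Uses only past samples f(t - k*h), hence the scheme is causal."""
    n = int(t / h)
    total, coeff = 0.0, 1.0             # coeff = (-1)^k * binom(alpha, k)
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)  # recurrence for the next coefficient
    return total / h**alpha

# Half-derivative of f(t) = t at t = 1; the exact value is 2*sqrt(t/pi).
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2 * math.sqrt(1.0 / math.pi)
print(approx, exact)
```

Unlike an ordinary first derivative, which is local, the fractional derivative weights the entire past history of f, which is exactly the memory property Manuel alludes to.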
Manuel, I like to speak about general principles. The first: violation of the Lipschitz condition is added from the outside.
The second important point: consider first the basic equations of physics. There will be time to consider other detailed examples afterwards.
The third point: I have not suggested dismissing determinism definitively. I have said that there is a number of cases, also at the experimental level, in which violation of the Lipschitz condition could occur; looking at my publication list and at the citations by other authors, you may find some of these studied cases.
Elio,
do you really think you have to invoke the ad hoc assumption that F satisfies the Lipschitz condition if you are given F = -kv? Actually F satisfies the condition as it stands, with no assumptions being invoked.
Your discussion of the friction-damped motion suffers from a serious defect: it ignores that any mathematical model of physical processes has a limited range of applicability. In our case, we all know that friction results from interactions with molecules in the environment. Assuming k > 0 means that we assume such an environmental medium to be present. We know that our particle, when it has lost its initially present kinetic energy, will perform Brownian jitter-motion in this medium at some small mean thermal velocity. The nice solution v(t) = v(0) exp(-kt) says that in finite time we reach that thermal scale, and thus find ourselves in a situation for which the original model was not made, and to which a practically minded person would never apply it. So it is exactly the situation to which my previous Feynman citation refers.
You seem to hold a misguided opinion concerning the role of the Lipschitz condition. It is not at all a necessary condition for ensuring deterministic behavior of solutions; it is merely the simplest condition suitable for writing elementary textbooks. The most obvious violation of determinism in classical dynamics (quantum dynamics isn't deterministic anyway) comes from initial conditions: collision orbits in celestial point mechanics. Of course, these are only artifacts of the underlying idealisation; for extended bodies, collisions can actually happen and give rise to an evolution in a higher-dimensional phase space.
Do you see that the problem is serious? The concept of unpredictability in deterministic classical dynamics was introduced in relation to the discovery of chaotic motions in non-linear systems. As is known, such motions are caused by Lyapunov instability, characterized by the violation of continuous dependence of solutions on the initial conditions over an unbounded time interval. In these systems unpredictability therefore sets in gradually: with two initially close trajectories diverging exponentially, an infinitesimal initial distance becomes finite only as time goes to infinity. Lyapunov exponents are defined as the mean exponential rate of divergence over that unbounded time interval.
In distributed dynamical systems, described by partial differential equations, there exists a stronger instability. As you certainly know, it was discovered by Hadamard. In the course of this instability, continuous dependence of a solution on the initial conditions is violated during an arbitrarily small time period. This is a kind of blow-up instability, and it is caused by the failure of hyperbolicity and the transition to ellipticity.
The basic thesis is that a similar kind of blow-up instability, leading to "discretization", to "discrete pulses" of unpredictability (call it so if you want; M. Zak outlined it in 1989 in "Non-Lipschitzian Dynamics", Appl. Math. Lett. 2(1), 69-74), can happen in dynamical systems described by ordinary differential equations if at some limit sets (we may think in particular of the equilibrium points) the Lipschitz condition is removed, or dismissed if you like. As a consequence, failure of the Lipschitz condition at unstable equilibrium points of dynamical systems leads to a multiple-choice response to one deterministic initial condition, or input if you like. This situation changes radically our standard and traditional vision and approach. It is certainly true that in my first remarks I posed some trivial examples; that was to introduce the question which, as we must acknowledge, is instead a very serious one.
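The "multiple-choice response" at a non-Lipschitz equilibrium can be seen in the textbook example x' = sqrt(x) with x(0) = 0 (chosen here purely for illustration; it is not taken from the cited paper). The right-hand side violates the Lipschitz condition at x = 0, and both x(t) = 0 and x(t) = t^2/4 solve the same initial-value problem, so the deterministic-looking equation admits more than one future.

```python
def rhs(x):
    """Right-hand side of x' = sqrt(x); not Lipschitz at x = 0."""
    return x**0.5

def residual(sol, dsol, t):
    """How far a candidate solution is from satisfying x' = sqrt(x) at time t."""
    return abs(dsol(t) - rhs(sol(t)))

# Two different solutions through the same initial condition x(0) = 0:
rest_sol = (lambda t: 0.0, lambda t: 0.0)            # stays at equilibrium forever
moving_sol = (lambda t: t**2 / 4, lambda t: t / 2)   # leaves the equilibrium

for t in [0.0, 0.5, 1.0, 2.0]:
    print(residual(*rest_sol, t), residual(*moving_sol, t))  # both residuals vanish
```

One initial condition, two valid histories: exactly the loss of uniqueness that the Picard-Lindelöf theorem rules out when the Lipschitz condition holds.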
Elio,
is this essay intended to be a response to my critique of your last contribution?
If I wrote it as such, I hope it was satisfactory. The question is about differential equations and their possibility to describe reality; I think I have given a proper answer.
No entailing laws, but enablement in the evolution of the biosphere
Giuseppe Longo, Maël Montévil, Stuart Kauffman
https://www.researchgate.net/profile/Giuseppe_Longo2/
"One can say in full generality that a mathematical frame completely handles the determination of the object it describes as long as no strong enough singularity (i.e. relevant infinity or divergences) shows up to break this very mathematical determination [Bai91]. In classical statistical fields (at criticality) and in quantum field theories this leads to the necessity of using renormalization methods [Fis98, ZJ07]. The point of these methods is that when it is impossible to handle mathematically all the interaction of the system in a direct manner (because they lead to infinite quantities and therefore to no relevant account of the situation), one can still analyze parts of the interactions in a systematic manner, typically within arbitrary scale intervals. This allows us to exhibit a symmetry between partial sets of “interactions”, when the arbitrary scales are taken as a parameter. In this situation, the intelligibility still has an “upward” flavor since renormalization is based on the stability of the equational determination when one considers a part of the interactions occurring in the system. Now, the “locus of the objectivity” is not in the description of the parts but in the stability of the equational determination when taking more and more interactions into account.
This is true for critical phenomena, where the parts, atoms for example, can be objectivized outside the system and have a characteristic scale. In general, though, only scale invariance matters and the contingent choice of a fundamental (atomic) scale is irrelevant. Even worse, in quantum fields theories, the parts are not really separable from the whole (this would mean to separate an electron from the field it generates) and there is no relevant elementary scale which would allow ONE to get rid of the infinities (and again this would be quite arbitrary, since the objectivity needs the inter-scale relationship), see for example [ZJ07]. In short, even in physics there are situations where the whole is not the sum of the parts because the parts cannot be summed on (this is not specific to quantum fields and is also relevant for classical fields, in principle). In these situations, the intelligibility is obtained by the scale symmetry which is why fundamental scale choices are arbitrary with respect to this phenomena. This choice of the object of quantitative and objective analysis is at the core of the scientific enterprise: looking only at molecules as the only pertinent observable of life is worse than reductionist,
it is against the history of physics and its audacious unifications and invention of new observables, scale invariances and even conceptual frames."
Don't you feel uneasy looking at the celebrated Weierstrass function, nowhere differentiable, but nevertheless regarded as a rightful solution of the Schroedinger second-order differential equation? See Phys. Rev. Lett. 85(24), pp. 5022-5025 (2000) for details. Should we think of this example as a proof of the extreme flexibility of DE, or as an extreme abuse of DE?
Probably a non-linear Schrödinger equation. Has nothing to do with physics anyway.
Of course a possible link between deterministic chaos and quantum mechanics is intriguing, and the research, as Asher Peres also outlined years ago, is going on. Landau? Jona-Lasinio?
Ulrich: no, the authors deal with the classical, shamelessly linear Schrödinger equation.
Ok, with a sufficiently jumpy potential nearly everything is possible. Can you provide details?
Precisely Erwin Schrödinger, already in 1914, investigated this issue:
Ann. Phys. (Leipzig) 44(1914)916 (in German)
http://gallica.bnf.fr/ark:/12148/bpt6k15347v.image.langEN.r=annalen%20der%20physik.swf
V=0 everywhere, without a single jump or discontinuity, is fine. I'm not sure, but the Weierstrass function is most likely only loosely related to the celebrated Zitterbewegung (Claude, did you mean that? The link is incorrect), which has been out of reach for experimentalists for a while.
No, the Zitterbewegung comes with the Dirac equation, 18 years later. Schrödinger studied a model of point masses and springs ([About the dynamics of systems of elastically coupled points.]). Copy-paste the link; it works.
Differential equations can be considered a proper tool for describing reality, because non-linear dynamics can be cast as modeling in space-time geometry, which describes reality in another way.
Sorry, Tamoghna, could you explain in more detail what you mean? That non-linear time series cannot be reconstructed in phase space? Non-linear differential equations?
Sorry, I have difficulty understanding you correctly!
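Since phase-space reconstruction of non-linear time series came up, here is a minimal sketch of Takens delay embedding (the helper name `delay_embed` is hypothetical): a scalar series is mapped to delay vectors whose geometry reconstructs the underlying attractor for a generic observable.

```python
def delay_embed(series, dim=3, tau=2):
    """Takens delay embedding: map a scalar series x[t] to the vectors
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).  For a generic observable
    of a smooth dynamical system, these vectors reconstruct the
    attractor's geometry (Takens' theorem)."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# A chaotic scalar series from the logistic map x -> 4x(1 - x):
x, series = 0.3, []
for _ in range(20):
    series.append(x)
    x = 4.0 * x * (1.0 - x)

vectors = delay_embed(series, dim=2, tau=1)
print(len(vectors), vectors[0])
```

In practice the embedding dimension and delay are chosen from the data (e.g. by false-nearest-neighbour and mutual-information criteria), but the mechanics are just this index shuffling.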
To extend well-known continuous models (the Dirac equation, the Laplace equation) to the discrete case, one can consider these equations of mathematical physics on time scales, which unify discrete and continuous models.
See for example:
Gro Hovhannisyan, "Dirac equation on a time scale", Journal of Mathematical Physics, 2011, Vol. 52:10, 16 pages, DOI: 10.1063/1.3644343
Gro Hovhannisyan, "Poisson's inequality for a Dirichlet problem on a time scale", Communications in Applied Analysis, 2012, 16:3, 415-422
But be careful, because most papers on time scales use the (wrong) delta derivative, which is anti-causal. Anyway, I suggest you look at Fractional Calculus and at the discrete models that lead to the most important realizations of our daily life and to interesting models in biomedical applications.
Although most papers on time scales do use the delta derivative, they can easily be adjusted to the nabla derivative, which is causal.
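The causality point can be made concrete on T = hZ, where the delta derivative is the forward difference (it needs a future sample) and the nabla derivative is the backward difference (it needs only past samples). A minimal sketch, with illustrative function names:

```python
def delta_diff(samples, i, h):
    """Delta derivative on T = hZ: forward difference.  Needs the
    FUTURE sample x[i+1], hence anti-causal for real-time processing."""
    return (samples[i + 1] - samples[i]) / h

def nabla_diff(samples, i, h):
    """Nabla derivative on T = hZ: backward difference.  Needs only
    the PAST sample x[i-1], hence causal (usable sample by sample)."""
    return (samples[i] - samples[i - 1]) / h

h = 0.5
samples = [t * t for t in (0.0, 0.5, 1.0, 1.5, 2.0)]  # x(t) = t^2 on 0.5*Z

# At t = 1.0 (index 2), the exact derivative is 2.0:
print("delta:", delta_diff(samples, 2, h))  # uses samples[3] (future)
print("nabla:", nabla_diff(samples, 2, h))  # uses samples[1] (past)
```

A real-time filter can only ever implement the nabla version, which is the substance of the causality objection above.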
You mean substituted. Besides, the associated Laplace transform is also incorrect.
Besides, there are systems that result from interactions of small systems and are not easily described by differential equations.
Of course I know. I have worked in Signal Processing for a long time, and in Fractional Signals and Systems since 1994, and I am now generalizing previous results to time scales. I belong to a (small) group that works on biomedical applications.
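Since fractional calculus came up as the alternative tool, here is a minimal sketch of the discrete Grunwald-Letnikov fractional derivative (function names are illustrative). It is causal, using only past samples, and it interpolates between familiar operators: alpha = 1 gives the backward difference and alpha = 0 the identity.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k),
    computed via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(samples, alpha, h):
    """Discrete Grunwald-Letnikov fractional derivative of order alpha
    at the LAST sample, using only past samples (causal):

        D^alpha f(t) ~ h^(-alpha) * sum_k w_k * f(t - k*h)."""
    n = len(samples) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * samples[-1 - k] for k in range(n + 1)) / h**alpha

# Sanity checks on f(t) = t^2 sampled with step h = 0.1:
h = 0.1
samples = [t * t for t in (0.0, 0.1, 0.2, 0.3)]
print(gl_derivative(samples, 1.0, h))  # backward difference at t = 0.3
print(gl_derivative(samples, 0.0, h))  # ~ f(0.3) itself
print(gl_derivative(samples, 0.5, h))  # a genuinely fractional order
```

For 0 < alpha < 1 the weights decay slowly, which is exactly the long-memory behaviour that makes fractional models attractive for biomedical signals.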
Ulrich,
My point is that the theory of dynamic equations on a time scale
(see M. Bohner, A. Peterson, Dynamic Equations on Time Scales:
An Introduction with Applications, Birkhäuser, Boston, 2001)
is a better tool than differential equations for describing reality. Manuel argues that fractional calculus
is a better tool, since there are some gaps in the theory of dynamic equations on a time scale.
I think there is no global tool in math that describes everything,
like string theory in physics.
Of course, style is the first thing that needs to be maintained. I missed pointing this out in the course of a previous question. Let us attempt to dismiss the haughtiness. None of us is in direct contact with God.
Manuel, please, sorry... a courtesy... I also work on the analysis of time-series data from biomedical signals. I am very interested in R-R signals. Do you have past experience in this matter?
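Since the thread turns to R-R (beat-to-beat interval) series: two standard time-domain heart-rate-variability statistics, SDNN and RMSSD, are simple to compute from an RR series. A minimal sketch with hypothetical data:

```python
import math

def sdnn(rr_ms):
    """SDNN: sample standard deviation of the RR (normal-to-normal)
    intervals, a standard time-domain HRV statistic (overall variability)."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences,
    sensitive to short-term (beat-to-beat) variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy RR series in milliseconds (hypothetical values):
rr = [812, 798, 830, 805, 821, 809]
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```

Real analyses would first clean ectopic beats from the RR series; this sketch assumes an already-cleaned normal-to-normal sequence.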