It depends on whether or not you use non-null initial conditions, and it depends on the input. Let me give you an example:
D^a f(t) + A f(t) = D^b g(t)
with zero initial conditions. If g(t) is the Heaviside unit step, the output is null when the derivative is Caputo. If g(t) = delta(t), the output is also null. If the initial conditions are different from zero, Caputo and Riemann-Liouville both use different, and wrong, initial conditions, leading to different solutions. If g(t) is a sinusoid and the equation is defined on the whole real line, both give wrong results. This does not happen if you use the GL or Liouville derivative.
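Here is a small numerical sketch of the point about the Heaviside step (my own illustration, with an assumed order a = 0.5 and step h = 0.001): the Grünwald-Letnikov sum applied to the unit step gives the non-zero result t^(-a)/Gamma(1-a), while the Caputo derivative vanishes for t > 0 because it differentiates first and H'(t) = 0 on (0, t].

```python
import math

def gl_derivative(f, t, a, h=0.001):
    """Grunwald-Letnikov derivative of order a at time t,
    summing over the past down to t - n*h (f is zero before that here)."""
    n = int(round(t / h))
    w = 1.0            # w_0 = 1; recursion w_k = w_{k-1} * (k - 1 - a) / k
    acc = f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - a) / k
        acc += w * f(t - k * h)
    return acc / h**a

a = 0.5
heaviside = lambda t: 1.0 if t >= 0 else 0.0

# GL derivative of the unit step at t = 1: close to t^(-a)/Gamma(1-a) = 1/sqrt(pi)
gl = gl_derivative(heaviside, 1.0, a)

# Caputo derivative of the step is 0 for t > 0: it integrates H'(t), which
# is identically zero on (0, t].
caputo = 0.0

print(gl, 1.0 / math.gamma(1.0 - a))   # gl is ~0.564, non-zero
```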
Of course. In my experience, the solution of a fractional differential equation obtained with the Caputo derivative definition differs from the solution of the same equation obtained with the Riemann-Liouville definition.
Richard raises a question that created most of the difficulties in the development of FC. Liouville, who was really the father of FC, introduced several derivative definitions, but he worked on R and based his definitions on the exponential and on the Laplace transform. As the Bromwich integral was unknown, he was unable to solve some of the difficulties and could not resist the "attacks" of the mathematicians who imposed definitions on R^+.
From this, the introduction of the "starting point" was a small step. But this is the main source of difficulties in applications. Consider the situation of f(x) defined on [1,2] and g(x) defined on [4,8]. What is the C or RL derivative of f+g? What relation does it have to the derivatives of f and g? It is a problem that should not exist. It is not very clever to have one derivative definition for each function. We need one derivative definition for many functions, even if we have to enlarge its application domain, as L. Schwartz did. So, in my opinion, we must ALWAYS use derivatives defined on R, not on any subset of R.

It is the same problem we find with the Fourier transform. Why don't we define a Fourier transform on [a,b]? What stops us? Nothing. Why don't we do it? Because such an FT would lose most of the properties and meaning that the FT enjoys. The situation in FC is similar. Why should we define a derivative on R^+ or on [a,b]? There is no special reason to do it. You may argue: but how do you solve problems for t >= 0? By introducing the Heaviside unit step as an observation window. With it I can solve all the problems (recently, I did it with the logistic equation), and I introduced a coherent formulation of the initial value problem.
I call RL and C the "walking dead" derivatives because, sooner or later, engineers, physicists, and other scientists will discover that they are useless and will stop using them.
Hi Richard, I agree with the second part of your comment; let me tell you something else. There is no theoretical or practical result obtained with the RL or C derivatives that cannot be obtained with the GL or L derivatives. All the attempts based on RL or C to generalise classic results like Stokes' theorem and to build a fractional vector calculus have failed. Can you define a "definite fractional integral" based on RL or C? Most applications to the modelling of real-world systems that I know of were based on the GL derivative. Even those that claim to have used RL or C used results that were not correctly deduced. The easier approaches to modelling, for instance, the fractional capacitor, human-body impedance, batteries, and so on are based on frequency measurements. The derivative of a sinusoid given by RL or C is not a sinusoid, so you cannot use them.
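To make the last point concrete, here is a numerical sketch (my own, with an assumed order a = 0.5): the GL/RL derivative computed from a starting point t = 0 is not a sinusoid near the origin — it carries a transient — whereas on the whole real line the fractional derivative of sin(t) is the sinusoid sin(t + a*pi/2).

```python
import math

def gl_from_zero(f, t, a, h=0.001):
    """GL fractional derivative with starting point 0 (agrees with RL here):
    only the past back to t = 0 enters the summation."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - a) / k
        acc += w * f(t - k * h)
    return acc / h**a

a = 0.5
steady = lambda t: math.sin(t + a * math.pi / 2)  # two-sided (Liouville) result

near = gl_from_zero(math.sin, 0.1, a)    # transient: far from sin(0.1 + pi/4)
far  = gl_from_zero(math.sin, 50.0, a)   # transient has died out

print(near, steady(0.1))    # clearly different near t = 0
print(far, steady(50.0))    # nearly equal for large t
```

So a frequency-domain measurement, which assumes a sinusoidal steady state, is only consistent with the two-sided definition.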
Welcome to the discussion. Let me disagree with you and with the many who use the C derivative because of its initial conditions. This was discussed at FDA. Many people there are arriving at the conclusion that C gives wrong solutions due to the wrong IC; see my papers attached. Let us do a simple reasoning. We have a fractional system, made of fractional components, for example, an electric circuit with fractional coils and capacitors. Why should we expect the IC to be stated in terms of integer-order derivatives? More importantly, why should the IC depend on the tool used to analyse the system? The IC depend on the structure of the system, not on the derivative used to create a model. What about physical meaning? Why shouldn't a fractional derivative have a physical meaning, if we find fractional behaviour everywhere? Besides, we must make a distinction between the input-output relation and the state-space representation. In the latter it is natural to have IC expressed in terms of the values of the function at the initial time, not involving derivatives.
1) Under zero initial conditions, the two most popular FDs, the Riemann-Liouville (RL) and Caputo derivatives, give the same result.
2) But since the RL derivative of a constant is not zero, a problem arises when solving problems with finite boundary (initial or final) conditions. This peculiarity occurs because of the order in which differentiation and convolution are performed in the definition of the RL derivative. However, even in that case the RL derivative can give the correct result, provided suitable corrections (scaling) are adopted to deal with the initial conditions.
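A short sketch of point 2 (my own illustration, order a = 0.5 assumed), using the identity Caputo D^a f = RL D^a [f - f(0)] — one form of the correction mentioned above. For a constant, RL gives c * t^(-a)/Gamma(1-a), while subtracting f(0) first makes the result vanish:

```python
import math

def gl(f, t, a, h=0.001):
    # GL discretization with starting point 0; agrees with RL for these functions.
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - a) / k
        acc += w * f(t - k * h)
    return acc / h**a

a, c, t = 0.5, 2.0, 1.0
f = lambda s: c                              # the constant function

rl     = gl(f, t, a)                         # RL: c * t^(-a)/Gamma(1-a) != 0
caputo = gl(lambda s: f(s) - f(0.0), t, a)   # Caputo = RL of (f - f(0)) = 0

print(rl, c * t**-a / math.gamma(1 - a))     # both ~1.128
print(caputo)                                # 0.0
```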
If you look at my first answer you'll see a simple example that gives different results under zero IC.
The problem of the constant function is ubiquitous. In fact, both derivatives confuse the Heaviside unit step with the constant function (which is defined, and assumes a given value, on the whole real line). This is very important, because we need to compute the fractional derivative of the delta distribution, which is the derivative of the Heaviside function. If C gives zero for the step, it gives zero for the delta. It is not very clever.
I agree with your comments, but there are limitations with the GL definition too. The class of functions for which the GL definition holds is very narrow; please see page 57 of Podlubny's book. Even though the GL derivative is sufficient for most physical problems, which are defined by smooth functions, we cannot overlook the limitation.
Moreover, the GL definition has some non-integral terms too, which make it look clumsy. On an interesting note, the GL derivative is a special case of the RL derivative; see page 62 of Podlubny's book.
That statement is not correct. First, the summation must go to infinity. Second, every function that has a C or RL derivative also has a GL derivative. On the other hand, every continuous function bounded on the left has a GL left derivative, e.g. e^t.
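A quick check of that last claim (my own sketch, order a = 0.5 assumed): taking the GL summation over the whole past, truncated only where e^(t-kh) is already negligible, reproduces the Liouville result D^a e^t = e^t.

```python
import math

def gl_left(f, t, a, h=0.001, span=30.0):
    """GL left-sided derivative on R: the summation runs over the whole past.
    Truncated at t - span, where e^(t - k*h) is negligible for f = exp."""
    n = int(round(span / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - a) / k
        acc += w * f(t - k * h)
    return acc / h**a

a, t = 0.5, 1.0
val = gl_left(math.exp, t, a)
print(val, math.exp(t))   # both ~2.718: D^a e^t = e^t for any order a
```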
Send me a mail to [email protected] and I'll send you some bibliography.