Can anyone suggest an analytical method to solve the ODE a^2 y = y^3 - y''' - y', where a is a positive constant? The solution should have y(0) = 0. Also y -> a and y' -> 0 as x -> infinity.
Pradeep: You're right, an additional boundary condition is needed. We can assume that y' tends to zero as x tends to infinity. I'll modify the problem statement to reflect this. Thanks!
I see no reason that it should have an analytic answer. I suggest a numerical shooting method from infinity (start in the unstable manifold of the equilibrium) back to t = 0.
I would start by thinking of the Weierstrass elliptic function, which satisfies ℘′(z)^2 = 4℘(z)^3 - g2 ℘(z) - g3, and then continue further to include the derivatives y''' + y' in place of ℘′(z)^2.
I think the collocation method is the best choice for solving this problem;
see the following papers:
i- A Jacobi-Gauss collocation method for solving nonlinear Lane-Emden type equations, Communications in Nonlinear Science and Numerical Simulation.
ii- E. H. Doha, A. H. Bhrawy, D. Baleanu, R. M. Hafez, Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type, Computational Mathematics and Mathematical Physics, September 2013, Volume 53, Issue 9, pp. 1292-1302.
I agree with Champneys; there is little chance of getting an analytic answer. The best you are likely to get is a numerical solution based on the unstable manifold of the equilibrium y = a.
After some calculations I found that linearizing around y = a gives a linear system with one unstable (real) eigenvalue and two stable eigenvalues. The stable eigenvalues can be real or complex depending on the value of a. (The critical value of a seems to be \sqrt{2}/(3\sqrt{3}).) This gives a two-dimensional stable manifold and y' -> 0 exponentially fast as x -> infinity in the stable manifold.
This gives a two-dimensional family of solutions that can be integrated backwards in time towards x = 0 where we want y(0) = 0. Unfortunately there may not be a unique solution. In fact there may be a continuum of solutions unless you can come up with another constraint on the limiting behaviour as x -> infinity *or* at x = 0.
I think it can be solved analytically using the homotopy perturbation method (HPM), by splitting the equation into a linear part and a nonlinear part. See the HPM literature for nonlinear differential equations.
I concur with the other opinions; it is hard to find an analytic solution to this problem. However, if you are willing to trade less information for more insight, then a qualitative analysis can be conducted. Several answers have pointed in the direction of reduction of order for this nonlinear third-order ODE. KY Wang almost got it right (unless I missed something). In general, if you have the ODE y''' = f(x,y,y',y''), then you can introduce the variables v = y' and w = v' so that you end up with the following system of first-order ODEs: y' = v, v' = w, w' = f(x,y,v,w), where here f(x,y,v,w) = -v + y^3 - a^2*y. At a glance it appears that the only steady state is (0,0,0)... more could be said by exploring the qualitative behavior of this system (i.e. look into bifurcations obtained by varying the parameter a).
@ Juan: There is a second fixed point, (a,0,0). David Stewart discusses the stability of this fixed point in a comment made earlier today. Thanks for your comment.
@Mark & Paolo, you are absolutely right about the fixed points... "at a glance" is always a bad method to proclaim a solution. I should have looked at the equation for two additional seconds :) These three steady states are valid. It only takes a little more effort to decide stability of the fixed points... but is that an acceptable solution for this problem? Mark asked for an analytic solution; this qualitative analysis, as rigorous as it can be (i.e. calculate Jacobian, evaluate it at the fixed points, and look at the roots of the characteristic polynomial to determine stability), falls short of having a closed solution. In any case, a qualitative analysis can inform *analytically* trajectories and stability without doing any numerical estimation of solutions. If somebody has the time to try different avenues suggested in this thread, it would be interesting to know what works to find an analytic solution to this problem.
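To make that qualitative analysis concrete, here is a minimal sketch (assuming SymPy is available; the names F and J and the printing loop are mine, not from the thread) that locates the fixed points of the first-order system above and forms the characteristic polynomial of the Jacobian at each of them:

import sympy as sp

a = sp.symbols('a', positive=True)
lam = sp.symbols('lambda')
y, v, w = sp.symbols('y v w')

# y' = v, v' = w, w' = f(y, v, w) = -v + y^3 - a^2*y  (the system above)
F = sp.Matrix([v, w, -v + y**3 - a**2 * y])
J = F.jacobian([y, v, w])

for fp in sp.solve(list(F), [y, v, w], dict=True):
    print(fp, J.subs(fp).charpoly(lam).as_expr())

This returns the three steady states (0,0,0), (a,0,0) and (-a,0,0); the characteristic polynomial at (a,0,0), for instance, comes out as lambda^3 + lambda - 2a^2, which reappears further down the thread.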
Possibly you can use the asymptotics of the solution at x = 0 and as x -> infinity. The equation reduces to a linear one in both cases. If one assumes y is small at x = 0, the cubic term can be neglected. On the other hand, at infinity one can write y = a + Y, where Y is another small function satisfying the condition Y' -> 0 as x -> infinity.
Once the asymptotics at both limits are found, it may be possible to interpolate over the interval where neither asymptotic form is suitable.
Your idea is correct (although the first-order DE is not easy at all), but it should be T = y^(-10/3) V instead of T = y^(3/2) V. I'll check the calculations again; could you test yours?
I wrote my opinion/approach before leaving the house to fly to Frankfurt for a long-planned meeting with some colleagues from the USA (from there I flew to Paris for a conference). Therefore I did not take my manuscript with the detailed calculations. If I can spare the time, I will do the computation over again. I am sorry that I have no time to do it immediately.
Previously, I presented a wrong solution, which I have since deleted myself; I am sorry for that.
Now I propose the following consideration for this equation:
1. Let z(y)=y', then this equation can be transformed into
z^2 z'' + z (z')^2 + z = y^3 - a^2 y ..................................................... (1)
This equation is most probably not integrable by quadrature. I am trying to prove that it does not admit any global Lie group, using the method I used in [1].
2. This equation can be qualitatively studied as follows:
Let x -> t, y -> x, y' -> y, y'' -> z; then the equation a^2 y = y^3 - y''' - y' becomes the third-order autonomous (polynomial) system
x' =P(x,y,z) = y
y' =Q(x,y,z)= z ...............................................................(2)
z' = R(x,y,z)=-y - a^2 x + x^3
This system is conservative, since P'x + Q'y + R'z = 0 (P'x denotes the partial derivative of P with respect to x). So it has no asymptotically stable fixed point. In fact,
system (2) has three equilibrium points, p0=(0,0,0), p1=(-a,0,0) and p2=(a,0,0). These fixed points are all of saddle-focus type. p0 has a two-dimensional unstable manifold and a one-dimensional stable manifold, while both p1 and p2 have a one-dimensional unstable manifold and a two-dimensional stable manifold. So, if the two-dimensional stable manifold M^s(p2) of p2 intersects the half plane where x=0, y>0, then
the problem of Prof. R. Mark Bradley has infinitely many solutions, since the intersection between the manifold M^s(p2) and the half plane mentioned above is a curve; any point on this curve can be the initial point, and the corresponding solution will approach the point p2 spirally as t -> infinity.
Clearly, to make an exact judgement on the behavior of the two-dimensional stable manifold M^s(p2) of p2, some numerical work is still needed.
3. For the concrete calculation of the required approximate solution, it is better to choose an initial point in a very small neighborhood of P2 and integrate the system in the negative time direction; if the integral curve touches the half plane where x=0 and y>0, then you get a solution. The two attached graphs show the three-dimensional integral curve and the corresponding required solution in the case a=1. The initial values are x=1, y=0.001, z=0.001.
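For anyone who wants to reproduce this, here is a minimal sketch of the backward integration just described (assuming SciPy; the event function, tolerances and the integration span of 100 time units are my own choices, not from the original post):

import numpy as np
from scipy.integrate import solve_ivp

a = 1.0

def rhs(t, u):
    x, y, z = u
    return [y, z, -y - a**2 * x + x**3]   # system (2)

u0 = [1.0, 0.001, 0.001]   # initial point near P2 = (a, 0, 0), as above

# stop when the trajectory reaches the plane x = 0
hit_plane = lambda t, u: u[0]
hit_plane.terminal = True

sol = solve_ivp(rhs, (0.0, -100.0), u0, events=hit_plane,
                max_step=0.01, rtol=1e-9, atol=1e-12)
print("crossing point (x, y, z):", sol.y[:, -1])

Running the integral curve forward in time from the printed crossing point (with x playing the role of y in the original equation) then gives one member of the family of solutions described here.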
From the present results, I think the equation and the solution proposed by Prof. R. Mark Bradley are really interesting mathematically. The problem relates to the investigation of the global behavior of the two-dimensional stable manifold M^s(P2) of the equilibrium point P2. I hope this equation has some interesting background.
By the numerical method mentioned above, 10 different integral curves were obtained in the case a=1. These curves span part of the two-dimensional stable manifold M^s(P2). See the left one of the attached graphs.
And this part of M^s(P2) has an intersection curve with the half plane x=0, y>0 (see the right one of the attached graphs).
Any point on the intersection curve can be treated as an initial point, from which the integral curve of system (2) will approach P2 spirally as t -> infinity. And this integral curve corresponds to a required solution.
Are you sure that the a^2 y and y^3 terms have the correct sign? Here are my thoughts: If y -> a for large x, then y''' should become negligible at some point. Solving the first-order ODE y' + a^2 y - y^3 = 0, I find y_1 = a/sqrt[1 + C exp(2 a^2 x)], which tends to zero for x -> infinity. However, for x -> -infinity, y_1 -> a. On the other hand, the solution of y' - a^2 y + y^3 = 0 has x replaced by -x, so it shows the desired behavior for large x.
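As a quick consistency check of the closed form quoted above (a sketch assuming SymPy; nothing here is specific to the thread beyond the formula itself):

import sympy as sp

x, a, C = sp.symbols('x a C', positive=True)
y1 = a / sp.sqrt(1 + C * sp.exp(2 * a**2 * x))
print(sp.simplify(sp.diff(y1, x) + a**2 * y1 - y1**3))   # prints 0

which confirms that y_1 solves y' + a^2 y - y^3 = 0, tends to 0 as x -> infinity and to a as x -> -infinity (for C > 0).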
It is a good suggestion to study the behavior of the solution y[x] satisfying y->a as x-> positive infinity by its linear approximation.
Assume that y[x] = a + y1[x]; as y[x] -> a, y1[x] -> 0. Then it is easy to see that y1[x] approximately satisfies the following linear ODE
y1''' + y1' - 2 a^2 y1=0.
Let l1, l2 and l3 be the three eigenvalues (characteristic roots) of this third-order linear ODE. They satisfy
l^3 + l - 2 a^2 =0
In the case a > 0, it is easy to see that there is only one real eigenvalue, say l1, and a pair of complex conjugate eigenvalues, say l2 and l3. It can be proven further that l1 > 0 and that the real part of l2 and l3 is negative. So the linear approximation of y1[x] can be written as
y1[x] = c1 e^(l2 x) + c2 e^(l3 x),
And the required approximate solution has the following form as x -> positive infinity:
y[x] = a + c1 e^(l2 x) + c2 e^(l3 x)
Note: The eigenvalue l1 is related to the one-dimensional unstable manifold M^u(P2) of P2, while l2 and l3 are related to the two-dimensional stable manifold M^s(P2).
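For a concrete a, the three eigenvalues are easy to check numerically (a sketch assuming NumPy; the value a = 1 is only an example):

import numpy as np

a = 1.0
roots = np.roots([1.0, 0.0, 1.0, -2.0 * a**2])   # l^3 + l - 2 a^2 = 0
print(roots)

For a = 1 this gives l1 = 1 and l2, l3 = (-1 ± i sqrt(7))/2, i.e. one positive real root and a complex-conjugate pair with negative real part, as stated (the roots sum to zero, so Re l2 = Re l3 = -l1/2).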
I noticed that my previous work is just in accord with the idea proposed by A. Champneys and David Stewart. It can be determined now that, the two eigenvalues corresponding to the two-dimensional stable manifold M^s(P2) are just a pair of conjugate complex numbers with non-zero imaginary part when a>0, since the polynomial equation l^3 + l - 2a^2=0 has only one real root.
'Oh, trivial': I think Henk Smid is reasonable in the sense that there is no interesting attractor for system (2), since it is conservative. However, if we go a little further from the equation given by R. Mark Bradley, and let the new equation be
a^2 y = y^3 - y'''- b y'' -y', ............................................(1')
where b is a positive parameter, then some interesting phenomena may appear. The system (2) will become
x' =P(x,y,z) = y
y' =Q(x,y,z)= z ...............................................................(2')
z' = R(x,y,z)=-y - b z - a^2 x + x^3
The new system (2') is not conservative and has some attracting effect. For instance, when a=1, b=1/3, (2') possibly has a non-trivial bounded attractor; see the attached graphs. The 3 graphs in the first row are integral curves from different initial points. The 3 graphs in the second row show, from 3 different viewing angles, that the 3 integral curves converge to a non-trivial attractor. At the present stage, I do not know whether this attractor is a simple closed curve or something else.
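A minimal way to reproduce such a picture (a sketch assuming SciPy and Matplotlib; the initial point and the integration time are arbitrary illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b = 1.0, 1.0 / 3.0

def rhs(t, u):
    x, y, z = u
    return [y, z, -y - b * z - a**2 * x + x**3]   # system (2')

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0, 0.0], max_step=0.01)
x, y, z = sol.y

# discard the transient and plot the long-time behaviour in 3D
n = len(x) // 2
ax = plt.figure().add_subplot(projection='3d')
ax.plot(x[n:], y[n:], z[n:], lw=0.3)
plt.show()

Whether the chosen initial point lies in the basin of the bounded attractor would of course have to be checked, as noted above.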
I do not know whether R. Mark Bradley can find any application of this new equation (1'). I hope he can :).
Through more detailed calculation, I found that the attractor in the case a=1, b=1/3 is a closed curve, which is shown in the attached graphs from different viewing angles.
However, I don't know how to prove this conclusion analytically and rigorously.
The dissipative version of this system with a = 1 and b = 0.7 is exactly case MO5 in my Elegant Chaos book. It has a strange attractor with Lyapunov exponents 0.1380, 0, -0.8380 for initial conditions of (0, 0, 0.1), but it seems we have strayed quite far from the original question.
I suspect that the equation a^2 y = y^3 - y''' - y' posted originally is wrong... The boundary value problem posed does not have a sensible physical solution! This can be seen as follows. Rewrite the problem as
y''' + y' = y (y^2-a^2) [y(0)=0, y(infty)= a, y'(infty) =0]
It can be seen immediately that the equation has the solutions y=0, y= a, and y= -a, which of course do not solve the boundary value problem. We also see that the right-hand side is negative for y between 0 and a and positive for y > a.
The sought-for solution should not be oscillatory at infinity, as y' is required to go to zero. But then for large x we will always have y''' << y': setting t = 1/x, we get y' = -t^2 dy/dt and y''' = -6 t^4 dy/dt - 6 t^5 d^2y/dt^2 - t^6 d^3y/dt^3, which shows that for sufficiently small t, y' will always be bigger than y''' (if the derivatives with respect to t all remain finite at infinity).
But with the equation
y' = y (y^2-a^2)
just argued to be valid for large x, we have y' < 0 for y between 0 and a, so y cannot tend to a as x -> infinity (except for the constant solution y = a).
Therefore, I suspect that the equation should probably read
y''' + y' = y (a^2-y^2) ,
if the physical problem leading to it is assumed to have a solution that stably produces the given boundary values. Not that I could solve that analytically ... :-).
Hi, Prof. K. Kassner, you are great! By adding a term b y'' (b > 0) to the left side of your suggested equation, I have obtained a dynamical system
x' =P(x,y,z) = y
y' =Q(x,y,z)= z ...............................................................(2'')
z' = R(x,y,z)=-y - b z + a^2 x - x^3
which has properties similar to those of the famous Lorenz system. For instance, in the case a=1, b=0.86, it has an attractor shown in the attached graph from 4 different viewing angles.
From the facts obtained, I think (2') and (2'') may provide a large quantity of new and important facts for dynamical systems theory.
So both the previous system (2') and the present system (2'') are of much interest in Math. and Phys., and deserve investigation soon. I have introduced this development on the website sciencenet.cn in Chinese: http://bbs.sciencenet.cn/home.php?mod=space&uid=553379&do=blog&id=739697.
But just as mentioned by Julien Clinton Sprott, "we have strayed quite far from the original question". I will find another venue to discuss this problem in depth.
Why not transform x in [0, infty) to t = x/(1+x) in [0,1) and form successive numerical approximations to y(t) using Bernstein polynomials? The right boundary condition, y' = 0, is trivially satisfied, since dt/dx = (1-t)^2. Then it is computable and can be analytically approximated to some order by choosing coefficients to minimize the residual.
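The derivative relations under this change of variable are easy to generate symbolically (a sketch assuming SymPy; Y(t) denotes y(x(t))):

import sympy as sp

t = sp.symbols('t')
Y = sp.Function('Y')(t)

# d/dx = (dt/dx) d/dt = (1 - t)^2 d/dt, since t = x/(1+x)
D = lambda f: (1 - t)**2 * sp.diff(f, t)

print(sp.expand(D(Y)))          # y'
print(sp.expand(D(D(Y))))       # y''
print(sp.expand(D(D(D(Y)))))    # y'''

The first line gives y' = (1-t)^2 Y'(t), confirming that y' -> 0 at t = 1 is automatic whenever Y'(t) stays bounded.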
There are some new analytical methods to attack this problem, like the homotopy perturbation method, the energy balance method, the variational iteration method, the homotopy analysis method, the variational approach method and the exp-function method.
The proposed differential equation does not have any exact explicit analytical solution, nor does it have an implicit analytical form, as I said before. The "answers" posted here either:
1. Hypothesize that the original equation was written wrongly, and claim that had it been written that, that and that, then the solution would have been this, this and this...
2. Propose some simplifications that... also lead to a differential equation which has no analytical solution.
3. Propose putting a function into a series, which can be handled analytically, but which is not an EXACT ANALYTICAL solution per se.

Thus, none of the above can be considered a valid answer (if the author of the question meant an exact analytical solution). The only way is simply to use standard numerical methods/software packages with whatever accuracy one needs. These would show you, however, that y would not approach a as x -> infinity, etc.

IMHO, it doesn't make sense to deviate from the question, especially if it might have been written with an error. Why would someone do such an "exercise"? To increase one's own RG score??
Anwar, it is not an analytical SOLUTION: the DTM is an analytical APPROXIMATION by series iterations, and there are tons of them. And there is still a discussion about whether the DTM proposed by Zhou is just the good old Taylor series. This one, for example:
http://arxiv.org/abs/1111.7149
I am quite sure Mark is well familiar with Taylor power series, or with Padé approximation for that matter!
It is an analytical solution. The only difference is that the frequency domain is used to simplify the solution. The result would be the same whether we use the time domain or the frequency domain, and both could be considered analytical solutions. Right?
@Jasim: I would be very interested in seeing that solution. If it is analytic, it should be possible to give it with general values of a, y'(0), and y"(0). Note that this would not yet mean that the original boundary value problem has been solved, since in it y(0), y(infinity) and existence of y' at infinity [implying y'(infinity)=0] are given but not any values of some of the derivatives at x=0.
In particular, I would be interested in seeing how you treat the double convolution integral that comes from the Laplace transform of the nonlinear term y^3. Normally, Laplace transforms are not suited for the solution of nonlinear equations.
@Bagis: Since the equation posted is an autonomous third-order ODE, it is obviously possible to reduce it to a second-order equation (which in general will not be autonomous anymore). The standard way to do this is to set y'=u(y). Sometimes a more specific form of u(y) is more useful. For example, if we set y'=v(y)^\mu, then the resulting equation will still be second order, but depending on the exponent \mu, its form may already exhibit some simplification. As it turns out, in the case of the equation considered, the simplest second-order equation is obtained by setting \mu = 1/2, i.e. y' = sqrt(v(y)): the equation then becomes sqrt(v) (v''/2 + 1) = y^3 - a^2 y, with primes denoting derivatives with respect to y,
which is precisely the equation you derived using a slightly more complicated approach (you renamed y to x and my v is your W).
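For completeness, this reduction can be checked symbolically (a sketch assuming SymPy; derivatives are taken with respect to y, treating v as a function of y):

import sympy as sp

y = sp.symbols('y', positive=True)
v = sp.Function('v')

yp   = sp.sqrt(v(y))               # y'   = sqrt(v(y))
ypp  = sp.diff(yp, y) * yp         # y''  = (d y'/dy) * y'
yppp = sp.diff(ypp, y) * yp        # y''' = (d y''/dy) * y'

target = sp.sqrt(v(y)) * (sp.diff(v(y), y, 2) / 2 + 1)
print(sp.simplify(yppp + yp - target))   # prints 0

so y''' + y' indeed collapses to sqrt(v) (v''/2 + 1).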
Now, there are two problems with this approach.
1) The second-order equation obtained does not seem to be solvable more easily than the original equation.
2) The second-order equation obtained is in a form that is inappropriate for the original boundary value problem.
As to the second statement, our boundary conditions are y(x=0) = 0, y -> a as x -> infinity and y' -> 0 as x -> infinity. The last two conditions translate to v(a) = 0. But we need a second boundary condition for our second-order equation, and this is not provided by the conditions of the original problem. At y=0, we must have either v''(y) = -2 or v(y) = 0, but neither of these conditions can serve as a boundary condition, since every solution must fulfill one or the other of them. Since we are not necessarily allowed, in our original problem, to require y'(0)=0, which might lead to an overdetermination of the problem, we will most likely have v''(0) = -2, which is not a legitimate b.c. for a second-order equation (the b.c. should contain expressions up to the order of the equation minus one). Moreover, the third boundary condition y -> a does not give us an additional boundary condition for the second-order equation. It seems that this transformation is useful only for a problem in which we have at least two boundary conditions on y'(x), because those will translate into boundary conditions for v(y).
Otherwise, we would have to solve the problem for v(y) in full generality and then to determine the general solution for y(x) and finally apply the b.c. If we are unlucky, this will not even work, since nonlinear equations may have solutions that are not captured by the form of the general solution.
Why do you want to obtain the analytical solution of such a complex 3rd order nonlinear ODE? What physical or chemical process does it represent?
With the availability of high-RAM, high-speed PCs and Macs, analytical solutions should come as a second consideration.
This problem presents some unusual characteristics whereby the solution domain is semi-infinite and it is required that
y(0) = 0, y(b) = a, y'(b) =0 as b tends to infinity.
This problem should be solved via the Galerkin finite element technique, whereby at each stage several iterations will be required to obtain a solution. A proper solution will require a new infinite one-D element at x = b.
Alternatively, the problem should be solved for a finite value of b, with b then increased to obtain the solution trend.
Such hard work is only worthwhile if the ODE represents a physical problem in science!
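If one wants to try the finite-b approach just described, a collocation BVP solver makes a quick experiment possible (a sketch assuming SciPy; the interval length b = 30, the tanh-shaped initial guess and the tolerances are my own arbitrary choices, and whether the iteration converges, and to which member of the possible solution family, is precisely the open question raised earlier in the thread):

import numpy as np
from scipy.integrate import solve_bvp

a, b = 1.0, 30.0

def rhs(x, u):
    y, yp, ypp = u
    return np.vstack([yp, ypp, y**3 - a**2 * y - yp])   # y''' = y^3 - a^2 y - y'

def bc(u_left, u_right):
    return np.array([u_left[0],          # y(0) = 0
                     u_right[0] - a,     # y(b) = a
                     u_right[1]])        # y'(b) = 0

x = np.linspace(0.0, b, 200)
u_guess = np.vstack([a * np.tanh(x), a / np.cosh(x)**2, np.zeros_like(x)])
sol = solve_bvp(rhs, bc, x, u_guess, tol=1e-8, max_nodes=100000)
print(sol.status, sol.message)

Increasing b and re-using the previous solution as the initial guess then shows the trend as b tends to infinity.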
Many thanks for all of your help with this question. This ODE came up in a research project on pattern formation induced by ion bombardment of solid surfaces. A preprint of the paper that came out of this research effort is attached. It will appear shortly in J. Phys.: Cond. Matt. I hope you'll find it interesting!
I realize it is a bit late in the day (or even year) to be posting a reply, but I only recently came across this problem, and it seems to fit the class of problems solvable using the association-of-variables technique, whereby an approximate ANALYTICAL solution can be obtained that is valid within a bounded domain.
The solution is based on Volterra kernels and the association-of-variables technique for multidimensional transforms, which had a lot of success in non-linear electronic circuits, particularly in the 1960s before fast computers were readily available. Even today, however, variations of the method are still used for certain active filter designs, as it provides greater insight than numerical and harmonic balance methods. It is particularly useful if analytical solutions are sought for quasi-linear (or weakly nonlinear) circuits within a specific range of values of the independent variable. Since this method of solution was not mentioned in the comments, I will describe it in case it is of use to anyone solving these types of problems.
The original equation does not contain non-linearity in the differential terms so the method is applicable. The original equation is then written as:
The form of H1 determines the type of solution obtained. For example, if H1 can be split into partial fractions without complex factors (i.e. s^3 + s + A1 = 0 has real-valued solutions), then these values of A1 (i.e. a^2) give exponentially decaying solutions involving cosh and sinh. Otherwise, if complex factors are involved, the solutions become circular or elliptical in nature, and for some values of A1 the solution at infinity can be oscillatory. However, all the solutions seem to diverge at infinity, and I haven't yet found a value of A1 that gives y -> const as t -> inf.
Below is the method of solution......
Note that H3 contains 3 variables, but to find a solution these need to be associated into just one variable 's' by using the association-of-variables technique. First, however, we need an equation for Y, which is the transformed solution. This becomes:
Y = H1(s1)*X(s1) + H2(s1,s2)*X(s1)*X(s2) + H3(s1,s2,s3)*X(s1)X(s2)X(s3)
For the H3 part (note that H2 = 0 in our case, so it can be ignored), first associate s1 and s2 and leave s3 constant. Set s1 = (s - p) and s2 = p (see eq. 7 of the attached pdf).
All the calculations were done in Maple and an example calculation for A1=2 is shown. A1=2 was chosen as this gives all real coefficients in the partial fractions H1 and hence a simpler solution (see eq.4 of attached pdf).
Once s1 and s2 have been associated, you are left with an answer in 's' and 's3'. Then set s = s - p and s3 = p, and the final answer is a function of 's' only. Now add this to the H1 part and take inverse transforms as normal (see eq. 14 (ansY) for the final answer). The region of convergence for this solution is shown in the graph on p. 4 (see the attached pdf) and the solution itself is shown on p. 5.
The solution is only valid within a region of convergence which is determined by the constant a^2 of the original equation (in the example attached it is between -1..+1). The region of convergence can be extended by taking higher order terms above H3. I suspect it could also be extended by changing the contour of integration but I haven't figured this out yet.
I have only shown one technique of association of variables. There are other techniques that could be used. Some are shown in the references below.
[1] Bedrosian and Rice, "The output properties of Volterra systems (nonlinear systems with memory) driven by harmonic and Gaussian inputs", Proceedings of the IEEE, Vol. 59, No. 12, December 1971.
[2] R. H. Flake, "Volterra series representation of nonlinear systems", January 1963, p. 330.
[3] J. K. Lubbock and V. S. Bansal, "Multidimensional Laplace transforms for solution of nonlinear equations", Proc. IEE, 1969, 116, pp. 2075-2082.
[4] D. A. George, "Continuous nonlinear systems", Massachusetts Institute of Technology, Electronics Laboratory Report 355, 1959.
[5] Karmakar, "Solution of nonlinear differential equations by Volterra series".