I got confused when I plotted the graph of -(x^2 - x)^(2/3). The graph shows that the function attains its maxima at x = 0 and x = 1, but when we follow the derivative procedure we get x = 0.5. Please help me with this.
The theorem says: IF the derivative exists in a nbhd and changes sign from + to - as x crosses x_0 from left to right, THEN x_0 is a point where the function attains a local maximum.
The theorem does NOT state that the derivative must exist at the point where the maximum/minimum is attained, which is exactly what happens in your example; see also, e.g., f(x) = |x| at x_0 = 0. For a more detailed lecture, compare this textbook.
"how can we find the absolute maxima /minima value without plotting the function?"
Good question! A simple answer: make some calculations at suitably chosen points close to the "suspected" points. This can be done by a program. For much more complicated functions, special research into suitable approximations might be unavoidable :-( The good news is that some new mathematics can then appear :-)
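For example, here is a minimal Python sketch of such numerical probing (my own illustration, not part of the original posts), interpreting the function via the real cube root as f(x) = -((x^2 - x)^2)^(1/3):

```python
# Probe f near the "suspected" points x = 0, 0.5, 1 and compare values.
# Interpretation assumed: f(x) = -((x^2 - x)^2)^(1/3), real-valued everywhere.
def f(x):
    return -((x * x - x) ** 2) ** (1.0 / 3.0)

for x0 in (0.0, 0.5, 1.0):
    samples = [f(x0 + d) for d in (-1e-3, 0.0, 1e-3)]
    print(x0, samples)
```

Such sampling shows that f is always <= 0 and equals 0 exactly at x = 0 and x = 1 (so these are global maxima), while x = 0.5, the point found by setting the derivative to zero, is in fact a local minimum.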
Sure - in convex non-differentiable optimisation, it happens most of the time!
The discussion above - by the way - has a few bugs.
There is a rather robust theory of non-differentiable optimisation that you may find very useful. Marko M. Mäkelä has written a few books on the discipline.
These are two examples where the max or min is attained at a point with no derivative, so the derivative alone may not lead you to the max or min. As others have said, one also checks the points where the first derivative is zero and changes sign.
Vishal kumar Pandey : how do you define this function you are so interested in? E.g., what is -(x^2-x)^(2/3) at x = 1/2, where you have -(-1/4)^(2/3)? Is it -[(-1/4)^2]^(1/3), or what?
If f(x) = -{[x^2-x]^2}^{1/3}, then this function is always non-positive. Therefore, any point where f(x) = 0 is automatically a point of global maximum. This happens at x = 0 and x = 1. Whether the function is differentiable there or not is completely irrelevant, as the conditions you keep referring to (stationarity, the gradient of f vanishes) are only **necessary** for local optimality, and only when f is differentiable at a point of local minimum or maximum.
Actually, an even easier way to see this is to note that the function g(y) = -y^{1/3} is monotonically decreasing on the set {y >= 0}. Therefore, finding the maximum of f(x) = g(h(x)) is equivalent to finding the minimum of h(x), where h(x) = [x^2-x]^2. This way you get rid of what makes your problem non-differentiable and can get back to the nice stationarity criteria.
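A short sketch of this reformulation (my own illustration; the function names are hypothetical): the smooth surrogate h(x) = (x^2 - x)^2 is differentiable everywhere, its stationary points are exactly x = 0, 0.5, 1, and it is minimal (= 0) at x = 0 and x = 1, which are therefore the maximizers of f:

```python
# Minimize h(x) = (x^2 - x)^2 instead of maximizing the non-smooth f.
def h(x):
    return (x * x - x) ** 2

def h_prime(x):
    # h'(x) = 2*(x^2 - x)*(2x - 1), defined for every real x
    return 2 * (x * x - x) * (2 * x - 1)

stationary = [0.0, 0.5, 1.0]  # the roots of h'(x) = 0
for x0 in stationary:
    print(x0, h_prime(x0), h(x0))  # h attains its minimum 0 at x = 0 and x = 1
```

Note how the problematic points x = 0 and x = 1 reappear here as ordinary stationary points of h, so the standard criteria apply.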
Anton Evgrafov thanks for the explanation. After your question about -(-1/4)^(2/3) vs. -[(-1/4)^2]^(1/3), I understood that I have to think carefully and treat the function -(x^2 - x)^(2/3) as f(x) = -{[x^2-x]^2}^{1/3}, because [x^2-x]^2 always gives a non-negative value. But what will happen if we take f(x) = -{[x^2-x]^{1/3}}^2? Will we get the same result?
Vishal kumar Pandey : This is why I was asking how you define the function. For example, if you take the definition f(x) = -exp{2/3 * log(x^2-x)}, then, with the usual analytic extension of the logarithm to negative numbers, you would get into complex-number territory, which is not particularly suited to computing maxima/minima owing to the lack of a natural ordering.
So the answer to your last question really depends on how you define/understand y^{1/3}: as the inverse function of x^3, which then makes sense for all real numbers, or as exp{log(y)/3}, which is undefined for y = 0 and is complex for y < 0.
I think we need to explain the issue in the simple case: if you need to find the absolute extrema of a continuous function f on a closed interval [a, b], you need to find all critical numbers of f, calculate f(c) for each critical number c, and also find f(a) and f(b); finally, compare all of these values to find the largest and the smallest. Your function has three critical numbers: 0, 0.5, and 1, so I am wondering why you did not define the function on a closed interval.
Hussein Ghlaio it was my mistake, I should have added the interval as well. In the question, the interval [-2, 4] was given; due to excitement, I forgot to include it. If we take the derivative of the function, we get f'(x) = -2(2x-1)/[3(x^2-x)^(1/3)].
From here I got only one critical point, x = 0.5. How did you find the points x = 0 and x = 1?
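For what it's worth, critical numbers include not only the zeros of f' but also the points where f' fails to exist; here the denominator 3(x^2-x)^(1/3) vanishes at x = 0 and x = 1. A rough Python check of the closed-interval method on [-2, 4] (my own sketch, assuming the real-cube-root interpretation of f):

```python
# Closed-interval method on [-2, 4]: compare f at the stationary point
# x = 0.5, the non-differentiable points x = 0 and x = 1, and the endpoints.
def f(x):
    return -((x * x - x) ** 2) ** (1.0 / 3.0)

candidates = [-2.0, 0.0, 0.5, 1.0, 4.0]
values = {x: f(x) for x in candidates}
print(values)
print("max at", max(values, key=values.get), "min at", min(values, key=values.get))
```

The comparison gives the absolute maximum value 0 (attained at both x = 0 and x = 1) and the absolute minimum at the endpoint x = 4.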
Let me give a short summary of the example by Vishal kumar Pandey:
We may focus our attention on the interval (-epsilon, 1+epsilon) for some epsilon > 0 as the domain of f(x), or even on [0,1] (the natural domain of f is R). As Joachim Domsta rightly noticed, if f(x) is differentiable in the vicinity of x0, a change of sign of f'(x) at x0 is sufficient for f(x) to have a local maximum or minimum there. Subsequently, by the celebrated argument of Fermat, f'(x0) = 0. We do not have this situation in the example discussed here. At x0 = 0 or x0 = 1, f is not differentiable at all: the left and right derivatives at 0 are +infty and -infty, and similarly at 1. Yet at these two bad points we have a (global) maximum. No contradiction with what I wrote above.
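The infinite one-sided derivatives at x0 = 0 can also be seen numerically; a quick sketch (my own, again under the interpretation f(x) = -((x^2 - x)^2)^(1/3)):

```python
# One-sided difference quotients of f at x0 = 0 blow up as h -> 0:
# the right quotient tends to -infinity, the left one to +infinity.
def f(x):
    return -((x * x - x) ** 2) ** (1.0 / 3.0)

for h in (1e-2, 1e-4, 1e-6):
    right = (f(h) - f(0.0)) / h       # ~ -h**(-1/3), diverges to -infinity
    left = (f(-h) - f(0.0)) / (-h)    # ~ +h**(-1/3), diverges to +infinity
    print(h, right, left)
```

This matches the +infty / -infty one-sided derivatives described above: the graph has a sharp cusp at 0 (and at 1), which is exactly where the maximum sits.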
There is something deeper under the skin: a derivative function f'(x) defined in (a,b) has the intermediate value property (also called the Darboux property), i.e., the image of any sub-interval of (a,b) is an interval. This is why f'(x0) = 0 when f'(x) changes sign at x0. It is interesting to observe how close the Darboux property is to Fermat's argument.
Even though the example by Vishal kumar Pandey is within the scope of elementary calculus, it has significant didactic value. Using it, we can discuss "anomalies" of functions with our introductory mathematical analysis students.
Roman Sznajder : what Joachim Domsta wrote (i.e., if the derivative exists in a neighbourhood of x0, excluding x0 itself, and changes sign from + to - at x0, then x0 is a point of local maximum) is obviously not true without assuming that the function is at least upper semicontinuous at x0. In the same way, the derivative, if we allow it to take "infinite" values (as may be pertinent when discussing functions with non-differentiable points), is not necessarily Darboux if the original function is not. Luckily, the example in this discussion is continuous, but I think we should list the underlying assumptions to avoid further misunderstandings beyond what is clearly already going on.
Anton Evgrafov: Nowhere did Joachim Domsta write the phrase "excluding x0 itself". On the contrary, from his writing it is clear that the derivative is defined in a full (not punctured) neighbourhood of x0, which obviously implies that the function f is continuous, hence upper semicontinuous, in a neighbourhood of x0.
A simple application of the Darboux property: take a monotone function with one jump on the interval [a,b] and show that it is not the derivative of any function defined on [a,b].
Roman Sznajder : Of course you are right. I got confused by Joachim Domsta's statement "The theorem does NOT state that the derivative must exist at the point where the maximum/minimum is attained", which is obviously separate from "IF the derivative exists in a nbhd...". My apologies.