Suppose that f and f′ are continuous functions on R.
If f's limit is zero at infinity, does that imply f′ has the same limit at infinity?
The zero limit has no effect on the derivative. Take f(t) := \sin(t^2)/t. Then f'(t) = 2\cos(t^2) - \sin(t^2)/t^2, and the value of f' is 2 at any point t = (2\pi n)^{1/2} with n a natural number.
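A quick numerical check of this counterexample (a minimal sketch; the sample points t = (2πn)^{1/2} are taken from the post):

```python
import numpy as np

# Martin Krepela's counterexample: f(t) = sin(t^2)/t.
# |f(t)| <= 1/t, so f -> 0 at infinity; yet f'(t) = 2 at every t = sqrt(2*pi*n).
def f(t):
    return np.sin(t**2) / t

def fprime(t):
    return 2 * np.cos(t**2) - np.sin(t**2) / t**2

for n in [1, 10, 100, 1000]:
    t = np.sqrt(2 * np.pi * n)
    print(f"n={n:5d}  t={t:8.3f}  f(t)={f(t):+.1e}  f'(t)={fprime(t):+.4f}")
# f(t) is (numerically) 0 at these points and bounded by 1/t everywhere,
# while f'(t) stays equal to 2 arbitrarily far out.
```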
You are right, Martin, although your function and its derivative are not continuous on R. It does not change anything! Perhaps it is necessary to assume that f' has constant sign in a neighborhood of infinity, so that the convergence of $\int_a^\infty f'(t)\, dt$ implies that f' tends to zero.
Well, the function is not continuous at zero, but that does not matter for its behavior at infinity (just replace it by an appropriate constant, or some such, on a suitable neighborhood of 0 so that f and f' are continuous). Constant sign on some (t, \infty) would help, of course: then f is monotone on (t, \infty) and you are fine.
The answer by Behnam Farid gives only an example of a function of bounded variation whose derivative approaches zero. The right answer is that even bounded variation does not ensure that the derivative has limit zero! The following example is a function with a continuous derivative, and it can easily be changed into a function which is even C^{\infty}:
EXAMPLE. Let $f(x) = (|n|+1)^{-2} \cos^2\big( (|n|+1)^2 x \big)$ whenever, for some $n \in \mathbb{Z} := \{ \ldots, -1, 0, 1, 2, 3, \ldots \}$,
$x \in \big[ \pi n - \pi/\big(2 (|n|+1)^2\big),\ \pi n + \pi/\big(2 (|n|+1)^2\big) \big]$,
and let $f(x) = 0$ for all other values of $x \in \mathbb{R} := (-\infty, \infty)$.
Then the variation equals $\sum_{n\in\mathbb{Z}} 2 (|n|+1)^{-2} < \infty$, and the derivative does not approach 0, since it equals 1 or -1 at each $x$ of the form
$x = \pi n \mp \pi/\big(4 (|n|+1)^2\big)$.
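A short numerical illustration of this example (my sketch; on the n-th interval the closed form of the derivative, which follows from the definition above, is $f'(x) = -\sin\big(2(|n|+1)^2 x\big)$):

```python
import numpy as np

# Joachim Domsta's example: with a = |n|+1, on [pi*n - pi/(2a^2), pi*n + pi/(2a^2)]
# f(x) = cos^2(a^2 x) / a^2, and f = 0 elsewhere; there f'(x) = -sin(2 a^2 x).
def f(x):
    n = int(np.round(x / np.pi)); a = abs(n) + 1
    if abs(x - np.pi * n) <= np.pi / (2 * a**2):
        return np.cos(a**2 * x)**2 / a**2
    return 0.0

def fprime(x):
    n = int(np.round(x / np.pi)); a = abs(n) + 1
    if abs(x - np.pi * n) <= np.pi / (2 * a**2):
        return -np.sin(2 * a**2 * x)
    return 0.0

for n in [1, 5, 50, 500]:
    a = n + 1
    x = np.pi * n - np.pi / (4 * a**2)   # point where |f'| should equal 1
    print(f"n={n:4d}  f(pi n)={f(np.pi * n):.2e}  f'(x)={fprime(x):+.4f}")
# The bump heights 1/a^2 (and the total variation, the sum of 2/a^2) shrink,
# yet f' keeps attaining +1 and -1 arbitrarily far out.
```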
Regards
Let g be the odd piecewise linear function vanishing outside intervals of length 2/n² centered at the integers n, and assume that g takes the value 1 at each positive integer. Then, if f is its indefinite integral with f(0) = -\sum_{n>0} 1/n², applying the fundamental theorem of calculus we get that f vanishes at infinity. In this way both f and f′ are continuous on R.
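A numerical sketch of this construction (my own discretization; I take triangular bumps of height 1 and base 2/n² at the positive integers, extended oddly):

```python
import numpy as np

# Nacima Memic's construction (sketch): g piecewise linear, odd, g(n) = 1,
# supported on intervals of length 2/n^2 around the integers n.
def g(x):
    s = np.sign(x); x = abs(x)
    n = int(np.round(x))
    if n >= 1 and abs(x - n) <= 1.0 / n**2:
        return s * (1.0 - n**2 * abs(x - n))   # triangle: height 1, base 2/n^2
    return 0.0

# Each bump has area 1/n^2, so with f(0) = -sum 1/n^2 = -pi^2/6 the
# indefinite integral f(x) = f(0) + int_0^x g tends to 0 at infinity.
xs = np.linspace(0.0, 60.0, 600001)
ys = np.array([g(x) for x in xs])
F = np.cumsum(ys) * (xs[1] - xs[0]) - np.pi**2 / 6
for n in [10, 30, 59]:
    i = np.searchsorted(xs, n)
    print(f"x={xs[i]:5.1f}  f(x)={F[i]:+.4f}  f'(x)=g(x)={ys[i]:+.2f}")
# f(x) creeps up to 0 (the remaining deficit is the tail sum of 1/n^2),
# while f'(x) = g(x) still equals 1 at every integer.
```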
In order to set straight some of the remarks by other RGaters, I have decided to prepare a rough LaTeX compilation of the example from my first answer (see the attachment).
Perhaps the convention of my notation is unusual there. Therefore, on the next half-page I have put a more readable formulation. Moreover, a drawing of the graph is sketched at the bottom. Hopefully this explains why the function f is of bounded variation, possesses a continuous derivative on the whole real line, and possesses limit 0 at infinity, while the limit of the derivative at infinity does not exist (in particular, it is not true that the limit of the derivative equals zero).
Note that the presented example is quite similar to Nacima Memi\'c's construction proving the same in much simpler words :-)
Best regards
Correction! A misprint appeared in the denominator in the pdf file: instead of -2 the power should be 2, as can be seen from the context anyway.
Dear Behnam Farid,
First of all, your last formula (2) is not my function. It is only a formula applied by me on different intervals (numbered by n) with different a = (n+1)² (if n is positive), all for one function. The same applies to the proper example by Nacima Memić. OUR EXAMPLES ARE NOT PERIODIC.
Since your arguments do not refer to the multi-part formula defining ONE function, i.e. since you are considering many functions with different n instead, we cannot reach a common point. Simply, we are discussing different objects.
Sincerely, Joachim Domsta
Domsta is right. His function is once continuously differentiable at the boundary points, because it goes to zero quadratically there. Moreover, it is of bounded variation. The variation is a well-defined concept (no private definitions here), and his expressions show that it is bounded. (Whether he missed a constant factor or not would be unimportant in that context.)
Anyway, here is a closed-form example:
f(x) = sin x exp(-x² sin² x)
f'(x) = (cos x - 2x sin³ x - 2x² sin² x cos x) exp(-x² sin² x)
First, we note that f(πn) = 0 for integer n. Moreover, we have, for x₀ ∈ (0, π):
|f(x₀ + πn)| = sin x₀ · exp(-(x₀ + πn)² sin² x₀) → 0 as n → ∞.
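A quick numerical probe of this example (a minimal sketch; the sample points and grid are my own choices):

```python
import numpy as np

# Klaus Kassner's closed-form example: f(x) = sin(x) exp(-x^2 sin^2 x).
def f(x):
    return np.sin(x) * np.exp(-x**2 * np.sin(x)**2)

def fprime(x):
    e = np.exp(-x**2 * np.sin(x)**2)
    return (np.cos(x) - 2*x*np.sin(x)**3 - 2*x**2*np.sin(x)**2*np.cos(x)) * e

for n in [10, 100, 1000]:
    x = np.pi * n
    grid = x + np.linspace(0.0, np.pi, 100001)
    # f(pi*n) = 0 and f'(pi*n) = cos(pi*n) = +/-1; between consecutive zeros
    # the maximum of |f| shrinks like 1/x, so f -> 0 while f' does not.
    print(f"n={n:5d}  f'(pi n)={fprime(x):+.3f}  "
          f"max|f| on [pi n, pi(n+1)] = {np.abs(f(grid)).max():.2e}")
```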
@ Behnam Farid
Your arguments about my function are false:
1. The derivative of f exists at each point and is a continuous function; at the boundaries of the intervals where the function is positive, the derivative equals zero. For instance, from the right at \pi n - \pi/(2(|n|+1)^2) it equals \sin(\pi) = 0; and from the left it is zero as the derivative of the zero function.
So please do not mislead other RGaters by claiming that f is a step function.
2. The bounded variation over the entire real line R is defined as the supremum of all possible sums of the absolute values of the changes |f(x_{j-1}) - f(x_j)|, j = 1, 2, ..., n, over arbitrary choices of increasing sequences -\infty < x_0 < x_1 < ... < x_n < \infty and over all natural n. For my piecewise monotone function it equals exactly the indicated sum of increases and decreases, which within the n-th interval equal 2/(|n|+1)^2, and obviously they form a convergent series. Perhaps your definition of bounded variation over R is different, so please write it down. Question: does your function fulfil your definition of bounded variation?
3. The following remark from your answer:
'for increasing values of |x|, the regions over which your function is not identically vanishing and maintains some analytic coherence diminish in size (for x near n the size of the region diminishes like π/(|n|² + 1)), precluding use of the analytic procedures applicable to classical functions in the unbounded region x → ∞, or x → -∞.'
does not meet the standard understanding of the limit procedure:
[\lim f(x) = 0 as x \to \infty] :\iff [\forall a>0 \ \exists A \ \forall x>A : |f(x)| < a].
4. You cannot suggest that I read a textbook about my example if the example is not defined there. Perhaps it is? If yes, then please indicate a link to a pdf file and the relevant page number.
@ Behnam Farid
Your explanations are not acceptable, since they are not strict enough. First, you did not read the proposed definition of the particular function correctly, on at least two occasions: you called it a step function, which is not true in the usual understanding of this notion, and now you are objecting that the example does not possess higher derivatives, which is not the subject of the problem. Your definition of unbounded variation, needed for understanding your claim
"With reference to the above discussions, let us now restrict ourselves to the class of functions f(x) that are of bounded variation for x → ∞. For this class of functions, the answer to the question is yes."
(which I have opposed as false) is still not provided. Thus your claims cannot be discussed seriously until the notions are strictly fixed. Changing the range of meaning of the notions ad hoc and changing the subject of the discussion (e.g. unnecessarily extending the domain to the complex numbers) leads the discussion too far away from the core of the problem. Obviously, you can choose any system of notions, but if it is incomparable with the usual understanding of their sense, then you cannot strongly suggest, as in your second answer, that someone's answer be removed instead of waiting out the response period.
Is very easy to see that the answer is not. Let me take the function sin x when x tends to zero whose limit is zero. The limit of its derivative is cos x which limit in zero corresponds to one. Thus they are different.
A very different thing is if you consider the limit of a quotient of functions which is indeterminate, 0/0 or infinity divided by infinity. In such a case L'Hôpital's rule tells you that you can substitute the functions by their derivatives (which obviously need to be different from zero or infinity).
I hope it helps in the discussion.
Sorry for my bad English. I added a 't' to 'no', which changes the meaning of my first sentence. What I wanted to write is that the answer is no. Notice that if it were yes, then L'Hôpital's rule for calculating limits would fail, because calculating the derivatives of the functions would be for nothing.
@Farid:
Both the functions that I gave and the one that Domsta gave are of bounded variation according to the standard definition of bounded variation, which you may find, for example, in https://en.wikipedia.org/wiki/Bounded_variation. It is not normally defined via the derivative, because we want to be able to calculate the variation of functions that do not have a derivative as well. The derivatives of our functions are obviously not of bounded variation, but the functions themselves are. And they satisfy the assertions we make about them, i.e. f(x) goes to zero for x→∞, while f'(x) does not.
Of course, neither sin x nor sin(x²) is of bounded variation on an interval [x₀,∞), but these are not the functions we were discussing.
Let us give another example, less abstract than L'Hôpital's rule for calculating indeterminate limits, that every physicist can understand easily. The electric potential (or the gravitational one) is constant (with arbitrary value) in any region in which the electric (gravitational) field vanishes, for instance inside a conducting cylinder, which you can imagine extending to infinity.
@Farid: "for a differentiable function f(x) of the real variable x that has limit 0 for x →∞, absence of any function of unbounded variation over x →∞ in the expression for f(x) is sufficient for f'(x) to have the limit 0 for x →∞."
And that is a meaningless statement. More or less a statement about magic. What does the absence of any function of unbounded variation over x →∞ in the expression for f(x) mean? So g(x)=x must not itself appear in the expression for f(x)? Note that x is of unbounded variation over x →∞.
(Whereas arguably no such function appears in Joachim Domsta's expression, because he may always claim that cosine expressions which would be of unbounded variation over x →∞, do not appear in the definition of the function, because they are used only on finite intervals, so the actual functions appearing are products of the cosines with rectangular functions that are zero outside finite intervals, and these products are not of unbounded variation.)
Anyway, to explain a bit more my last post. You said in an earlier post: "If one considers the total variation of a function over an interval as being equal to the integral of the absolute value of the derivative of that function over that interval (which is the standard definition of total variation for differentiable functions in every respectable text -- it is used by Hardy and Littlewood, by Daniell, and by Von Neumann, to name but three authorities*)," and I answered with a reference to the definition of the notion of bounded variation on Wikipedia. This was not meant to say that it is wrong to calculate the variation from the integral of |f'(x)|; it was only to indicate that this integral is not a useful definition of the total variation (and I don't think it is used as a definition in the books you indicate). If your function is piecewise differentiable, you may use that formula to do the calculation. But the concept of bounded variation was introduced to discuss integrability of functions on closed (or at least finite) intervals. For a continuously differentiable function, the answer is immediate -- it is always of bounded variation on a closed interval. (Mere continuity is not enough, by the way: x sin(1/x) is continuous on [0,1] but not of bounded variation there.) Therefore, the definition of bounded variation has to deal with functions that are not necessarily (piecewise) differentiable. (On infinite intervals, bounded variation does not guarantee integrability: a constant different from zero has zero variation but is not integrable on the infinite interval. So your "definition" is not useful on infinite intervals either.)
But my preceding remark that there is no room here for private definitions of bounded variation was not directed at this particular definition. For if you apply it correctly, you will see that Domsta's function is of bounded variation and hence a counterexample to your claim. He gives the calculation of its variation in his pdf file and by applying your formula, you can verify it to be correct. What I meant is that if you don't get this, either you use a private definition of the total variation of the function, which means you are talking about a different thing than Domsta, or you simply do the calculation incorrectly.
Now, I took your claim to be: If lim_{x→∞} f(x) = 0 and f(x) is of bounded variation on the interval [a,∞), then lim_{x→∞} f'(x) = 0, too.
This claim has been shown to be wrong by both Domsta's and my functions, and you should simply admit to having been in error, or else to not having understood his and/or my arguments.
If your claim is something like: If lim_{x→∞} f(x) = 0 and the definition of f(x) does not contain a function of unbounded variation on the interval [a,∞), then lim_{x→∞} f'(x) = 0, too, then the claim is correct but useless, because the definition of your function cannot contain x, which is of unbounded variation on the mentioned interval, and hence the only functions to which your "theorem" applies are constants. If f(x) is a constant, its limit can only be zero if the constant is zero, and then, of course, f'(x) is zero as well, as is its limit. So your claim applies to a single function, f(x)=0, because all other functions satisfying the limit condition must be expressed via another function (viz., g(x)=x) that is of unbounded variation on the interval [a,∞).
@Farid: I have now read your first post on the first page as paginated by RG. Most of what you say there is right, except the statement about the answer being yes if f(x) is of bounded variation. This is what I thought your original statement was, and it has been demonstrated to be wrong by both Domsta's and my examples.
Your arguments about asymptotic behavior of functions hold for functions that can be expanded in asymptotic series. This is not the case for all functions. I am pretty sure that the function I gave cannot be expanded in an asymptotic series about x=∞ (i.e. in a series in powers of 1/x), and I don't think that Domsta's example can be expanded that way either.
I really don't want to discuss the general case of bounded variation anymore, because I think that is settled.
What I find interesting, however, is that it is in fact possible to construct even monotone functions satisfying lim_{x→∞} f(x) = 0 whose derivative does not approach zero for x→∞.
Here is how.
First, consider the set of functions h_n(x) = 1 + cos(n(n+1)x) for |x| ≤ π/(n(n+1)), and h_n(x) = 0 otherwise. It suffices to take n > 0, because h_{-n}(x) = h_{n-1}(x) [n(n+1) = (n+1/2)² - 1/4 is symmetric w.r.t. n = -1/2].
Let us note a few properties of h_n(x). We have h_n(0) = 2, h_n(±π/(n(n+1))) = 0, and h_n(x) ≥ 0. Moreover, h'_n(x) = -n(n+1) sin(n(n+1)x) for |x| ≤ π/(n(n+1)), and h'_n(x) = 0 otherwise.
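The construction continues from these bumps; one concrete way to realize the idea (a sketch of my own, with my own choice of spike widths, and not necessarily Kassner's construction) is to integrate a nonnegative spike train from infinity:

```python
import numpy as np

# Monotone example (a sketch): g >= 0 has a triangular spike of height 1 and
# half-width 2**(-n) at each integer n >= 1, and f(x) = -int_x^oo g(t) dt.
# Then f is increasing with limit 0, while f'(x) = g(x) equals 1 at every n.
def g(x):
    n = int(np.round(x))
    if n >= 1:
        w = 2.0**(-n)                 # half-width of the n-th spike
        if abs(x - n) <= w:
            return 1.0 - abs(x - n) / w
    return 0.0

def f(x):
    # tail area beyond x: the k-th full spike has area 2**(-k); at the
    # half-integer sample points used below no spike is cut, so this is exact
    n0 = int(np.ceil(x))
    return -sum(2.0**(-k) for k in range(n0, n0 + 60))

for n in [2, 5, 10, 20]:
    print(f"n={n:2d}  f(n+0.5)={f(n + 0.5):+.2e}  f'(n)=g(n)={g(n):.1f}")
```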
@Farid:
I think what is important is what you actually said, not what you meant to say.
The original question is a very general one, its scope is not restricted to functions that have a (Poincaré type) asymptotic expansion. You made a statement about a sufficient condition that makes sure that f '(x) goes to zero, if f(x) does, for x→∞. Your statement itself does not contain the restriction to functions possessing an asymptotic expansion. You may have meant something that is correct, but you should accept that the three examples given show your formulation to have been incorrect. Also, it is a much more interesting fact that continuously differentiable functions of bounded variation exist that have f(x)→0 (x→∞) and for which f '(x)→0 (x→∞) does not hold than that there is a very restricted class of functions (those that are expandable in positive powers of 1/x) for which this can be proven. I can prove a much more general statement that I will give in another post.
@Baldomir:
"Is very easy to see that the answer is not. Let me take the function sin x when x tends to zero whose limit is zero. The limit of its derivative is cos x which limit in zero corresponds to one. Thus they are different."
This is too simplistic. The point x=∞ is different from finite points. While at any finite x, you can impose, besides f(x)=0, any value that you want for the derivative, things are not so simple at x=∞. This is most easily seen using the transformation that I suggested.
Set f(x) = g(t), t = 1/x; then f'(x) = -t² g'(t).
Now assume that f'(x) does have a finite limit a ≠ 0 for x→∞: lim_{x→∞} f'(x) = a ≠ 0.
This implies that g'(t) ~ -a/t² (t→0). This asymptotic relation can be integrated, and the constant of integration is subdominant, hence
g(t) ~ a/t (t→0) ⇒ f(x) ~ a x (x→∞)
So, f(x) must diverge linearly in x, if its derivative goes to a constant different from zero at infinity.
Therefore, I think one may state the following theorem:
If f(x) is differentiable for x∈R, x>x₀, and lim_{x→∞} f(x) = 0, then either lim_{x→∞} f'(x) = 0, too, or f'(x) does not have a limit for x→∞.
An alternative formulation would be:
If f(x) is differentiable for x∈R, x>x₀, and lim_{x→∞} f(x) = 0 and lim_{x→∞} f'(x) exists, then that limit is necessarily equal to zero, too.
Dear K. Kassner,
could you please set the two adjusted formulations (at the end of your last reply) in bold? Otherwise they will get lost among the long texts in this thread. Thanks a lot.
@Farid:
The problem is not one where one should look for analytic solutions, i.e. solutions in the complex plane with arbitrarily many derivatives. It is a problem that was posed for real functions. Moreover, it is obvious that you will not find regular functions with all derivatives existing in a full neighbourhood of z=∞ that go to zero at infinity without their derivatives doing so, too. So some singular behavior is necessary in examples.
Of course, there is no need explaining me my own examples. I understand them very well.
@Nacima Memi\'c
The following statement in one of my former answers:
"Note that the presented example is quite similar to Nacima Memi\'c's construction proving the same in much more simple words :-)"
is formulated in a different way than I wanted (please accept my apology). Instead, I evaluate your example as simpler than mine and even better as an answer to the posted question. This is the reason:
- it is CONTINUOUSLY DIFFERENTIABLE on the whole real line, decreasing on the negative half-line and increasing on the positive one (thus it is OF BOUNDED VARIATION on the whole real line);
- it POSSESSES LIMIT ZERO AT BOTH ENDS (at $+\infty$ and at $-\infty$);
- its DERIVATIVE DOES NOT APPROACH ANY LIMIT (in particular, it does not approach zero, which would be necessary if the limit existed; cf. the proposition by K. Kassner pointed out by Viera Cernanova in her last post).
Obviously, each of the given examples can be modified to a function of class C^{\infty} by small changes at the breaking points (e.g. with a suitably transformed function equal to $e^{-x^{-1}}$ for positive $x$, extended by $0$ for $x\le 0$).
Best regards
@Farid:
"The discussion of the complex plane is relevant insofar as it shows that in considering the functions that you have introduced, they have branch cuts all over the real axis."
Now you are talking nonsense. The function
f(x) = sin x exp(-x² sin² x)
is analytic on the whole real axis, excluding the point infinity, where it does not have a branch cut but an essential singularity.
Moreover, don't you realize that the position of a branch cut is not fixed except at its ends? We consider a problem on the real axis. We may or may not choose to extend it to the complex plane, if appropriate analytic functions are available.
In the case of the function given above, extension to the complex plane is possible inside a strip around the real axis of arbitrary width. But that does not have to affect the discussion of the behavior of its limit and that of its derivative for x→∞, as long as we are interested only in the real case (in the complex plane we would have to restrict the path along which infinity is approached to an appropriate wedge of validity). Of course, it may sometimes be simpler to use complex analysis even to prove relationships on the real line, but this obviously is not the case here, due to the essential singularity at infinity.
Second, the concept of bounded variation that you were so fond of was initiated not in the discussion of complex functions but in the discussion of real functions. It was needed for functions that were not continuous but still satisfied some criterion guaranteeing (Riemann-Stieltjes) integrability. Normally, the concept does not arise in the discussion of complex functions where much stronger restrictions on smoothness are automatic by analyticity properties.
I have contemplated a bit the question whether we can modify Farid's original statement "With reference to the above discussions, let us now restrict ourselves to the class of functions f(x) that are of bounded variation for x → ∞. For this class of functions, the answer to the question is yes.", which is incorrect, into something correct. (Note that in that post he does not refer to functions having an asymptotic expansion in the complex plane, although he later says "To leading order the most general form this function can take for x approaching ∞ is C/[x^α (ln(x))^β]", another wrong statement, which however shows that he was probably thinking of functions having an asymptotic expansion in the complex plane.)
Here is my conjecture:
If f(x) is differentiable for x∈R, x>x₀, lim_{x→∞} f(x) = 0, and f'(x) is of bounded variation on the interval [x₀,∞), then lim_{x→∞} f'(x) = 0, too.
So if bounded variation is required for f'(x) instead of for f(x), we have a valid theorem. Note that in none of the four examples with bounded variation of f(x) given so far (including the one by Nacima Memić that I had overlooked) is f'(x) of bounded variation.
And this is a sketch of a proof: Any function of bounded variation can be written as the difference of two monotonically increasing functions that are bounded. This holds for arbitrary finite intervals, but I suspect it to be true for an interval [x₀,∞) as well. (Two such functions can be constructed by increasing one with the considered function on its ascending intervals and keeping it constant otherwise, while increasing the other one as the negative of the considered function on its descending intervals and keeping it constant on the ascending ones; so if the considered function is of bounded variation, the two functions must be bounded, whether the interval is infinite or not.)
Now, monotonic bounded functions on [x₀,∞) have a limit for x→∞. But then f'(x), their difference, has a limit, too. And if f'(x) has a limit, this cannot be different from zero, as I showed before. Therefore, if f'(x) is of bounded variation and lim_{x→∞} f(x) = 0, we must also have lim_{x→∞} f'(x) = 0, q.e.d.
This argument can certainly be made more rigorous, but I believe it to be correct.
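As a numerical sanity check of the remark that f'(x) is not of bounded variation in these examples (a rough sketch; the grid resolution is my own choice), one can estimate the variation of f' for the closed-form example f(x) = sin x exp(-x² sin² x) given earlier:

```python
import numpy as np

# Estimate the variation of f' on [pi, X] by summing |differences| on a fine
# grid, for f(x) = sin(x) exp(-x^2 sin^2 x).
def fprime(x):
    e = np.exp(-x**2 * np.sin(x)**2)
    return (np.cos(x) - 2*x*np.sin(x)**3 - 2*x**2*np.sin(x)**2*np.cos(x)) * e

for X in [50, 100, 200]:
    x = np.linspace(np.pi, X, 2_000_000)
    print(f"X={X:3d}:  V(f') ~ {np.abs(np.diff(fprime(x))).sum():.1f}")
# The estimate grows roughly linearly in X (f' oscillates with O(1) amplitude
# near every zero of sin), so f' is indeed not of bounded variation on [x0, oo).
```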
Dear K. Kassner
Obviously, as you have correctly stated, the function
f(x) = sin x exp(-x² sin² x)
is analytic on the whole complex plane. Its restriction to the real axis possesses all the properties required for the counterexample to Farid's incorrect statement. Therefore, no further discussion is needed of any branch cuts that could or could not be moved from nowhere to anywhere or conversely. Very nice example.
Best regards, Joachim Domsta
@Farid:
I am not calling everything nonsense. But what you said was:
"The discussion of the complex plane is relevant insofar as it shows that in considering the functions that you have introduced, they have branch cuts all over the real axis."
Note that you were talking of "functions". I introduced two functions into this discussion: one that has no branch cuts on the real axis and is analytic in the whole complex plane (usually the point at infinity is not counted in, so one can say this), and another one, which is not even analytic on the real axis, because it is only twice continuously differentiable. Since you are talking about both functions and not just one, your statement is clearly wrong. You know, when discussing mathematical questions, it is important to be precise. It does not suffice that maybe you meant only the second function. I did not say that you were meaning nonsense, I only said you were talking nonsense. And that is the plain truth.
Now regarding the second function: when taken as the real part of a complex function and thus extended to the complex plane (it must have a non-vanishing imaginary part on the real axis, because the real part alone is not analytic), it will indeed develop branch points at the junction points of the piecewise definition. But that is not a crucial point. These branch points can be removed by making the function smoother at the joints. In fact, it is possible to make it infinitely often differentiable by appropriate deformation near the junction points. This does not mean that all its singularities go away. The branch points will get replaced by essential singularities, which are much weaker singularities along the real axis but much more ferocious off it. This again shows that it may not be a good idea in this case to consider complex functions when one is searching for an answer to a question on real functions only. Differentiability for real functions is a much less restrictive property than for complex ones. You can have real functions that are once continuously differentiable but whose second derivative does not exist anywhere. This is impossible in the complex case: differentiability on an open set implies that derivatives of arbitrary order exist.
Similarly, your statement about "the most general case" of a function being asymptotically of a certain form holds only for a very restricted class of functions (probably they are of measure zero on the space of all real functions). I am teaching asymptotic analysis, so I know what I am talking about. The problem is that we are to guess the precise conditions under which your statements hold, because you don't reveal them from the very beginning but only when it is pointed out to you that they are not as general as you claimed them to be.
I have to retract my previous example
f₁(x) = sin x exp(-x² sin² x)
and replace it with
f₂(x) = sin x exp(-x⁴ sin² x).
Of the three requirements
a) lim_{x→∞} f(x) = 0
b) lim_{x→∞} f'(x) does not exist
c) f(x) is of bounded variation on [x₀,∞)
f₁(x) satisfies a) and b) but not c), while f₂(x) satisfies all three of them.
What I overlooked in my analysis of f₁(x) was that, for large x, it does not have just one extremum in each interval between two zeros (of length π) but three. (It would have been useful to plot the function on some interval containing several zeros. Then I would have seen this immediately.)
Now the strongest extrema of f₁(x) scale as 1/x for large x (this can be easily seen from the formula by setting x = πn + a/x), so the total variation goes to infinity logarithmically. On the other hand, f₂(x) has a similar structure and also three extrema in each interval between two zeros. But now the extrema largest in absolute value scale as 1/x², keeping the total variation finite. In fact, we can define
f_n(x) = sin x exp(-x^{2n} sin² x),
and the function will be of bounded variation for all integers n>1 on the real axis. All of these functions are analytic in the entire complex plane (excluding z=∞). Since they have an essential singularity at z=∞, they will of course not be of bounded variation along arbitrary paths in the complex plane approaching infinity. An analytic function approaches any complex value arbitrarily closely in any open neighbourhood of an essential singularity, so there must exist paths where the variation becomes infinite. (But not on the real axis.)
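Here is a rough numerical cross-check of these variation claims (a sketch; the interval-by-interval grids, clustered near the zeros x = πn where the narrow extrema sit, are my own choice, so the numbers are only indicative):

```python
import numpy as np

# Estimate the variation of f_p(x) = sin(x) exp(-x^p sin^2 x) over
# [pi*n, pi*(n+1)] with a grid clustered at both endpoints, then accumulate.
def variation_on_interval(p, n, M=20001):
    a, b = np.pi * n, np.pi * (n + 1)
    s = np.linspace(0.0, 1.0, M)
    x = a + (b - a) * np.sin(0.5 * np.pi * s)**2   # clusters points at a and b
    y = np.sin(x) * np.exp(-x**p * np.sin(x)**2)
    return np.abs(np.diff(y)).sum()

for p in (2, 4):   # p = 2: the retracted f1;  p = 4: the repaired f2
    total = 0.0
    for n in range(1, 200):
        total += variation_on_interval(p, n)
        if n in (10, 50, 199):
            print(f"p={p}  V over [pi, {n + 1} pi] ~ {total:.3f}")
# For p = 2 the partial sums keep growing (logarithmic divergence);
# for p = 4 they settle down to a finite value: bounded variation.
```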
If I have time, I will give a more detailed discussion of how to estimate the variation of the functions given here in a pdf file, into which I can get formulas more easily and in a nicer form than by typing them into this window.
@Farid:
Look, you are always producing long answers that do not really discuss the issue. Instead you refer to your previous long posts. That does not make sense. I think you are wrong in stating that you have been right all along. I have pointed out several of your erroneous statements. And your attempts to correct yourself or put things into proper perspective have not clarified matters. In fact, I do not know what your proposition finally is.
So why don't you, instead of referring to many previous posts that don't prove anything, state what you believe to be the correct proposition that we have attacked -- erroneously, according to you -- in concise form, i.e. in two to three sentences, including all premises and the conclusion? Then we can see whether there remains a correct core or whether it all was nonsense. As I said, precision is important in mathematics.
I have given three propositions in very short form (boldface in my previous posts). This is something you may refute or may not refute. But they are at least clear.
I have also erroneously stated that a particular function (f₁(x) in my preceding post) is of bounded variation. If you had recognized this and pointed it out to me, I would have acknowledged it. Now it happens that I recognized it before you did, and I could give a correction.
Why is it so difficult for you to admit that the statement "For a differentiable function f(x) of the real variable x that has limit 0 for x →∞, absence of any function of unbounded variation over x →∞ in the expression for f(x) is sufficient for f'(x) to have the limit 0 for x →∞." is simply wrong? (And even nonsensical, if you consider that g(x)=x is a function that is of unbounded variation on the domain [x₀,∞).)
Dear Behnam Farid,
Let us close our dispute as scientists with the following jointly agreed substantive statements, which until now were not confirmed explicitly by us as opponents.
First, that some of your examples of functions, call them f, presented in your answers, especially in the first one, satisfy all of the following four conditions:
(i) f is continuously differentiable in [0, \infty);
(ii) f is of bounded variation over the whole domain;
(iii) lim f(x) = 0 as x \to \infty;
(iv) lim f'(x) =0 as x \to \infty.
Secondly, that the examples presented by me, Nacima Maci\'c and some other users of RG satisfy only three of the above conditions, namely (i)-(iii), and do not fulfil (iv), which proves that the conjunction of (i), (ii) and (iii) is not sufficient for (iv).
I hope you can agree that this will finish this interesting exchange of views (though not always an extremely polite one) in good spirit.
Thanks in advance,
Sincerely yours, Joachim Domsta
Dear Nacima Memi\'c,
I am very, very sorry for misspelling your name in my last answer. Hopefully, you can forgive me this tremendous error.
Best regards,
Joachim Domsta
Dear Behnam Farid,
This means that the discussion is over, since you could not agree with two obvious facts extracted from the discussion, offered so that we could agree on at least something. Every infinitely-many-times differentiable function is obviously continuously differentiable, i.e. satisfies condition (i) of my last answer. Since you do not confirm this, taking more care about to whom the condition belongs (by saying that the condition is not yours), it means that you do not want to attain any scientific consensus. Let this be seen by the young scientist you are referring to! The same with the second fact. Mathematically it is the core of the dispute, which you started by evaluating my example as incorrect (as not falling into your class...) without understanding it correctly, which you confirmed later on. Frankly, your apology concerned the fact that you misunderstood the example, not that you misevaluated it, which would be much more important for the scientific argumentation. But you do not want to accept that you were wrong in stating that boundedness of the variation of a function is sufficient for the limit of its first derivative to have the value zero. Sorry, sorry, sorry.
Despite these bitter impressions, I wish you to get closer to an appropriate way of spreading scientific knowledge,
Joachim Domsta
Dear Behnam,
in your sentence:
Already at the very beginning, you chose to neglect the fact that the condition of 'bounded variation' was only one of my conditions, that the other was that of perfect regularity in the open interval [x₀,∞).
you are wrong, since it was you who wrote:
With reference to the above discussions, let us now restrict ourselves to the class of functions f(x) that are of bounded variation for x → ∞. For this class of functions, the answer to the question is yes.
which means that you were claiming that the functions of bounded variation, to which you restricted the statement, will have zero as the limit of the derivative whenever the function itself approaches zero at infinity. Since you did not assume other additional properties in writing, I had the right to claim that your proposition is incorrect, which was confirmed by a precisely formulated example (as well as by some other examples contradicting your proposition in the form in which it stands).
By the way: please accept that in contradicting this particular proposition I was in no way expressing anything about your interesting examples; and I didn't upvote them, hoping that the claim would be corrected by you. Instead, you suggested that I withdraw my correct statements. This could make me highly discontent, couldn't it?
Now I think we have reached finally the source of the misunderstanding.
Regards
Behnam,
So what? No acceptance to mathematical arguments? Let it be.
Yours
For those who are interested, I attach a detailed discussion of the properties of the analytic functions I have given, to show that there exist regular functions of bounded variation having the limit zero for x→∞ whose derivative does not have this limit. I might also add that I have given several useful one-line answers to the one-line question posed above (in the form of short propositions).
As @Martin Křepela said:
"The zero limit has no effect on the derivative." The text by @KLAUS KASSNER, "Limiting behaviour of a class of analytic functions for large arguments on the real axis", yields answers to questions of this type.
Using the fundamental theorem of calculus,
f(x) = \int_0^x f'(t) dt,
we can generalize @Křepela's example. Roughly: (i) we can draw a continuous function g which is positive on (n,n+1) for n odd and negative for n even (denote by g⁺ and g⁻ the positive and negative parts of g) such that the area bounded by g⁺ and the x-axis cancels the area bounded by g⁻; and (ii) the integral A_n of g over [n,n+1] tends to 0. If we set f' = g, then f has zero limit.
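A concrete instance of this recipe (my sketch; the particular bump shapes and widths are an arbitrary choice):

```python
import numpy as np

# On [2n, 2n+1] put a triangular bump of height 1 and base 1/n; on
# [2n+1, 2n+2] its exact negative. The paired areas cancel, the per-interval
# area A_n = 1/(2n) tends to 0, so f(x) = int_0^x g -> 0, while f' = g
# keeps reaching +1 and -1.
def g(x):
    n = int(x) // 2                   # index of the bump pair containing x
    if n < 1:
        return 0.0
    sign = 1.0 if int(x) % 2 == 0 else -1.0
    c = int(x) + 0.5                  # center of the current unit interval
    w = 0.5 / n                       # half-base of the triangle
    return sign * max(0.0, 1.0 - abs(x - c) / w)

xs = np.linspace(0.0, 200.0, 2_000_001)
F = np.cumsum([g(x) for x in xs]) * (xs[1] - xs[0])
for k in [10, 50, 99]:
    i = np.searchsorted(xs, 2 * k + 0.5)    # top of the k-th positive bump
    print(f"x={xs[i]:6.1f}  f(x)={F[i]:+.2e}  f'(x)={g(xs[i]):+.2f}")
```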
This is an important question in the context of stability analysis of nonlinear systems.
In general, "the function tends to zero as time tends to infinite" does not necessarily imply that it also stays zero for ever, as it can leave zero, go through different values and then keep coming back and leaving zero for ever end ever. (Any limit cycle is an example, where each point is a limit point). In such a case, as the function reaches zero to leave it, the derivative (the velocity) is not zero.
So, one must be able to prove that the function not only ends f=0, but rathet at f identical zero.
So, things are different when you know that the function reaches zero to stay zero (or any constant value), when the meaning is exactly that f ends being identical zero.
Take for example the so-called "counterexample" suggested here by Martin Krepela (nevertheless, Martin only brought us the example, so the following has nothing to do with Martin in particular, as this is a classical and widely used example of so-called "counterexamples"):
"The zero limit has no effect on the derivative. Take f(t) := sin(t²)/t. Then f'(t) = 2cos(t²) - sin(t²)/t², and the value of f' is 2 at any point
t = (2πn)^{1/2} with n a natural number."
This is an example which tends to show that our common sense is not worth much and that, although a function may reach a constant position, its derivative may keep moving up and down. As I am first of all an engineer, this gave me problems, as my simple mind could not accept the idea that my robot may reach and stay at a perfectly constant position, yet its velocity (i.e., its derivative) may keep changing up and down.
So, the derivative formula above has a common mistake: it uses the product formula and thus makes explicit use of the derivative of sin(t²). This is perfectly correct... uh, except when t tends to infinity, when the so-called sine tends to become an infinitely dense bunch of spikes and is as differentiable as the Dirichlet function.
Going back to the direct derivative formula for the function f(t), namely,
df(t)/dt = lim(h→0){[f(t+h) - f(t)]/h},
may also look problematic, unless we recall that, whenever the value of f(t) is not clear at some t=t₀, then instead of simply writing f(t=t₀), we write
f(t→t₀) = lim(t→t₀){f(t)}.
So, while the derivative formula above is correct for any "naive" value t=t₀, for t→∞ the formula should be
df(t→∞)/dt = lim(h→0){lim(t→∞){[f(t+h) - f(t)]}/h}.
If used correctly, it ends up giving the CORRECT ultimate value of the derivative, which is zero.
With best regards,
Itzhak
Maybe I don't really understand the example of Miodrag Mateljević, yet as far as I do understand it, the simple sine function g = sin(wt) satisfies the rule for an appropriate w, yet neither g nor its integral f has a limit.
@ Muhammet Ali Okur
@ Martin Krepela
@Miodrag Mateljević
Interesting that none of you had any comment related to my response.
Muhammet: Despite many so-called "counterexamples", the answer is yes. If a function reaches zero to stay zero, its derivative also ends up being zero.
First, the example of Miodrag uses a function which is positive on an interval and then goes symmetrically negative on the next interval, etc., so the integral over both intervals ends up being zero, and this result remains the same for all successive intervals. However, this is not the function you are asking about, because the integral for ever keeps going positive and then coming back to zero. This integral function is not a function which reaches the value zero (or any other constant value) to stay there.
The customary counterexample brought here by Martin does reach the constant value to stay there, yet the computation of the derivative has one (customary :-) flaw: it uses a derivative formula that only holds for finite t and then extends the result as time tends to infinity. WRONG! Using the right derivative formula tells you that the derivative ultimately does tend to zero as time tends to infinity.
Best to All,
Itzhak
@ Itzhack Barkana
The following definition of the limit of the derivative as t→∞:
(*) df(t)/dt = lim(h→0){lim(t→∞){[f(t+h) - f(t)]}/h}
is not the one with which the counterexamples of this thread deal. You call it correct, which you have the right to do in math, unless the term were already occupied by another understanding. The only one commonly in use is the following:
the limit of the derivative as t→∞ equals (by definition)
(**) lim_{t→∞} f'(t), where f'(t) = lim_{h→0} [f(t+h) - f(t)]/h.
Your expression (*), even though it looks similar, as noticed by you and well known to all mathematicians, is not equal to (**). Thus you are not talking about the same notion. As an engineer scientist you might have not known the commonly used definition, but if you want to use the notion "limit of the derivative as t→∞", you should use the one which is commonly in use.
PS. When working with your definition it becomes obvious that the functions which possess a finite limit at infinity have your derivative limit at infinity equal to 0. Even some unbounded functions have this property:
EXAMPLE:
If f(t) = t^{1/2}, then 0 < [f(t+h) - f(t)]/h < max{t^{-1/2}, (t+h)^{-1/2}} for t, t+h > 0, which goes to zero as t→∞ whenever h is kept fixed. Hence, for your expression we have:
lim(h→0){lim(t→∞){[f(t+h) - f(t)]}/h} = lim(h→0){0} = 0.
Thus one should see the problem as a question: if the function is small, should its derivative (calculated before going with t→∞) be small too? And the examples recommended many times above answer this doubt negatively: there are functions f satisfying simultaneously the following two requirements:
lim_{t→∞} f(t) = 0 and, despite this, lim_{t→∞} f'(t) ≠ 0.
@ Joachim Domsta
First, thanks for your interest and for your reply.
Why ain't I surprised by your reply? Because, as I already wrote, it is the customary opinion; it used to be my own as well, and it took quite a while before I even dared to think otherwise. Of course, it was also the opinion of all my reviewers, before they agreed to do the very fine computations and change their well-established opinion.
So, the formula that you use is perfect at finite t, yet becomes simply wrong as t→∞. If you perform it at finite t, the result is perfect. HOWEVER, you should NOT use the result, which perfectly holds at finite t, to extend its conclusions to t→∞. You should try to use your formula to directly compute the derivative for t→∞, and you would see that you CANNOT any more ignore terms in h (by definition h is arbitrarily small yet FINITE) such as h·t when t→∞.
" Thus one should see that the problem as a question: If the function is small should its derivative (calculated before going with t-> oo) be small too "
NO! This is not the question! The small function can have high frequency oscillations that result in high derivatives. The question is what happens at the limit when the function reaches zero to stay zero!
Yes, even if the function is unlimited, the derivative can be zero. It only expresses the simple fact that such a function as ln(t) reaches infinite at zero slope, while such a function as exp(t) reaches infinite at infinite slope.
Even at t=0 (zero) one must be careful, or at any other value where the answer is not direct. You can compute (sin t)/t for any t, yet not for t-->0, where you must use limit. At t-->oo we must be muuuuch more careful, even if tradition taught us otherwise.
I have been forced to deal with these examples because of their effect on stability analysis of nonlinear differential equations. I hope you can rethink and even read some stuff, yet I have no intention to enter any argument here. I would only warn you that by using a formula, which is perfectly correct for finite numbers and extend it to infinity, one can "prove" that 1=2 or that a+b=c in any triangle.
(Try to) Have Fun!
Itzhak
@ Joachim
Written messages do not convey tone, and I would not want to be misunderstood or to leave any impression of hostility.
If you can read any of my published results (quite a few by now), where I thoroughly and carefully had to review and re-review many "well-established" concepts related to limits and derivatives, and if you can then show that I am wrong and where I am wrong, I will be happy to correct myself. After the long agony of corrections and re-corrections before they ended up being published, I doubt, however, that you can.
The "other" formula that you show and your other examples are not new to me. Instead, the fact that "your" derivative gives the wrong result as t→∞ is new to you. Just for the sake of simple curiosity, maybe you can try it and, veeery carefully, see what you get as t→∞.
@ Itzhack Barkana
If to someone's question, like
Muhammet Ali Okur>: "Suppose that f and f′ are continuous functions on R. If f's limit is zero at infinity, does that imply f′ has the same limit at infinity?"
and to someone else's kind explanation of the state of common understanding of this question, like
Joachim Domsta>: "Thus one should see the problem as a question: if the function is small, should its derivative (calculated before going with t→∞) be small too?"
your answer is as follows
Itzhack Barkana>: "NO! This is not the question! The small function can have high-frequency oscillations that result in high derivatives. The question is what happens at the limit when the function reaches zero to stay zero!"
then you aren't interested in further discussion at all.
No regards, Joachim Domsta
" As an engineer scientist you might have not known the commonly used definition, ...."
The only difference between arrogant people that call themselves mathematicians (and look down at "simple" engineers) and engineer scientists is that the later must not only know all the relevant Math, but must also put it to work, while the first can declare themselves satisfied with just the formulas.
I was answering other people's question and I was not writing to you before you wrote to me. And, naive me, I thought there can be a technical discussion here! I will never grow up!
I am always sorry to meet rudeness and I did my best trying to avoid this and maybe to make you use your "well-established" formula (same thing that I first did and checked it carefully) and see why it cannot give you the right result as t-->oo (as I did and ended being shocked to find it wrong).
I wrote: (Try to) Have Fun!
I really hope you have also things that you can enjoy.
In any case, here you are wrong and rudeness does not make you right.
@ Itzhack Barkana
You are trying to run away from my reasoning. My advice was, and remains, not about a subject of mathematics. It concerns the range of meaning of the notion "limit of the derivative at t→∞". Its main feature is not that it is well established in the sense of some hard-to-read real facts. There is no question whether the definition is or is not true. It is a common convention that this is the property expressed by condition (**). And it is not my defining condition; it is the exact meaning of the combination of two notions: the derivative at a point t, and the limit of this value as t approaches infinity. And obviously, my remarks about the range of the definition fulfil your expectation:
Itzhack Barkana>: "I thought there can be a technical discussion here",
since I was showing you the technicalities of a wrong application of the definition. Let me try again, in order to avoid further accusations by you of my being a mathematician (ISN'T THIS FUNNY?).
EXAMPLE 2. IF some temporarily blind person asks "What colour is that car?" (pointing at a black car standing to the right of the person, and being convinced that there is a well-established/accepted range of meaning of the notion "colour"),
and IF you answer that it is not a question about the colour but about the number of seats, because you are fixed on the idea that this is the meaning of the notion "colour",
and IF, due to this, you are (obviously consistently!) inferring that the (right) answer "black" is wrong, despite the attempts of other people to explain that your definition is different from the one commonly accepted,
THEN this means that you are attacking not mathematics, but a convention, which is not disputable since it already exists.
The only step you could correctly make is to call on the community of interested people to change the commonly accepted range of the definition into another one, which (according to you) better fits the common sense of the words in use. And you had a chance here at RG to do this, in a way like:
"Dear All, let me suggest that the commonly accepted definition of this and this is not readable. Thus let me suggest the following . . . If this were assumed, then the question might have a different answer, etc."
BUT NO, you have chosen to accuse the mathematicians of not being right in their answers. You must hate them, even though the commonly accepted MATHEMATICAL definitions of limits and derivatives are due to one of the greatest PHYSICISTS AND PHILOSOPHERS, IZZAK NEWTON. Thus, instead of trying to make fun in a senseless discussion occupying someone's valuable time, you had better try to understand their answer first, in order to get the right perspective for opposing it, IF they are apparently wrong.
Joachim Domsta
PS1. Let me repeat:
Joachim Domsta>: "The following definition of the limit of the derivative as t-> oo:
(*) df(t)/dt=lim(h-->0){lim(t-->oo){[f(t+h)-f(t)]}/h},
is not the one, which the counterexamples of this thread deal with."
PS2. Your condition (*) possesses a formal error: the right hand side (RHS) is independent of t and the LHS depends on t. Didn't you want to have on LHS "df(oo)/dt" , or in a more consistent notation: "df(t)/dt|t=oo " ?
PS3. I can accept your new range of meaning of the words "the limit of the derivative as t-> oo", since even though hated mathematician, I am open to changes, but you cannot continue with senseless accusation of incorrectness of the till now presented counterexamples given in the wein of the "old fashioned" definition. This could improve our relations tremandously, up to giving you more complete answer and advices whether the being zero beginning from one instant is necessary for fulfilling your new definition.
Oh, sorry, Sir Isaac Newton! I did not know it was you there!
No, Joachim, I do not hate mathematicians. Not only do I respect them, but I venerate their contributions, as for many years I have also been using them to make sure that real-world things will indeed work.
A small-valued function is not sufficient to guarantee a zero derivative. A constant-valued function, even of large value, does.
If you could only stop for a moment to put your formula to work and see why it cannot work as t→∞!!!!
Or if you would only agree to save yourself the effort by reading some material, instead of wasting your precious time arguing about "well-accepted" generalities!
As you seem to try to appeal to common sense, and also for anyone here who might be interested: having the position (i.e., the function) reach a constant value while the velocity (i.e., the derivative) keeps changing, although its "mathematical proof" is "commonly" accepted, is simply pure nonsense, as your common sense indeed must tell you.
One must only take the pains to go back to the basic definitions and see where and how they are abused.
BUT, as I have no interest in "winning," I admit: You won!!!!!!
Black is black indeed!
What's next? Some examples from agriculture?
Better do something that you enjoy or take a well-deserved break.
@ Itzhack Barkana
I didn't win! And I was not supposed to win. I just have the hope that you will understand the difference between a particular definition [in this case, the definition of the limit of the derivative as t→∞] and the notion of a definition. You are not giving up on getting this difference, so you are losing.
Apropos checking the formula (**) lim_{t→∞}{lim_{h→0} [f(t+h) - f(t)]/h}, with which I have to . . .
Itzhack Barkana>: " . . . see why it cannot work as t→∞ ".
This is an example of how it works for f(t) = sin(t²)/t:
step 1. calculating f'(t) = lim_{h→0} [f(t+h) - f(t)]/h at an arbitrary given t: by the chain rule we have
f'(t) = {.....} = 2t·cos(t²)/t - sin(t²)/t² for EVERY t not equal to 0
step 2. calculating lim_{t→∞}{2t·cos(t²)/t - sin(t²)/t²}: by the Heine definition we have to check whether the limits of the sequences of values at the points of any sequence t_n approaching infinity are all the same. To demonstrate that this is not the case, let us take:
EXAMPLE nr 1: t_n = [2nπ]^{1/2}: its limit equals ∞; f'(t_n) = {2t·cos(t²)/t - sin(t²)/t²}|_{t=t_n} = 2; this approaches 2 as n→∞.
EXAMPLE nr 2: t_n = [(2n-1)π]^{1/2}: its limit equals ∞; f'(t_n) = {2t·cos(t²)/t - sin(t²)/t²}|_{t=t_n} = -2; this approaches -2 as n→∞.
Since the two limits are not equal, hence
(1st) the limit (**) does not exist,
(2nd) in particular the limit is not equal to zero, despite the obvious fact that
(3rd) |f(t_n)| is less than or equal to 1/t_n, which approaches 0 as t_n approaches infinity.
We have shown: there exists at least one function f(t), with domain t>0, with limit value 0 as t approaches infinity, whose derivative does NOT approach zero as t approaches infinity.
Final conclusion:
The answer to the thread question "If f's limit is zero at infinity, does that imply f′ has the same limit at infinity?" is NO.
PS. I have added this in case the meaning of the examples of the thread and the right answers were presented insufficiently completely.
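For what it is worth, the two Heine sequences above can be checked mechanically (a minimal sketch):

```python
import numpy as np

# Heine criterion for lim_{t->oo} f'(t), with f(t) = sin(t^2)/t and
# f'(t) = 2*cos(t^2) - sin(t^2)/t^2, along two sequences t_n -> oo.
fprime = lambda t: 2 * np.cos(t**2) - np.sin(t**2) / t**2

n = np.arange(1, 6) * 1000
t1 = np.sqrt(2 * n * np.pi)         # EXAMPLE nr 1
t2 = np.sqrt((2 * n - 1) * np.pi)   # EXAMPLE nr 2
print(np.round(fprime(t1), 4))      # -> [ 2.  2.  2.  2.  2.]
print(np.round(fprime(t2), 4))      # -> [-2. -2. -2. -2. -2.]
# Two subsequences with two different limits: lim_{t->oo} f'(t) does not exist.
```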
You work so hard instead of reading something (and accepting, as I did, that there might be things that even you may not know!)
You don't have to write so much. Step 1 contains all the nonsense!
How did you get the result of step 1? Did you use the product (or the ratio, if you feel better) rule? Did you differentiate sin(t²) and assume that its derivative is 2t·cos(t²), even as t→∞????
You are almost right...uh, except that you are wrong!
Yes, any kid knows that the derivative of sin(wt) is w·cos(wt), but NOT for infinite frequency!!!
Same for sin(t²) = sin(t·t)! You can take its derivative for any finite t, yet not for t→∞, when the so-called sine function becomes a set of infinitely dense spikes and is as differentiable as the Dirichlet function!
That's why taking your "result" 2t·cos(t²)/t - sin(t²)/t², which is correct for any finite t, to infinity is simple nonsense.
THAT'S WHY you must NOT use the product rule and must try to work with the entire function f(t) = sin(t²)/t, using the basic rules, either the one that you called "yours" or "mine".
When, patiently and carefully and taking care of every detail, you first use yours, you MUST see that, in order to get the answer, you ignore "small" terms in h, such as h·t. Yes, this is correct for almost any t, yet NOT as
t→∞!
That's why the "well-established" rule is the rule that "everybody uses"!
Because it is correct, in general, or better said, for any trivial case.
That's also why, when you want to see what happens at any special point t=t₀, and here of course for t→∞, you can't just write f(t) and f(t+h), but must first go to lim(t→∞){f(t+h)} and lim(t→∞){f(t)}, and only then, after you know what this gives, can you go on to the limit in h.
I am pretty sure that I am writing only to myself, but what can I do?
I already wrote that I will never grow up. :-)
Nevertheless, although it changes what we all have been used to believing, my stuff is published, so there are surprises, and there could be some who are really interested in learning.
Yes, with best regards,
Itzhak
(To help people spell my name, I "reveal" that I am Chinese and my name is Itz Hak. Besides, if Itzhak is good enough for Itzhak Perlman, it is perfectly good for me, too.)
To my sentence:
> step 1. calculating f'(t) = lim_{h→0} [f(t+h) - f(t)]/h at an arbitrary given t: by the chain rule we have f'(t) = {.....} = 2t·cos(t²)/t - sin(t²)/t² for EVERY t not equal to 0
Itzhak Barkana replied: "Step 1 contains all the nonsense!
How did you get the result of step 1? Did you use the product (or the ratio, if you feel better) rule? Did you differentiate sin(t²) and assume that its derivative is 2t·cos(t²), even as t→∞????"
This is a wrong reaction, since:
1. I didn't claim that this formula is valid for t=infinity;
2. the statement "its derivative is 2t·cos(t²), even as t→∞" is not defined; the only question is whether the derivative possesses a limit as t→∞, and to this the correct answer is "the limit does not exist", which can be shown in the way exhibited in step 2.
The real nonsense is to use notions outside of their accepted meaning. Let me repeat once more: definitions are not established by someone being right; they are established by the choice of the community using them.
Itzhak Barkana replied: "Yes, any kid knows that the derivative of sin(wt) is w·cos(wt), but NOT for infinite frequency!!!"
sin(t²) is not a function with infinite frequency; within the commonly accepted structure of notions, the property would be better expressed by the words:
the instantaneous frequency of sin(t²) is an unbounded time-dependent function; at instant t it equals 2t.
""but must first go to lim(t-->oo){f(t+h)} and lim(t-->oo){f(t)} "
noone can order anyone elese to perform the procedure defined by
(**) lim_{t->oo} { lim_{h->0} {[f(t+h)-f(t)]/h } }
in the reversed order, since the result wouldn't be the one ordered by the definition of the limit of the derivative; Mr I.Barkana is trying to call the community, that it must be changed the order, which would calculate something completely different, the name of which is the derivative of the limit Obviously this makes sense but it is trivial: If the limit is a number, than it is independet of h thus its derivative MUST be zero. If the limit does not exist, or it it is infinity, or it is -infinity, then the expression defining the derivative f(oo+h) - f(oo) makes no sense, simply such a derivative if the limit does not exist.
In order show some less prepared students of mathematics, the following example shows that the change of order of taking limits may leads to different result:
LET us assume that F(x,t) = exp(-xt) for x,t>0. Then we have:
1. lim_{t->oo} { lim_{x->0} F(x,t) } = lim_{t->oo} { 1 } = 1
2. lim_{x->0} { lim_{t->oo} F(x,t) } = lim_{x->0} { 0 } = 0
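To make the order-dependence tangible, here is a minimal numerical sketch in Python (the sample values of x and t are arbitrary illustrations, not part of the example above):

import math

def F(x, t):
    # F(x, t) = exp(-x*t), defined for x, t > 0
    return math.exp(-x * t)

# inner limit x -> 0 first (x tiny, then t grows): values stay near 1
print([F(1e-9, t) for t in (1e2, 1e4, 1e6)])

# inner limit t -> oo first (t huge, then x shrinks): values stay near 0
print([F(x, 1e12) for x in (1e-1, 1e-2, 1e-3)])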
I hope that this will help in understanding that it does matter which value we are going to calculate:
the limit of the derivative defined by (**) OR
the derivative of the limit defined by Mr I. Barkana by (*).
The question of this thread
If f's limit is zero at infinity, does that imply f' has same limit at infinity?
is obviously related to the limit of the derivative.
That's enough from my side of this kindly supplied lecture, for a better understanding of the stuff by those who care.
Joachim Domsta
Joachim, let's start with the basic assumption that I am wrong!
Why do you feel the need for your frightening official tone???
You mentioned Newton???
Would you have rejected Newton and his "peculiar" new ideas that he called "calculus," only because they did not follow, and might even have gone against, all the "commonly accepted" ideas of the time? No need to even mention that these were a "simple" physicist's ideas, based on some very fragile (or even metaphysical) definitions of limit (which did not stop being fragile for quite a long while, not before Cauchy and Weierstrass put some order).
First, if you say "I did not claim that it holds for t-->oo," then where is the argument? There is no argument at all.
The original question was: if a function ends up being zero, does its derivative also end up being zero?
If a function is NOT constant, or before it reaches a constant value, its derivative has no reason to be zero.
I hoped you could see the math (laid out in great detail in quite a few publications) and, like quite a few other mathematicians, could overcome the first surprise and hostility and might end up agreeing with the simple fact that your (all of our) common sense is not necessarily that bad and that, if the position (i.e., the function) ends up staying at a constant value, its velocity must end up being zero.
But, if you can't, then you can't. Just stop preaching and teaching me those "common" things that I myself have been teaching for quite a few good years, until I was "forced" to review them and to find them wrong.
I really hope you have better things to do and to make you feel better!
@ Itzhak Barkana
Because of the content of your answers, no further substantive comments from me are possible. Basically, this is because you are applying means unacceptable within a serious scientific discussion. For instance,
--- you introduce, without definition, the notion "a function ends being zero" as a replacement for "f's limit is zero at infinity". I cannot follow your text if you don't define new notions.
--- asking me, "if you say 'I did not claim that it holds for t-->oo,' then where is the argument?", you have substantially changed my quotation, which was "I didn't claim that this formula is valid for t=infinity"; by this inaccuracy you have created a risk that some readers get the impression that my point is meaningless (if this was done unintentionally, then OK, but remember: a quotation is a quotation!).
--- you do not comment on my plain diagnosis of your fault, namely that you change the order of taking the limits (this could be a good starting point for the mutual understanding you ask for in your last answer); hence you either do not understand my words (then you should admit this), or you purposely omit the truth that is inconvenient for you (which is the same as running away from the core of the discussion).
--- . . . .
Joachim Domsta
My answer to your question "Why do you feel the need for your frightening official tone???" is: I am happy if someone trying to fight with unacceptable means feels frightened, since this may stop, or at least diminish, the negative effects of spreading quasi-knowledge and wrong opinions about mathematics.
Dear Joachim, Dear Itzhak,
Excuse me for telling you my opinion about your discussion.
The original question is not well posed.
So one can paraphrase the question into two different forms, and each form has a different answer. There is no need to spell them out, because many of the followers already did. Accordingly, both of your answers are correct, up to how you understand and adopt the form of the question.
Wish you the best
Dear Issam,
No need to apologize (at least from my side) for expressing your opinion.
As I tried to explain in my very first message, I would not have entered an argument between pure mathematicians on some peculiar property of some function. I was attracted to this discussion by the original question of Muhammet "If a function's limit is zero at infinity, does that imply its derivative has same limit at infinity?" because it happens to be important in the context of stability analysis of nonlinear differential equations.
Because simple common sense would tell you that the answer should be positive, the example brought by Martin is widely used as a counterexample to common sense: a function which reaches a constant limit (zero in this case) as t-->oo, while its derivative keeps jumping up and down forever. Because this example creates lots of difficulties for stability analysis, I had to deal with it (and with many other "counterexamples") for quite a while and, to my surprise at first, I ended up "discovering" that people take a computation that is only valid for finite t and then carry the result over to t-->oo. A wrong procedure.
However, it was my mistake to enter an argument with Joachim. I had not read all the previous exchanges and was not aware of his style of argument, which is based on not reading and on picking out some words that allow him to stay with his same old views.
Mathematical proofs require careful and thorough treatment and cannot be settled in some written arguments. My results now appear in quite a few publications. I could be right and I could be wrong, yet not all of us can be right. Even Joachim somehow "managed" to end up admitting that the formula does not hold as t-->oo.
Yes, it makes no sense arguing with people who do not read things that are not their own. I tried to rest my case and I hope this is the end (unless someone has a real question or comment :-).
Dear Followers,
IB>: " Even Joachim somehow "managed" to end admitting that the formula does not hold as t-->oo. "
By this statement Mr. Itzhak Barkana is again referring to my opinions without the needed care for exactness, which cannot be left without correction. My statement was and remains as follows:
JD>: "I didn't claim that this formula is valid for t=infinity"
which makes a great difference, as was shown in detail. To be understood more strictly: the reason is that it would be plain nonsense to state that a formula like (**) is or is not valid for t=infinity, or that it holds or does not hold as t->oo. For completeness, this is the questioned formula for the limit of the derivative of f as t approaches infinity:
(**) lim_{t->oo} { lim_{h->0} [f(t+h) - f(t)]/h }
This formula already contains the passage with t to infinity, so it can be neither valid nor invalid for t->oo or for t=oo. It is, or is not, valid as a definition of the notion: the limit of the derivative of f as t approaches infinity. In this case we have no free choice of behavior with respect to t. For clarity: the claim lim_{t->oo} exp(-t) = 2 can be evaluated as valid or invalid, but it cannot be validated separately for t->oo or for t=oo, since any expression lim_{t->oo} g(t) can be evaluated in two possible ways only: either it possesses some number value or it does not exist. By this I would like to stress:
THE EXPRESSION lim_{t->oo} g(t) IS INDEPENDENT OF THE PARAMETER t.
THE EXPRESSION lim_{t->oo} g(t) DEPENDS ON THE FUNCTION g ONLY.
To Itzhak Barkana: Stop referring incorrectly to my statements. According to commonly accepted rules, you have the obvious right to criticize them as incorrect, as inconsistent, etc., but you have no right to present them changed, unless you simultaneously warn the potential readers that your excerpt from my texts has been modified (e.g. for FUN).
You also have the right to evaluate my way of being. That would be your evaluation! I will not oppose such evaluations if they are made under your signature. But remember: my authorship is something I will defend by all available means.
Joachim Domsta
Dear friends,
The latest response from the respected Professor Joachim Domsta puts some order here. Yes, he is right; not only did I misinterpret his response, but he is entitled to full authorship of his writings, because what he writes has nothing to do with my writing and represents a totally different world. I am sorry that Joachim ever related to my messages and that I was misled into thinking that there could be a discussion. My messages are not addressed to him and I hope he will do me the decent favor of totally ignoring my messages and not relating to them at all.
For whoever might have any interest in the original question, I attach a Word file with some more details.
IB: Therefore, although we can make use of the product formula with f(t)=sin(t^2)/t for any finite t, we cannot explicitly use the derivative of sin(t^2) within any expression that takes it to t→∞, including the product rule. Also, if we use the basic rule (1) above with the function f(t)=sin(t^2)/t, it gives the same result as the product rule for finite t's, as it should, yet one ignores "small" terms such as t*h, and they cannot be ignored as t→∞.
This part exactly explains the source of the misunderstanding of the meaning of "the limit of the derivative at t→∞". Mr Itzhak Barkana tries to evaluate this quantity by a simultaneous passage with h→0 AND t→∞, whereas the meaning is the following: find the derivative for EVERY t separately, and then go with t→∞. According to this, it is obvious that Mr Itzhak Barkana has no right to falsely accuse me of a wrong answer, since I have always clearly said what the appropriate order of taking the limits is. In this situation I can only agree that his understanding of the notion "limit of the derivative AS t→∞" and mine differ, since he tries to use "the derivative AT t=∞" instead. However, my understanding uses the literal meaning of the words, and is therefore arguably more correct [it does not require any additional explanation of why one should use a specific simultaneous change of t and h].
But of course, one can define each of these notions differently; since this would be totally different from the common sense of the words, it requires an explicit formulation of the NEW meaning.
It may sound peculiar but, despite the regretfully hostile tone, in retrospect I am thankful for such stubborn criticism, which I also used to receive from many, in particular before things were published.
Because this also used to be my own opinion (like all of us) before I took the pain to revisit all those "well-established" concepts, those persistent objections only show how delicate these "simple" things are and how much more detailed explanation they need; and indeed, they have been carefully treated in the new publications. Still, the "miracle" is those readers, including reviewers, who can really read a second (or maybe a third or fourth) time and can then change an initially negative, or even openly hostile, opinion.
This only shows how routine, built on our long dealing with routine functions, makes us all end up with "well-established" rules that we then apply even where they do not belong.
Take the simple F(t)=sin(t). It is very clear that its derivative is f(t)=dF(t)/dt=cos(t), and this is clearly right for any t. Therefore, one can clearly see and claim that the derivative keeps moving up and down forever, including as time tends to infinity.
Similarly, one can see that the derivative of F(t)=sin^2(t) is f(t)=2 sin(t)cos(t)=sin(2t) and, therefore, one is again inclined to "clearly" see that the derivative keeps moving up and down forever, including as time tends to infinity. The only "small" problem here is that the behavior of the function f(t)=sin(2t) as time tends to infinity has nothing to do with the derivative of the function F(t)=sin^2(t) as time tends to infinity, for the "trivial" reason that F(t)=sin^2(t) becomes a bunch of infinitely dense spikes as time tends to infinity and is no more differentiable there than the Dirichlet function is anywhere.
But one must be careful with much “simpler” issues. Even to simply write F(3) for the specific value of a function F(t) at the particular argument value t=3 is only possible if the function is well-defined (as a simple and direct computation) at t=3 (or at any other specific value t=t0 of the argument).
Yes, for F(t)=sin(t) we can simply write F(3) for its specific value at t=3, yet only because it is a routine function and its value is well defined as a simple computation at t=3, or at any other particular value.
“Almost” similarly, for F(t)=sin(t-1)/(t-1) we can simply write F(3) or F(5) or F(t0) for almost any t=t0, yet not for t=1. In such a case, we must write lim(t-->1){F(t)} in order to get the value 1.
Usually, one can afford to save time and does not have to write lim when F(t0) is well-defined, yet if one decides to write the limit, it is never wrong, even if it is not necessarily needed.
When we talk about the derivative, we generally take the limit by h, because we simply assume that F(t) and F(t+h) and their difference F(t+h) - F(t) are well defined at all particular t's.
Otherwise, one must clarify what happens with the functions at any particular t=t0 before one can decide whether the limit by h is possible or not.
So, the procedure that I used in this particularly delicate situation is not any change of the order of limits. It is only the right procedure for the special situation t-->oo, which must be clarified before one can try to perform the limit by h. So, while for sin(t^2) there is no derivative as t-->oo, one realizes that the division by t in F(t)=sin(t^2)/t keeps F(t) "decent" as t-->oo, and so the computation of the derivative is possible, and its result is what our common sense should have forced us to accept: when our robot's position (the function) reaches a constant value, its velocity (the derivative) cannot keep moving up and down, even if our routine use of routine formulas seems to say otherwise.
All these delicate things have been very carefully treated and re-treated and explained in large detail in my recent publications. If someone is really interested, one should at least try to read some, instead of just arguing.
And... don't forget to (try to) Have Fun!
Regards to All,
Itzhak
Muhammet Ali Okur's question:
Q1. Suppose that
h1) f and f′ are continuous functions on R;
h2) f has limit zero at infinity.
Does that imply
(I) f' has the same limit at infinity?
(I1) Under the hypotheses h1) and h2), there is no positive constant m such that |f'| >= m on some neighborhood of infinity.
(I2) There is a sequence c_n such that c_n tends to infinity and f'(c_n) tends to 0 (a numerical sketch follows below).
For example, in this situation f' can be "big" on some set A and small on some set B, where the measure of B dominates the measure of A.
But if, in addition to h1) and h2), the hypothesis
h3) f' has a limit at infinity
holds,
then (I) holds.
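For the thread's standard counterexample f(t) = sin(t^2)/t, a sequence as in (I2) can be written down explicitly: at c_n = sqrt(pi/2 + 2*n*pi) the cosine term vanishes, so f'(c_n) = -1/c_n^2 -> 0. A minimal Python sketch (the name c_n follows the notation above):

import math

def fprime(t):
    # derivative of f(t) = sin(t^2)/t, valid for finite t > 0
    return 2 * math.cos(t**2) - math.sin(t**2) / t**2

for n in (1, 10, 100, 1000):
    c_n = math.sqrt(math.pi / 2 + 2 * n * math.pi)
    print(n, fprime(c_n))   # tends to 0 as n grows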
First of all, it is my honor to see you relate to my answer(s).
Not very long ago, I would not even have dreamt of attacking such "purely theoretical" questions, if not for their terrible implications concerning the real world and, in particular, the analysis of differential equations and the ultimate behavior of their solutions (trajectories).
Because of the finesse of the point, I begged people who might have some interest in the issue to read at least some of the many publications that are now available, in which I took the pain to deal explicitly and in much detail with all those delicate issues.
Instead, again I see such a general "counterexample" meant to "prove" that a function can reach a constant value while its derivative can keep doing all it wants. Is this supposed to just ignore, or even kill, all the tremendous amount of work that proves otherwise?
So, to your question (I), the answer is YES! Moreover, the function f does not have to reach zero; it is enough that it reaches any constant value. Moreover again, f does not even have to be continuous, as long as it is defined such that it ultimately reaches a constant value.
Indeed, you can define any f and f’, yet are we sure that one is the derivative of the other?
Trying to translate your general presentation, let us assume that f' = -e^{-t} + g(t). Here, g(t) is only a set of spikes with no area. Therefore f(t) = ∫ f'(t) dt = e^{-t} and, as you and many others feel entitled to claim, f(t) reaches zero while f'(t) keeps jumping forever. In conclusion, you are right and my claims are plain nonsense.
Or... is it so? If you integrate what you called f' and see that its peculiar part g(t) has no effect on the integral function f(t), how can g(t) be part of the derivative of f(t), obtained as df(t)/dt? It is NOT! The function f'(t) is the integrand for f(t), yet only the parts that actually contribute to the integral can be found in the actual derivative.
Because my message answers a personal question, the answer might be interpreted as something personal, which it is not.
Therefore, I hope you can get hold of and have a look at at least one work (of many):
Barkana: "Revisiting limits, derivatives, and the apparent need for continuity for convergence of derivatives," Mathematics in Engineering, Science and Aerospace (MESA), Vol. 8, No. 1, pp. 29-41, 2017.
Finally, we can agree or agree to disagree, yet I can only hope that this fine point can remain the topic of a scientific discussion and will not serve as the motive for new hostile arguments.
With best regards,
Itzhak
@Itzhak Barkana
IB "When we talk about derivative, we generally make the limit by h, because we simply assume that F(t) and F(t+h) and their difference F(t+h) - F(t) are well defined at all particular t’s."
As you see, you are trying to define the derivative by taking the limit with respect to h SIMULTANEOUSLY for ALL t's, with t=infinity included. This is not the usually used definition of the derivative of a function; see e.g.
https://www.mathsisfun.com/calculus/derivatives-introduction.html
"The derivative as a function
... Let f be a function that has a derivative at every point in its domain. We can then define a function that maps every point x to the value of the derivative of f at x. This function is written f′ and is called the derivative function or the derivative of f."
Once I (and many other followers) declare the usage of the above definition of the derivative of a function, the answer MUST be as given many times before: there are functions approaching 0 as t approaches infinity whose derivative does NOT satisfy this requirement.
On the other hand, you are pressing users of mathematics to accept your definition that the derivative equals the simultaneous limit of the difference ratio as h goes to zero for all t, with infinity (wrongly) included, which, besides, is NOT in the domain. THE ONE USED BY YOU IS ANOTHER DEFINITION AND IT MIGHT BE CONSIDERED A NEW NOTION WITH NEW PROPERTIES, ETC. (CONFIRMED BY ME AT LEAST TWICE BEFORE)
But I strongly advise you not to write such nonsense as that anybody wishes to have the derivative of sin(t^2)/t existing at all t with t=infinity included. OBVIOUSLY, nobody would oppose the statement that it does not possess a derivative at t=infinity. However, NOBODY HAS CLAIMED THIS EMPTY STATEMENT. The acceptable statement says:
There is no limit of the derivative function of sin(t^2)/t as t APPROACHES infinity.
And again you are ORDERING all others to accept that, when saying "the value of the limit of the derivative as t approaches infinity," they HAVE to understand it your way, as "the value of the derivative at t=infinity". Even worse, you have argued at least once that, while stating "the value of the limit of the derivative as t approaches infinity," I had said "the value of the derivative at t=infinity," which was an ordinary lie!
Let me repeat: you can order other mathematicians, physicists, biologists, and engineers to do whatever you wish, but you cannot make them follow you. You can use notions named by the same word but with different meanings; you are FREE to do this infinitely many times. But then you have no right to state that other people HAVE to use your system of notions, and/or to accuse them of being wrong when they consistently use the old meaning of the notions and properly derive suitable conclusions.
And please take into account that spreading a new meaning of old notions WITHOUT WARNING the readers about these changes is a scientific crime. Especially if it comes jointly with personal accusations that some people are stating false theorems.
And do not play the person with the sole right to decide which words properly name notions that are already named. Even the strongest authorities do not oppose other people using their own vocabulary, provided the notions are defined according to logical rules forming a consistent system of notions, something like local variables in code with procedures. IT IS NORMAL to change the meaning of some words for local needs. What is ABNORMAL is to say, as you are trying to do, that other researchers use some names improperly, without any acknowledgment that
1 - they use them according to the widely accepted system of notions;
2 - they even accept your right to use new notions under old names;
3 - they even accept some errors in the explanations of the new meanings.
For you that is not enough. You need acceptance that your vocabulary is the only possible and correct one. NO SIR, there is no acceptance for such expectations when the vocabulary was settled hundreds of years ago!
Next time, when trying to explain your point of view, you should start with an honest confession, e.g. like this:
"I am using the following NEW definition of the limit of the derivative of a function, which I know is different from the usual one, but which might be interesting for others: . . ."
And then you can continue:
"According to this definition, I claim that . . .
Proof: . . ."
Otherwise you will never find acceptance as a serious participant in scientific discussion.
No further remarks.
Yes, it was stupidity on my part to think that one could have a "scientific" discussion or argument with Joachim Domsta. This was only because I had not read enough of his previous "scientific" arguments and was not aware of the plain hostility, or even hatred, that he addresses to opinions that dare to be different from his own.
If at least he were able to argue with my actual arguments, and not distort them!
Still, things, in particular new and different things, ultimately get published because there are people who are still able to read again and again and finally can learn to accept new things and even correct what they got used to take for granted.
I have no need for JD's approval and have no intention to respond to this mad hatred.
For those interested in the subject of the last discussion, let me present the true theorem, which is wrongly, or at least in a highly unclear way, opposed by some disputants:
THEOREM. There are functions approaching 0 as their argument approaches infinity whose derivative does not satisfy this requirement.
Proof: As an example one can take f(t) = sin(t^2)/t, defined for t>0, which obviously approaches 0 as t approaches infinity. Its derivative equals f'(t) = 2 cos(t^2) - sin(t^2)/t^2 for t>0. Thus,
for the sequence t_n = \sqrt{2nπ}, the sequence of values f'(t_n) has limit equal to 2;
for the sequence s_n = \sqrt{(2n+1)π}, the sequence of values f'(s_n) has limit -2.
Moreover, both t_n and s_n approach infinity. Therefore, according to the Heine definition of the limit of a function as t approaches infinity, the derivative f' does not possess a limit as t approaches infinity. q.e.d.
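A quick numerical check of the two subsequences used in the proof (a Python sketch; t_n and s_n as above):

import math

def fprime(t):
    # f'(t) for f(t) = sin(t^2)/t, t > 0
    return 2 * math.cos(t**2) - math.sin(t**2) / t**2

for n in (10, 100, 1000):
    t_n = math.sqrt(2 * n * math.pi)          # f'(t_n) -> 2
    s_n = math.sqrt((2 * n + 1) * math.pi)    # f'(s_n) -> -2
    print(n, fprime(t_n), fprime(s_n))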
If it weren't sad, it could sound funny reading the claims that a "simple engineer" like me can't know Math! :-)
Moreover, we are all supposed to keep believing that a curve can reach a constant value, yet its derivative can keep moving up and down forever.
If at least the Great Mathematician could stop distorting my arguments, could still read, and could at least accept the fact that 2t cos(t^2), the simple and nice derivative of sin(t^2), simply stops being the derivative as t-->oo!
Then, maybe, just maybe, one could also understand that making explicit use of it to get f'(t) = 2 cos(t^2) - sin(t^2)/t^2 as the derivative of f(t) = sin(t^2)/t makes this result also stop being the "derivative" as t-->oo!
Then, whether the derivative of f(t) = sin(t^2)/t as t-->oo exists or not and, in case it exists, what it could be, is a much more delicate step and requires "some" work, and also lots of hesitation, having to deal in detail with all those delicate and "well-established" points, before even thinking of publishing anything.
All the Best,
Itzhak
My final advice for Itzhak Barkana:
Please consider the possibility of distinguishing between the two notions:
1. the value at infinity;
2. the limit as the argument goes to infinity.
All even Better, Joachim
I am very sorry I ever wrote here, to people who refuse to read and only keep coming with their "counterexamples." Yes, those are smart and tricky and can easily convince. The fact that they are wrong is irrelevant (here).
No, Sir, your example is nothing new and has nothing to do with the argument or with the original question here. Reaching a given value at a given point is not reaching a constant value. Reaching a constant value means that the function reaches the value and stays there for all argument values thereafter.
Infinity is not a point; it happens to be infinitely long.
But again, this is irrelevant. Sorry to have disturbed you again.
@Wulf Rehder,
I hesitated quite a bit before adding anything new here, as I never dreamed that I would be getting so much pushback for trying to answer a question!
But… I wish you would read! How can I make you read?
I never ever tried to "win" over someone else and, particularly at this time and age, I am not starting any career. You could have seen that I only tried to add my response to a question that, as it happened, has preoccupied me for quite a while.
More important, it is not me; it was JB's "response" to my answer which, instead of really relating to the point, started dividing people into "good" and "bad" according to their profession. I would never do this, and definitely not because of their opinion on some theoretical issue. When I joined here, it was only to add my humble opinion on an issue, assuming that people on RG who have an interest in reading would read (and pretty many do read) and those who don't would just ignore everything.
If you had read, maybe you could have realized that it was the respected Professor JB who found it right to attack people, not problems. And this was long-long before I was even thinking to join this discussion.
Sorry, yet after more than 40 years of using and teaching high-level Math (I wonder if you were even born then :-), it was a bit hard to read such things as "as an engineer, you don't know…" or that "you can order other mathematico-physicists-biologists-engineers," but not a mathematician like JB, mind you.
As it happened, long before the question was asked here, I was forced to relate to the issue, not for the sake of some theoretical argument, yet rather because of its relation to differential equations, and this in turn because of the effect of differential equations behavior on stability of real-world nonlinear systems.
Yes, in some less then trivial situations, my analysis changes what all of us have been used to accept on derivatives (yes, yes, including myself) and so, I was not surprised to get such negative responses, and not only from pure mathematicians. However, quite a bit of readers, in particular mathematicians, were then also able to really read, and maybe re-read, and really digest my new analysis (not only what they already had in their own minds) and my new conclusions.
Moreover, your example does not help JB. Contrary to your example, where the function jumps to what you call a "constant," the example f(t)=sin(t^2)/t that JB is using is at least legitimate (as maybe JB could tell you), because its formula tells us that the function indeed tends to zero as time tends to infinity. The problem is that it is used to "prove" that "a function can reach a constant limit value (which here happens to be zero), yet its derivative can keep moving up and down."
Now, whether its "nice" derivative formula, which is correct for finite t's, is or isn't its derivative as time tends to infinity is another issue.
Here JB and I differ, and it seems that JB speaks another language (and you might have felt the need to add some Old Greek to it :-).
As "poor people" like me understand the use of limits, we write lim(t-->3){f(t)} when simply writing f(t=3), or just f(3), is not clear. For a function such as f(t)=sin(t)/t, you can simply write f(t=1) or f(t=5) or f(t=t0) for "almost" any other value t=t0, yet not f(t=0). Therefore, you write lim(t-->0){f(t)} and, as the limit requires, you first assume that the variable t is "very small, arbitrarily small, yet finite" and then, after performing what operations we can (or after using l'Hopital) to eliminate the uncertainty, at the end we finally substitute the value t=0 for the argument, to find the value f(0)=1 of the function f(t)=sin(t)/t at t=0.
If we define y(x)=x^2 and want to compute the ratio y/x, either for x=0 or for x=oo, we get the uncertainties 0/0 or oo/oo.
Therefore, to perform the limit at x=0, we first assume that "x is arbitrarily small yet finite." This allows us to write lim(x-->0)(x^2/x)=lim(x-->0)(x) and, after we have eliminated the uncertainty, we can simply substitute x=0 to get the result, which can ultimately be written simply as y(x=0)=0.
Similarly for x-->oo, we write lim(x-->oo)(x^2/x)=lim(x-->oo)(x), where now "x is arbitrarily large yet finite," and, after we have eliminated the uncertainty, finally substitute x=oo to get the value y(x=oo)=oo.
The fact that JB differentiates between the value for x-->oo and for x=oo is beyond the understanding of simple Earthlings like me.
Maybe, as an expert in Old Greek, you may help us here. :-)
I wonder.
All the Best to All,
Itzhak
@ Wulf Rehder,
Please specify in your published papers some e-mail address. This will help us to contact you to get access to your nice papers, which are now practically unavailable (at least to me). I am very much interested in those on applied mathematics, in particular applied to quantum mechanics and the structure of matter, like:
Quantum probability zero-one law for sequential terminal events
The Asymptotic Distribution of Random Molecules
Spectral properties of products of projections in quantum probability theory
Best regards and a wish for a quick return, from those who care and are very much interested in your contribution to RG.
Joachim Domsta
Dear Wulf,
Could you please provide some address at which to contact you?
This is important for reaching your articles and your books.
Also, I need your advice on some topics related to your specialization.
Thank you in advance
Dear All,
among the answers by Itzhak Barkana the following proposal of extending the notion of derivative can be derived:
DEFINITION. A real-valued function f defined on an interval (a, ∞) is said to be regularly differentiable at ∞ if and only if
(a) for every h the limit Δf∞(h) := lim_{x→∞} [f(x+h) - f(x)] exists and is finite;
(b) the limit Df∞ := lim_{h→0} [Δf∞(h)/h] exists and is finite.
Then Df∞ is called the regular derivative of f at infinity.
If one accepts this notion, the following becomes true:
PROPOSITION. If f is measurable and its limit f∞ := lim_{x→∞} f(x) is finite, then f is regularly differentiable at infinity, with Df∞ = 0.
COMMENT. This is an additional example showing that under additional conditions the answer to the current question of this thread can be POSITIVE. Despite this, in the general case, as the examples provided e.g. by Miodrag Mateljević, K. Kassner, Nacima Memić, and Martin Křepela show, the answer to the general question IS NEGATIVE.
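To make the distinction concrete, a minimal Python sketch (using the thread's standard example f(x) = sin(x^2)/x, which has limit 0; the sample points are arbitrary): for each fixed h the increment f(x+h) - f(x) dies out as x → ∞, so Δf∞(h) = 0 and Df∞ = 0, even though the ordinary derivative f' keeps oscillating.

import math

def f(x):
    return math.sin(x**2) / x

h = 0.1
# (a): fixed h, x -> oo; the increments tend to 0, hence Delta_f_inf(h) = 0
print([f(x + h) - f(x) for x in (1e2, 1e3, 1e4)])

# ordinary derivative at x_n = sqrt(2*n*pi): stays near 2, so f' has no limit
print([2 * math.cos(x**2) - math.sin(x**2) / x**2
       for x in (math.sqrt(2 * n * math.pi) for n in (100, 1000))])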
The proof is omitted here. Let me just point out that (a) is known as additively regular variability, which is equivalent to stating that exp(f(ln(y))) is regularly varying in the sense of Karamata as y tends to infinity. Moreover, (b) then becomes superfluous. For generalities on the RV property the reader is referred to monographs or articles, e.g.
E. Seneta: Regularly Varying Functions, Lect. Notes Math., Vol. 508 (1976), available at
http://akmotorworx.co.uk/153941/regularly-varying-functions-by-e-seneta.pdf
N. H. Bingham, C. M. Goldie, and J. L. Teugels: Regular Variation, Cambridge Univ. Press, Cambridge, 1987.
R. Bojanić and E. Seneta: Slowly Varying Functions and Asymptotic Relations, J. Math. Anal. Appl., 34 (1971), 302-315.
Milan R. Tasković: Fundamental Facts on Translational O-Regularly Varying Functions, Mathematica Moravica, Vol. 7 (2003), 107-152, available at
http://www.moravica.ftn.kg.ac.rs/Vol_7/13-Taskovic.pdf
Best regards, Joachim Domsta
\begin{JoaD20190508.2}
BF: the condition of f being measurable is in the case at hand equivalent to it being of bounded variation.
Not true.
EXAMPLE.
The function f(x) = sin(x)/x, x>0, possesses the following properties:
P1. f is measurable (since continuous);
P2. f is additively regularly varying at infinity (since its limit equals zero).
Despite this,
P3. f is of unbounded variation (for instance, the sum of the absolute values of the increments of f between the consecutive points
x_n := π(n + 1/2), n = 0, 1, 2, ...,
equals the sum of all the numbers
|f(x_n) - f(x_{n+1})| = 8(n+1) / [π (2n+1)(2n+3)], for n = 0, 1, 2, 3, ...,
which is infinite).
\end{JoaD20190508.2}
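The divergence can also be seen numerically; the partial sums of these increments grow like a harmonic series (a minimal Python sketch):

import math

def term(n):
    # |f(x_n) - f(x_{n+1})| = 8(n+1) / [pi (2n+1)(2n+3)] for f(x) = sin(x)/x
    return 8 * (n + 1) / (math.pi * (2 * n + 1) * (2 * n + 3))

for N in (10, 100, 1000, 10000):
    print(N, sum(term(n) for n in range(N)))   # grows like (2/pi) * log(N)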
\begin{JoaD20190508.3}
Even the measurability is superfluous in the above Proposition, since the following is true as well:
PROPOSITION. If the limit f∞ := lim_{x→∞} f(x) of a real-valued function f defined on (a,∞) exists and is finite, then f is regularly differentiable at infinity, with Df∞ = 0.
Proof: Under the assumptions, we have
Δf∞(h) := lim_{x→∞} [f(x+h) - f(x)] = f∞ - f∞ = 0.
Hence Df∞ := lim_{h→0} Δf∞(h)/h = 0. q.e.d.
REMARK. The measurability of f is used in the following proposition, which shows that condition (b) is superfluous when (a) holds.
PROPOSITION. Every measurable function f defined on (a,∞) that is additively regularly varying at infinity is regularly differentiable at infinity.
Indeed, from measurability and the properties of RV functions, the multiplicative counterpart of f, given by
F(y) = exp( f( ln(y) ) ), for exp(a) < y < ∞,
is regularly varying with some real exponent of regularity -∞ < c < +∞, so that
lim_{x→∞} F(e^h e^x) / F(e^x) = (e^h)^c = e^{hc}, for every h.
"Translated" to f, this is equivalent to
lim_{x→∞} [f(x+h) - f(x)] = hc, for every h,
which in turn implies that
Df∞ := lim_{h→0} Δf∞(h)/h = c.
q.e.d.
\end{JoaD20190508.3}
Dear All,
Considering the bad tone of previous arguments, I already regretted and even apologized for ever mixing in.
Then RG surprised me by notifying me that my name was again mentioned in relation to the new answer of Joachim Domsta to the same question.
Now, I will leave the new argument among Mathematicians about this or that topic and go back to the original question. As the question relates to functions that end with a zero limit, it implies that the limit exists.
Also, if the function f(x) tends to zero as x tends to infinity, a small h does not change much, so f(x+h) also tends to zero, and so does their difference.
This makes the numerator of (f(x+h)-f(x))/h end up being zero, and so is the limit by h, namely the derivative.
Now, you can again check the assumed "counterexample" f(x) = sin(x^2)/x and see that it stops being a counterexample and, instead, only confirms that the answer to the original question is YES.
(I hope you) Have Fun! :-)
Itzhak
To BF:
Whatever you have written before, the claim:
the condition of f being measurable is in the case at hand equivalent to it being of bounded variation.
is not true. That is all I wanted to remark in my last response to you.
- - - - - - - - - - - - - - - - - - - - -
Moreover, returning to your last answer,
"boundedness of variation was stated only as a sufficient condition."
I can agree that you have stated this.
NOTE HOWEVER, PLEASE, THAT:
boundedness of the variation of a differentiable function f defined on (a,∞) is NOT sufficient for the implication:
if the limit of f at infinity equals 0, then the limit of its derivative f' at infinity is also zero.
An example (which may be too tedious to present briefly in this thread) is in preparation.
Dear Itzhak,
I obviously accept your honest apology and answer that my answers were not always sufficiently polite either. Please accept my apology, which is as honest as yours.
With respect to your question:
The function sin(x^2)/x is (as it was!) an argument against the statement: if the limit of f at infinity is zero, then the limit of the derivative must be zero too. So the final answer to the question of this thread is NO.
However, in no case does the function contradict the proposition derived by me following your hint of understanding the notion of the regular derivative at infinity differently than usual. WHY?
Answer: because the regular derivative at infinity of sin(x^2)/x exists and equals 0 (its limit at infinity is zero, so the Proposition applies), whereas what fails to exist is the limit of its ordinary derivative as x approaches infinity. Note that the proposition is CONFINED to the regular derivative only and says nothing about the limit of the ordinary derivative, so the function takes no part in such considerations.
Best egards, Joachim
\begin{JoaD20190508.6}
EXAMPLE.
The function f(x), defined on [0, ∞) by the formula
f(x) := max{0, 1 - 2^n |x - (n - 1/2)|} whenever |x - (n - 1/2)| ≤ 2^{-n}, n = 1, 2, ..., and f(x) := 0 for all other values of x,
is obviously a probability density function (the n-th triangular spike has height 1 and area 2^{-n}, and these areas sum to 1).
Its reliability function
R(t) := ∫_{[t, ∞)} f(x) dx
possesses the following properties:
P0. R is decreasing from R(0)=1 to lim R(t) = 0 as t tends to infinity, which means it is of bounded variation;
P1. R is continuously differentiable for t ≥ 0;
P2. R'(n - 1/2) = -1 for every n = 1, 2, 3, ...
CONCLUSION.
Boundedness of the variation of a differentiable function approaching zero as t tends to infinity is NOT sufficient for its derivative to approach zero as t tends to infinity.
\end{JoaD20190508.6}
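A short Python sketch of this example (the spike of index n has height 1, half-width 2^{-n}, hence area 2^{-n}; the nearest-center lookup is a hypothetical implementation detail):

def f(x):
    # triangular spike of height 1 centered at n - 1/2, half-width 2**(-n)
    n = round(x + 0.5)                 # index of the nearest spike center
    if n < 1:
        return 0.0
    return max(0.0, 1.0 - 2**n * abs(x - (n - 0.5)))

# R'(n - 1/2) = -f(n - 1/2) = -1 at every spike center...
print([f(n - 0.5) for n in (1, 5, 20)])          # all equal 1.0
# ...while the spike areas 2**(-n) sum to 1, so R(t) -> 0 as t -> oo
print(sum(2.0**(-n) for n in range(1, 60)))      # close to 1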
\begin{JoaD20190508.7}
To BF
I am not discussing what you had in mind when you said this or that, etc. Please refer rigorously to my claims, and/or present your claims rigorously as well. One of the bad steps I made was to try to understand what you meant by "in the case at hand". I apologize if I did not present your thought correctly there. It will not happen any more, I promise.
Despite this, the substantive question remains:
What is the boundedness of the variation sufficient for? Does your claim require additional conditions to be fulfilled? etc.
\end{JoaD20190508.7}
To BF
Which of your propositions are implied by my proposition, please?
What do you mean by a regular function?
I was talking about regularly varying functions, which are those for which f(ax)/f(x) approaches a finite limit for every a, as x approaches infinity.
\begin{JoaD20190509.1}
EXAMPLE.
We assume that the probability density function of the gamma distribution with exponent p>0 and coefficient b>0 is given by f(x) = b^p x^{p-1} exp(-bx) / Γ(p), for x>0.
Its mean and variance equal p/b and p/b^2, respectively.
According to this and other properties of the function, one can prove that the value f(p/b) is not less than b/√p (up to a universal positive constant factor C), whenever p and b are not less than 1.
Let f_n(x) be the gamma probability density function with exponent p_n = n^6 + 1 and coefficient b_n = n^5.
Up to a constant factor, the value f_n((n^6+1)/n^5) is not less than n^5/√(n^6+1).
The function g_n(x) := f_n(x)/[n(n+1)] at points close to n assumes values asymptotically greater than n^5/[n^3 · n^2] = 1 (up to the universal positive constant C). In any case, liminf_{n→∞} g_n(n) ≥ C > 0.
Let us denote g(x) := Σ_{n=1}^{∞} g_n(x), for x>0, and
G(t) := ∫_{[t, ∞)} g(x) dx, for t>0.
PROPOSITION.
The function g is a probability density function, since the coefficients 1/[n(n+1)] sum up to 1. Therefore the function G given above possesses the following property:
P0. G is strictly decreasing from G(0)=1 to lim G(t) = 0 as t tends to infinity, which means that G is of bounded variation.
Moreover, by the preceding analysis one gets:
P1. G(t) is analytic at every point t>0; in particular, its derivative G'(t) = -g(t) is continuous and negative at every point t>0.
P2. -G'(n) = g(n) > C for every n = 1, 2, 3, ..., where the constant C is positive.
CONCLUSION.
Boundedness of the variation of an analytic function approaching zero as t tends to infinity is NOT sufficient for its derivative to approach zero as t tends to infinity.
\end{JoaD20190509.1}
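A numerical illustration of P2 (a sketch assuming SciPy's gamma density, scipy.stats.gamma.pdf, is available; the values g_n(n) stay bounded away from 0 even though the tail integral G(t) shrinks):

from scipy.stats import gamma

def g_n(x, n):
    # gamma density with exponent p_n = n**6 + 1 and coefficient b_n = n**5,
    # weighted by 1/[n(n+1)]; its mass concentrates near the mean ~ n
    p, b = n**6 + 1, n**5
    return gamma.pdf(x, a=p, scale=1.0 / b) / (n * (n + 1))

print([g_n(n, n) for n in (2, 3, 5, 8)])   # roughly constant, not -> 0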
\begin{JoaD20190509.2}
PS. Obviously, the last example cannot beat in simplicity the series of examples given by K. Kassner, available above at
https://www.researchgate.net/profile/K_Kassner/post/If_a_functions_limit_is_zero_at_infinity_does_that_imply_its_derivative_has_same_limit_at_infinity/attachment/59d6419079197b807799d857/AS%3A435307989475328%401480796920158/download/function_der_limit_infty.pdf
The reason for including the probabilistic one was to illustrate the same phenomenon also for functions which can be met in reliability theory and its relatives, like queueing theory, risk theory in finance, etc.
\end{JoaD20190509.2}
Aristotle, one of the greatest minds, if not the greatest, in the history of mankind, teaches us why the moon looks larger at the horizon than in the middle of the sky. Reason being maybe Aristotle's main contribution (replacing belief, which was the main thing before), his explanation is based on the very best reasoning, and so it is clear and very convincing.
It doesn't matter that it is simply wrong, and that the real explanation is that, along with all his reasoning, Aristotle never felt the need to also stick a finger out and just measure that the moon is the same size in both positions.
What I wrote above does not affect my respect and admiration for Aristotle and for his contribution to mankind. It just shows how strong a Paradigm can be and how much harder one must fight (with oneself) to get rid of it.
Why do I recall the great mind Aristotle? Because, after lots of arguments, Joachim Domsta finally ended up writing:
DEFINITION. A real-valued function f defined on an interval (a, ∞) is said to be regularly differentiable at ∞ if and only if
(a) for every h the limit Δf∞(h) := lim_{x→∞} [f(x+h) - f(x)] exists and is finite;
(b) the limit Df∞ := lim_{h→0} [Δf∞(h)/h] exists and is finite.
Then Df∞ is called the regular derivative of f at infinity.
If one accepts this notion, the following becomes true:
PROPOSITION. If f is measurable and its limit f∞ := lim_{x→∞} f(x) is finite, then f is regularly differentiable at infinity, with Df∞ = 0.
All people interested in the response to the question should be grateful to Joachim Domsta for writing the above. The only issue here is that this is presented as a "new" definition, and so people are not aware that this so-called regular derivative of f at infinity is the only way to obtain the derivative of a function, not only at infinity, but at any other delicate value of the argument.
To discuss this, let us go back to the customary derivative formula
f'(x) = df(x)/dx = lim_{h→0} {[f(x+h) - f(x)]/h}
of f(x) and recall that the derivative is a local procedure which, at any particular x=x0, might or might not exist, and so one must make sure that both f(x0) and f(x0+h) are well defined at that particular x0, where h is still arbitrarily small yet finite before ultimately letting it go to zero.
As we mainly got used to dealing with routine functions, we also got used to getting the same formula for any x, and so we got used to automatically using the customary results of differentiation, d(x^2)/dx = 2x, d(sin x)/dx = cos x, etc., and letting the resulting formula tell us what the limit is, or whether the limit exists at all.
However, even before talking about the more delicate issue of infinity, let us look at f(x) = sin(x)/x. While we can still use the regular derivative formula
f'(x) = df(x)/dx = lim_{h→0} {[f(x+h) - f(x)]/h},
this holds only for x values different from 0, where both f(x+h) and f(x) are well defined.
Instead, for the derivative at 0 we would first have to see what lim_{x→0} f(x+h), lim_{x→0} f(x), and their difference give.
You can plot f(x) = sin(x)/x (or just type "plot of f(x)=sin(x)/x" into Google), look at the plot, and think about what the slope at x=0 should be.
Then, first see what sort of nonsense the "customary" formula (which in this case is f'(x) = [x cos(x) - sin(x)]/x^2) gives at x=0, and only then check what we get if we first (and rightly so) compute the correct values that lim_{x→0} f(x+h) and lim_{x→0} f(x) give BEFORE proceeding to the limit by h.
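For the record, a numerical look at both quantities near x = 0 (a Python sketch; f is extended by the usual continuous value f(0) := 1, an assumption not stated above):

import math

def f(x):
    # sin(x)/x with its continuous extension at 0
    return math.sin(x) / x if x != 0 else 1.0

def fprime_formula(x):
    # the "customary" formula, valid for x != 0
    return (x * math.cos(x) - math.sin(x)) / x**2

for h in (1e-1, 1e-3, 1e-5):
    print(h, (f(h) - f(0)) / h, fprime_formula(h))
# both columns tend to 0: the slope of sin(x)/x at x = 0 is 0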
If one now uses the differentiation formula for f(x) = sin(x^2)/x, one finds that for finite x (and again except x=0) the result is a function that indeed keeps moving up and down forever. HOWEVER, the customary PROCEDURE that was used for the derivative simply does not hold as x tends to infinity. Therefore, as for x=0 above, what we might have called the derivative simply is no longer the derivative as x tends to infinity, and so, here one first has to see what happens with f(x) and f(x+h) as x tends to infinity.
Therefore, AFTER doing the exercise, one finds out that what Joachim Domsta calls the regular derivative at infinity is the only derivative for functions like f(x) = sin(x^2)/x in the problematic situation x-->oo.
Any other "proof" that people may present here using overly complex functions is a "proof" only because they take a result that is valid only at regular points and then use it at the delicate points, although what the resulting function does at those delicate points has nothing to do with the derivative there. Instead, at any delicate point x0, not only at infinity, we should first make sure that we can say what both f(x0) and f(x0+h) and their difference are. If we cannot, then there is no derivative at that particular x0. Otherwise, if we can, then we can go on to perform the limit by h, and this result is now the derivative at that particular x0. And it does not need any other "new" denomination.
All the Best,
Itzhak
Dear Itzhak,
I like your way of explaining a possible definition of f at the 'delicate' points. However, I strongly oppose the 'order':
"we would first have to see [the limit with fixed h]".
This is your suggestion of the way one may proceed, but far from the only one.
Best regards, Joachim
Dear Joachim (and anyone interested),
No doubt, reading your messages now, you seem to be light-years from the way we started, so it is hard for me to ask for more, as it took me quite a bit myself to even dare to review those commonly accepted concepts.
Nonetheless, why is it so hard to agree that when you do the limit by h, you already KNOW what f(x) is and therefore feel free to replace x by any value after you reach what you call the derivative (not normal or regular, God forbid!)?
Therefore, this is NOT the derivative at any x0 where f(x0) does not simply tell you what the value of f(x) is. Fortunately, that's why Mathematicians like you gave us the limit procedure: to be able to decide the value of f(x) at any problematic x0.
OK, we’ll wait.
Itzhak
Dear Itzhak and All interested,
IB: " Nonetheless, why is it so hard to agree that when you do the limit by h, you already KNOW what f(x) "
Yes, in such cases the derivative can be called (and is called) as calculated AT THIS x; for every x separately.
Now, if one asks for checking g as the limit of functions f(x) as x-> oo, then everybody understands to check how the values of f(x) are close to the limit value g (NOT USING f(oo), since it does not exist at all).
Everybody agrees that function differentiable at every point x determines the new function f'(x), called ferivative of f, with independent variable x. This is just another function.
Following the accepted already procedure, if one asks for cheking g as the limit of function f'(x), then everybody understads to check how the values of f'(x) are close to the limit value g (NOT USING f'(oo), since it does not exist by definition).
Correspondingly, if one can states that the limit of f does not exist at infinity, by the same procedure one can state that the limit of f' does not exist at infinity.
The crucial for the question problem is solved negativly by given mamy examples for such functions f that satisfy simultaneously two conditions:
CONDITION 1: f(x) approaches 0 as x tends to infinity
CONDITION 2: the limit of f'(x) does not exist as x approaches infinity.
Among them is the one sin(x^2)/x, defined for x>0 (only finite x-s are in use)
Can anyone deny treating f'(x)= 2 sin(x^2) - sin(x^2)/x^2, as ordinary function, despite the fact that it defined for ALL positive x >0?
Regards, Joachim
\begin{JoaD20190509.5}
Behnam Farid
I have gathered that your condition of regularity means that f is infinitely many times differentiable. Thank you.
- - - - - - - - - - - - - - - - - - - -
However, I still wonder which of your propositions are established by mine. The difficulty I have is that your claim is about the existence of the limit of the derivative at infinity, whereas my latest propositions were about the existence of the very easily satisfied condition of regular differentiability at infinity, for which, as a matter of fact, it is sufficient that f possess a finite limit at infinity.
- - - - - - - - - - - - - - - - - - - -
By the way, the last example in JoaD20190509.1 shows that even analyticity and boundedness of the variation do not suffice for the existence of the limit of the derivative at infinity.
\end{JoaD20190509.5}
Paradigm, Paradigm!
Dear Joachim, I can only hope that you will finally be able to read my message above, not just some segment of it, translated into what you already have in your mind, before thinking that you are answering "my" claim.
Yes, the function f'(x) = 2 cos(x^2) - sin(x^2)/x^2 (you had a small typo) is a regular function and you can see its behavior everywhere; yet, if you actually perform the procedure that gives it as the derivative of sin(x^2)/x, the procedure simply does NOT work as x-->oo, and so the result is NOT the derivative of sin(x^2)/x as x tends to infinity.
Because of what now seems to be an apparently metaphysical concept of infinity, I wanted to first try to avoid the discussion of infinity, and so I asked you to do the same computations for x=0, but I see that, for some reason, you prefer to eliminate 0 from the domain. Why? See what the result of the above formula gives for x=0, and then what you get if you first make sure that you compute f(0) and f(0+h) (you will have to use limits in x to do this), and decide what seems to you to be the right result. If you do it at zero first, then maybe it can help to go back to infinity.
Hoping for the best,
Itzhak
\begin{JoaD201905.6}
Dear Itzhak
Thanks for mentioning the typo.
My opinion about the limit at infinity has been explained fully.
About zero, there are usually two notions, since zero is a finite number:
1. to calculate the derivative, provided we ADD zero to the domain while simultaneously assigning an appropriate value of the function there; appropriate means such that the derivative gets a chance to exist, i.e., the limit value of f at zero;
2. to calculate the limit of the derivative as t->0. This is always available without changing the domain of the function, i.e., without adding a value of the function at zero.
For f(x) := sin(x^2)/x the limit at zero is zero; thus for the first problem we take the extended function (in advanced books consistently denoted by a different letter, usually f with a tilde) as follows:
f*(x) := sin(x^2)/x for x not equal to zero; f*(x) := 0 if x = 0.
Now we can start to derive the derivative of the NEW function at zero, by calculating the limit of the divided differences as h->0:
Df*(0) = lim [f*(0+h) - f*(0)]/h = lim [sin(h^2)/h - 0]/h = lim sin(h^2)/h^2 = 1.
This is NOT the derivative of f at zero, but the derivative of its extension f* by continuity.
Note that the function f with its natural domain R\{0} is not defined at zero, so it does not have a derivative at zero, since BY DEFINITION=CONVENTION a function may have a derivative only at a point belonging to its domain.
Note also that another extension, obtained by setting f**(0) = 3, does not possess a derivative at zero for another reason: the limit of the divided differences
Df**(0) =(?) lim [f**(0+h) - f**(0)]/h = lim [sin(h^2)/h - 3]/h does not exist.
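Both divided differences can be checked numerically (a minimal Python sketch, with f*(0) = 0 and f**(0) = 3 as above):

import math

def dq_star(h):
    # [f*(0+h) - f*(0)]/h = sin(h^2)/h^2 for f*(0) = 0
    return (math.sin(h**2) / h - 0.0) / h

def dq_star_star(h):
    # [f**(0+h) - f**(0)]/h with f**(0) = 3
    return (math.sin(h**2) / h - 3.0) / h

for h in (1e-1, 1e-3, 1e-5):
    print(h, dq_star(h), dq_star_star(h))
# first column tends to 1; second behaves like -3/h and has no limit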
Next part Saturday afternoon, sorry - my duties call for more time.
Best, JoaD
\end{JoaD201905.6}
Actually, for both Joachim and Behnam,
As I already wrote, infinity and sin(x^2)/x seem to raise too many questions, so let us go back one step to the simpler function sin(x)/x and see its behavior towards x=0. (Besides, here it doesn't matter whether you define the domain as [0, oo), (0, oo), or (-oo, oo).) I only ask you to see what the customary formula that you are used to applying for the derivative of sin(x)/x gives as the argument approaches zero.
Then just plot sin(x)/x (or simply type "Plot sin(x)/x" into Google) and see what value it takes as it tends to zero, and what slope it has towards zero.
Only then try to again compute the derivative of sin(x)/x towards x=0, this time using the limit (comme il faut :-), and see what this gives.
(For infinity, sin(x)/x is too simple, and this is why people felt the need to come up with the more complex "counterexample" sin(x^2)/x.)
Only then go back to sin(x^2)/x, maybe first to see what your customary formula gives at zero, and only afterwards finally move back to infinity.
Maybe this can convince you that what you got used to calling the derivative holds only at those points where f(x) is simple and clear, while at other points the limit with respect to x is the tool that you Mathematicians cleverly invented, and which is the only way to give us a real idea of what the value of f(x) could be in delicate neighborhoods, including the eternally mysterious oo.
Have Fun!
Itzhak
Dear Joachim,
One more try at explaining to you things that you should be teaching meeee!!!
You write:
I like your way of explanation of a possible definition of f at the 'delicate' points. However, I strongly oppose the 'order':
" we would first have to see [ the limit with fixed h]".
What you "oppose' is exactly what you do first at all routine points, only you do not have to mention it, because f(x) at these points is already clear and is the same with lim from the left and with lim from the right.
BOTTOM LINE: The "customary" procedure that you (and "everybody") use to compute the derivative is ONLY VALID at finite points and it excludes both 0 and infinite. The resut is a function that does exist everywhere, yet it has NOTHING to with the actual derivative at zero or at oo. ( I wrote actual, yet I do not mean any "new" sort of derivative)
Using the way you proceedded withthe derivative, I can "prove" that a+b=c in ANY triangle. If you never tried this "proof," it is worth trying. Can teach quite a bit.
Moreover, you must always make sure you know what people mean by a given word. It is nice to receive and also good to accept a GIFT from England or US (gift = present), yet it is a totally differernt thing when you are offered a gift in Germany (gift=poison).
So, if I wrote at oo (that seems to bother you much) it is exactly because you, Mathematicians, feel that you can decide what language we are allowed to use, and so, other Mathematicians, some who write the Math Analysis books that we both use, wanted me to write at oo instead of the longer "as x tends to oo." They both are meant to mean the same thing, though.
Dear Joachim,
I don't know where exactly you stand right now, yet this is not important.
You won’t believe what pleasure a “simple Engineer” like me is able to get from this “abstract theory” stuff, including our (sometimes-heated) arguments.
I hope you can also feel some.
Also, I don't know how to let people know exactly how important those "just theoretical and abstract" things can be for practical stuff, such as guaranteeing that your plane will arrive safely at its destination and that precise machines will indeed do their jobs quickly and precisely.
Trying this in a more mathematical language, guaranteeing that nonlinear differential equations indeed do converge when they are supposed to converge.
In just two words, THANK YOU!
All the Best,
Itzhak
Behnam Farid, I am afraid it is me, not Joachim, who must apologize for mixing up your name.
As the original question is about reaching a constant limit (at infinity), I thought you were interested. For people interested, I thought that first plotting a simple function and seeing what it should give, versus what (nonsense) the "customary" computation actually gives, could be interesting.
So, I do apologize to you and promise not to address you.
With Best Regards, Itzhak
@Behnam Farid,
I am confused: are you or aren’t you interested?
If you are, my problem is not the constant C, but the common claim:
“Of course, a function can reach a constant (including zero constant) limit value, while its derivative can keep moving up-and-down forever.”
Maybe I would not have cared much about it, yet people are throwing at me such “counterexamples” as f(x)=sin(x²)/x, and their implications are supposed to make me believe that my proofs of stability (maybe here I should say “convergence of solutions of differential equations”) are wrong, and so that a robot can end its move at a constant position and yet its velocity can keep moving up-and-down.
After first being impressed and persuaded by such examples as the one above, I ended up (to my great surprise at first) understanding that, despite its customary use, the claim is simply wrong, and not because of some bad Math, but rather because of some bad use of good math.
Nevertheless, I have had enough arguments here (and in many other places), and many things have been written in previous messages, so if you don’t agree, I would not want to start any new argument.
Best,
Itzhak
Behnam Farid, as I already said, I mistakenly thought that you also jumped into the latest arguments. Otherwise, I never ever had any intention to bring you in.
(Now, with a smile: it does not mean that I agree that f(x)=sin(x²)/x does not have a derivative as x → ∞. It does, and it is zero. :-)
Wishing you All the Best,
Itzhak
If anyone is interested:
Yes, the function 2cos(x²) − sin(x²)/x² indeed has the behavior that Behnam Farid talks about, including no limit for |x| → ∞.
However, the formula that (most) people used to get it implicitly assumes that the derivative of the component sin(x²) exists everywhere, including for |x| → ∞.
You need more than just a second look to reach the conclusion that this is wrong.
Instead, one could just use the simple and original Newton–Leibniz derivative formula lim_{h→0} [f(x+h) − f(x)]/h for the entire function f(x) = sin(x²)/x, where the presence of 1/x makes the entire function tend to zero and, even more important, also remain differentiable for |x| → ∞.
However, if one does take the pains to actually do the computations for f(x) = sin(x²)/x, one also observes that the result f'(x) = 2cos(x²) − sin(x²)/x² that (almost) everybody got requires and uses some approximations and eliminations of terms that can be done only for finite x and can no longer be done for |x| → ∞.
Then you must sit down quite a bit to understand that when you take the limit over h in lim_{h→0} [f(x+h) − f(x)]/h, you assume that you know what both f(x+h) and f(x) mean at any particular x. Sometimes, in particular for |x| → ∞, you simply cannot just write f(x), and in particular not f(∞), and so you must make use of the limit to get lim_{x→∞} f(x). In those cases, like ours, where this constant limit for |x| → ∞ (be it zero or not) exists and equals lim_{x→∞} f(x) = C, some more thought allows you to understand that an "arbitrarily small h" does not affect this, and so we also get lim_{x→∞} f(x+h) = C. Then it is not difficult to see that lim_{x→∞} [f(x+h) − f(x)] = 0, and so the final result is
lim_{h→0} { lim_{x→∞} [f(x+h) − f(x)]/h } = lim_{h→0} 0 = 0.
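For anyone who prefers numbers to arguments, here is a minimal Python sketch of both computations (my own illustration; the sample points x = sqrt(2πn) and x = sqrt((2n+1)π), and the particular values of h and x, are arbitrary choices):

import math

def f(x):
    # The function under discussion: f(x) = sin(x^2)/x
    return math.sin(x * x) / x

# The pointwise formula f'(x) = 2 cos(x^2) - sin(x^2)/x^2 keeps oscillating:
# near +2 where cos(x^2) = +1 and near -2 where cos(x^2) = -1, however large x is.
for n in (10, 1000, 100000):
    xp = math.sqrt(2.0 * math.pi * n)          # cos(x^2) = +1 here
    xm = math.sqrt((2.0 * n + 1.0) * math.pi)  # cos(x^2) = -1 here
    print("f' near +2 at x =", round(xp, 3), ":", 2.0 * math.cos(xp * xp) - math.sin(xp * xp) / (xp * xp))
    print("f' near -2 at x =", round(xm, 3), ":", 2.0 * math.cos(xm * xm) - math.sin(xm * xm) / (xm * xm))

# The iterated limit in the other order: fix h, then let x grow.
# [f(x+h) - f(x)]/h tends to 0, since f(x+h) and f(x) both tend to 0.
for h in (0.1, 0.01):
    for x in (1.0e3, 1.0e5, 1.0e7):
        print("h =", h, "x =", x, "quotient =", (f(x + h) - f(x)) / h)

The first loop keeps producing values near +2 and −2 however far out we go, so the pointwise formula has no limit; the second loop shrinks toward 0 for each fixed h, which is exactly the iterated limit written above.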
Yes, yes, bad habits die hard. It took quite a while and quite a few rounds, first for myself and then for reviewers and readers, to accept the (surprising???) result above, and so I do not expect people here to accept it; not right away, anyway.
Have Fun,
Itzhak
\begin{JoaD20190512.1}
To Itzhak Barkana
REMARKS ABOUT THE ORDER OF LIMITS FOR FUNCTIONS OF TWO VARIABLES
Issue 1. With respect to my opposition to the claim that the derivative of f at x_0 is calculated according to the formula lim_{h→0} [ lim_{x→x_0} [f(x+h) − f(x)]/h ], Itzhak Barkana has answered: "What you 'oppose' is exactly what you do first at all routine points, only you do not have to mention it."
NO, Sir! I DO know perfectly what I do when calculating the derivative: FIRST, I check whether x = x_0 belongs to the domain of f.
Secondly, if not, my result is: NO DERIVATIVE AT x_0 EXISTS. If yes, then I calculate the limit of [f(x_0+h) − f(x_0)]/h, letting h approach 0. NO EXCEPTIONS TO THIS FORMULA ARE ACCEPTED BY STANDARD MATHEMATICS. Therefore there is also no derivative of f(x) = sin(x)/x at ∞, at −∞, or at 0. For references, see ANY textbook of mathematics, at the undergraduate or graduate level, e.g.
http://www.math.odu.edu/~jhh/Volume-1.PDF, p.87
http://www.math.harvard.edu/~shlomo/docs/Advanced_Calculus.pdf, p. 146
The fact that calculating the limits in interchanged order can produce different results is well recognized in mathematics, e.g.
https://en.wikipedia.org/wiki/Interchange_of_limiting_operations
https://en.wikipedia.org/wiki/Iterated_limit#Sufficient_condition
https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem
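A standard textbook illustration of this phenomenon (not taken from this thread) is the double sequence $a_{m,n} = m/(m+n)$: here $\lim_{m\to\infty} ( \lim_{n\to\infty} a_{m,n} ) = \lim_{m\to\infty} 0 = 0$, while $\lim_{n\to\infty} ( \lim_{m\to\infty} a_{m,n} ) = \lim_{n\to\infty} 1 = 1$, so the two orders of taking the limits give different answers.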
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
Issue 2.
This is my final word about using names of mathematical notions in mathematical texts:
In mathematics, the names used for notions are correct as long as they are used consistently with the ASSUMED meanings, which in turn are determined according to a system of consistently formulated definitions.
Everyone who wants to communicate mathematical properties has to use the notions as they are accepted by the community, unless he/she is going to use them with a CONSISTENTLY REFORMULATED definition. In particular, it is extremely inappropriate to claim that a notion with a well-established meaning does not possess its well-known properties by referring to a wrongly chosen name.
The above rules obviously do not cancel the freedom to contest the linguistic reasons for choosing the names, or to introduce new names for known mathematical objects.
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
Issue 3.
Since your very last comments show that you are still opposing the use of names as they have been established by the community of mathematicians for more than 150 years, I do not see any reason to answer your "excursions" into whether the limit of f at infinity should or should not change its name to "customary limit of the derivative at infinity", "common-sense limit of the derivative at infinity", or any other. Agreeing to such changes without explicit public correction in respected journals or books would be to break the basic rules of communicating mathematics. I do not accept this way, since I DO NOT SEE ANY REASON TO CHANGE THE RANGE OF THE NAME "LIMIT OF f AT INFINITY"; much less can I take part in such a procedure. Moreover, in order to preserve the possibility of communicating mathematics without doubt that people know what they are saying to one another (me included), I am opposing wrong usage of well-established notions, in particular those which are clearly formulated in textbooks. And this has nothing to do with stopping the evolution of new mathematics; it is stopping the evolution of CHAOS CREATED BY INTRODUCING NEW NAMES, OR NEW RANGES OF MEANING OF EXISTING NAMES, WITHOUT RIGOROUSLY DETERMINED CONDITIONS CONSISTENT WITH THE WHOLE VOCABULARY.
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
Issue 4.
In view of the above, your statement
IB: "Yes, yes, bad habits die hard."
which WRONGLY suggests that definitions are just a matter of habit, makes very bad press with respect to the NEED OF KEEPING NOTIONS STABLE, due to their role in keeping knowledge comprehensible.
Best regards, Joachim
\end{JoaD20190512.1}
\begin{JoaD20190512.2}
In order to defend my mathematical reputation, which Behnam Farid has attempted to destroy, I am stating the following substantive remarks:
STATEMENT 1. The following proposition by BF is false:
BF: I proposed that if f(x) is regular for x > x0 (in the sense I described previously), for x0 some finite real number, and that if f(x) is of bounded variation, then f'(x) is also vanishing in the limit x → ∞.
which is based on the explanation:
BF: what I mean by regularity of f(x) for x > x0, with x0 some finite constant, is that f(x) is an arbitrary number of times differentiable over the open interval (x0,∞).
Indeed, the examples of functions f given by @K. Kassner available at
https://www.researchgate.net/profile/K_Kassner/post/If_a_functions_limit_is_zero_at_infinity_does_that_imply_its_derivative_has_same_limit_at_infinity/attachment/59d6419079197b807799d857/AS%3A435307989475328%401480796920158/download/function_der_limit_infty.pdf
(see the second distinguished answer, directly following the question of this thread) and the one given above within my answer JoaD20190509.1 possess the following properties:
P0. the limit of f at infinity equals 0;
P1. f is analytic in (0,∞), at least (thus regular in the sense assumed by BF);
P2. f is of bounded variation in (0,∞);
P3. the limit of the derivative f' of f at infinity does not exist.
STATEMENT 2.
The following proposition, formulated by me in order to rigorously confirm the validity of the suggestions by Itzhak Barkana, is trivially true:
"Proposition. If f is measurable and its limit f∞ := lim_{x→∞} f(x) is finite, then it is regularly differentiable at infinity, with Df∞ = 0."
where the derivative was explicitly defined as follows:
Df∞ := lim_{h→0} [ lim_{x→∞} [f(x+h) − f(x)]/h ].
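For completeness, a sketch of why this Proposition is trivially true, using only the finiteness of f∞: for every fixed $h \neq 0$, $\lim_{x\to\infty} [f(x+h) - f(x)]/h = (f_\infty - f_\infty)/h = 0$, and hence $Df_\infty = \lim_{h\to 0} 0 = 0$.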
This theorem was evaluated by Behnam Farid with the words:
BF: "at last after some years you have come to the same result as me"
His "proof" that this implication holds was presented as follows:
BF: Since a function of bounded variation is Lebesgue measurable, you see that the set of functions I have considered satisfy the conditions in your proposition. Therefore, the correctness of your proposition implies the correctness of mine.
Since the claims of the compared propositions are different (the limit at infinity and the regular limit at infinity are completely different notions), the main error of this "proof" lies in the following false statement:
independently of the claims, if the assumptions of theorem 1 imply the assumptions of theorem 2, then theorem 1 implies theorem 2.
CONCLUSION. The statement by BF
"at last after some years you have come to the same result as me"
is harmful because it is false [simply: it states that the correct Proposition formulated by me (as recalled within Statement 2) implies his incorrect proposition (as recalled in Statement 1)].
Joachim Domsta
\end{JoaD20190512.2}