What is the main implication of the standard error of the mean in statistically analyzed results? I wish to know its importance and whether it could be used to determine the validity of a data set.
Some people consider it to represent the 'precision' of an estimate of some parameter (in this case, the mean). You could also think of it as representing how much that estimate would vary if you kept collecting samples of that same size. It also plays a specific role in some parametric statistics (e.g., it's the denominator of a t statistic; an effect is often considered significant if it's roughly twice as large as its standard error).
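Here is a minimal simulation sketch of that "precision" reading (assuming NumPy and made-up parameter values): the spread of sample means across repeated samples of the same size is approximately sigma / sqrt(n), which is what the SEM estimated from a single sample is trying to approximate.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 10.0, 3.0, 25, 10_000  # hypothetical values

    # Draw many samples of size n and record each sample mean
    sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

    # Empirical spread of the sample means vs. the theoretical value
    print("SD of sample means:", sample_means.std(ddof=1))  # close to 0.6
    print("sigma / sqrt(n)   :", sigma / np.sqrt(n))         # 0.6

    # SEM estimated from a single sample (what you'd actually report)
    one_sample = rng.normal(mu, sigma, size=n)
    sem = one_sample.std(ddof=1) / np.sqrt(n)

    # t statistic for H0: mu = 9 -- "significant" roughly when |t| > 2
    t = (one_sample.mean() - 9.0) / sem
    print("estimated SEM     :", sem)
    print("t statistic       :", t)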
Anyway, to be honest, I don't find the standard error particularly useful in and of itself. The meaning of parametric descriptives like the SD and SE is not very intuitive (compared to something like an IQR, which is very easy to understand), and just knowing how big the SE is doesn't tell you whether it's big because of a large variance or a small sample size. Of course I still "use" the SE as part of the input to some statistical tests (like a t-test), but I never care much about looking at (or plotting) it for its own sake.
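A quick illustration of that ambiguity with hypothetical numbers: since SEM = SD / sqrt(n), very different data sets can produce the same SEM.

    import math

    # A noisy small sample and a tight large sample give the same SEM
    for sd, n in [(2.0, 16), (10.0, 400)]:
        print(f"SD = {sd:5.1f}, n = {n:4d}  ->  SEM = {sd / math.sqrt(n):.2f}")
    # Both lines print SEM = 0.50, so the SEM alone doesn't tell you which case you're in.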
The standard error is tied to the curvature of the (log) likelihood function at its maximum: it is the reciprocal of the square root of the observed information (the negative second derivative there). It can also serve as an estimate of the standard deviation of the statistic across "similar experiments".
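A sketch of that curvature reading for the simplest case (normal model with known sigma, hypothetical data): numerically differentiating the log-likelihood at its maximum recovers sigma / sqrt(n), the usual SEM.

    import numpy as np

    rng = np.random.default_rng(1)
    sigma, n = 3.0, 25            # assumed known sigma, made-up sample size
    x = rng.normal(10.0, sigma, size=n)

    def loglik(mu):
        # Normal log-likelihood in mu, up to a constant that doesn't affect curvature
        return -0.5 * np.sum((x - mu) ** 2) / sigma**2

    mu_hat = x.mean()             # MLE of mu
    h = 1e-4
    # Numerical second derivative of the log-likelihood at its maximum
    curvature = (loglik(mu_hat + h) - 2 * loglik(mu_hat) + loglik(mu_hat - h)) / h**2

    print("1 / sqrt(-curvature):", 1.0 / np.sqrt(-curvature))  # matches sigma / sqrt(n)
    print("sigma / sqrt(n)     :", sigma / np.sqrt(n))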
@Chalamala: the first source you cited (biochemia-medica.com) is quite wrong.