Simple concept of convergence: there are many kinds of convergence, depending on the particular subject and circumstances. Here I would like to explain it in everyday words; you can then make the deduction for every other case.
Example) Consider you have a nonlinear equation to solve over some given domain. Suppose the exact solution has the numerical value 100, but it is hard to find this value directly, or doing so needs very costly computations.
Consequently, you perform some approximation, for example linearization, discretization, dimension reduction, model reduction, wavelet transforms, similarity transforms, SVD, PCA, etc., and obtain an approximate solution.
Now if the numerical answer is very near 100, or exactly 100 (which rarely happens in practice), we say the approximate solution converges to the exact solution (high accuracy). If the approximate value is far from the exact solution, e.g., your iterative scheme terminates at some value around 60 to 70, we may speak of weak convergence (poor accuracy).
Conversely, an approximation whose answer lands very close to the exact value over the given domain exhibits strong convergence.
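To make this concrete, here is a minimal sketch in Python, assuming the nonlinear equation is x^2 = 10000 (a hypothetical stand-in whose exact positive root is 100) and using Newton's method as the iterative solver; the shrinking error |x_k − 100| is what "the approximation converges" means here:

```python
# Minimal sketch: Newton's method for f(x) = x^2 - 10000, whose exact
# positive root is 100 (an assumed toy problem, not from the text).
# The iterate error |x_k - 100| shrinking toward 0 is convergence.
def f(x):
    return x * x - 10000.0

def f_prime(x):
    return 2.0 * x

x = 60.0                       # a deliberately poor starting guess
for k in range(8):
    x = x - f(x) / f_prime(x)  # Newton update
    print(f"iteration {k}: x = {x:.10f}, error = {abs(x - 100.0):.2e}")
```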
Typically, there are four main types of convergence:
1. Convergence in distribution,
2. Convergence in probability,
3. Convergence in mean,
4. Almost sure convergence.
These are all different kinds of convergence. A sequence might converge in one sense but not another.
1. Convergence in distribution is in some sense the weakest type of convergence. All it says is that the cumulative distribution function (CDF) of the Xn converges to the CDF of X, at every point where the CDF of X is continuous, as n goes to infinity. It does not require any dependence between the Xn and X.
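As a small simulation of this, suppose (purely for illustration) that Xn is the standardized mean of n Uniform(0,1) draws, the setting of the central limit theorem; its empirical CDF should then approach the standard normal CDF Φ as n grows:

```python
# Sketch: convergence in distribution via the CLT (assumed example).
# Xn = standardized mean of n Uniform(0,1) variables; its empirical CDF
# should approach the standard normal CDF as n grows.
import math
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma = 0.5, math.sqrt(1.0 / 12.0)    # mean/std of Uniform(0,1)
for n in (1, 5, 50, 500):
    u = rng.random((20000, n))            # 20000 samples of X1..Xn
    xn = (u.mean(axis=1) - mu) / (sigma / math.sqrt(n))
    # compare the empirical CDF with Phi on a grid; report the largest gap
    grid = np.linspace(-3, 3, 61)
    ecdf = np.array([(xn <= t).mean() for t in grid])
    gap = max(abs(e - phi(t)) for e, t in zip(ecdf, grid))
    print(f"n = {n:4d}: max |empirical CDF - Phi| = {gap:.4f}")
```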
2. Convergence in probability is stronger than convergence in distribution. For a sequence X1, X2, X3, … to converge in probability to a random variable X, we must have P(|Xn − X| ≥ ϵ) → 0 as n → ∞ for every ϵ > 0.
The most famous example of convergence in probability is the weak law of large numbers (WLLN). The WLLN states that if X1, X2, X3, … are independent, identically distributed random variables with finite mean μ, then the sample mean (X1 + X2 + ⋯ + Xn)/n converges in probability to μ.
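A quick Monte Carlo sketch of the WLLN, assuming (for illustration) i.i.d. Exponential(1) variables with μ = 1 and ϵ = 0.1; the estimated probability P(|X̄n − μ| ≥ ϵ) should shrink toward 0 as n grows:

```python
# Sketch: weak law of large numbers. For i.i.d. Exponential(1) variables
# (mu = 1, an assumed example), estimate P(|sample mean - mu| >= eps) by
# Monte Carlo; it shrinking toward 0 is convergence in probability.
import numpy as np

rng = np.random.default_rng(1)
mu, eps, trials = 1.0, 0.1, 1000

for n in (10, 100, 1000, 10000):
    means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
    p_dev = (np.abs(means - mu) >= eps).mean()
    print(f"n = {n:5d}: estimated P(|mean - mu| >= {eps}) = {p_dev:.4f}")
```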
3. Let r ≥ 1 be a fixed number. A sequence of random variables X1, X2, X3, … converges in the r-th mean to a random variable X if
E(|Xn − X|^r) → 0 as n → ∞.
If r = 2, it is called mean-square convergence. Sometimes the case r = 1 is referred to as convergence in mean.
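A short sketch of mean-square convergence of the sample mean, assuming i.i.d. Uniform(0,1) variables (μ = 0.5, variance 1/12, an illustrative choice); since E(|X̄n − μ|^2) equals var/n exactly here, the Monte Carlo estimate should track 1/n:

```python
# Sketch: mean-square (r = 2) convergence of the sample mean.
# For i.i.d. Uniform(0,1) (mu = 0.5, var = 1/12), E(|mean - mu|^2)
# equals var/n, so the Monte Carlo estimate should shrink like 1/n.
import numpy as np

rng = np.random.default_rng(2)
mu, var, trials = 0.5, 1.0 / 12.0, 2000

for n in (10, 100, 1000):
    means = rng.random((trials, n)).mean(axis=1)
    mse = np.mean((means - mu) ** 2)
    print(f"n = {n:4d}: E(|Xn - mu|^2) ~ {mse:.6f}  (theory: {var / n:.6f})")
```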
4. Almost sure convergence is the strongest of the four types. The classic example is the strong law of large numbers (SLLN): let X1, X2, X3, …, Xn be independent, identically distributed random variables with finite expected value E(Xi) = μ. Then the sample mean (X1 + ⋯ + Xn)/n converges almost surely to μ, i.e., P((X1 + ⋯ + Xn)/n → μ as n → ∞) = 1.
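To see the "almost sure" flavor, one can follow a single sample path rather than averaging over many; a minimal sketch, assuming i.i.d. Exponential(1) draws with μ = 1, watches one trajectory of running means settle at μ:

```python
# Sketch: strong law of large numbers. Follow ONE sample path of running
# means of i.i.d. Exponential(1) draws (mu = 1, an assumed example);
# almost sure convergence says this individual trajectory settles at mu,
# not just that deviations become improbable "on average".
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=100000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (10, 100, 1000, 10000, 100000):
    print(f"n = {n:6d}: running mean = {running_mean[n - 1]:.5f}")
```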
In statistics we speak of an estimate or an estimator; the latter is often formulated as an average. The quality of an estimator is expressed by its convergence, its bias, its efficiency, and its robustness. An estimator is consistent if it converges in probability to the true value of the parameter: the probability of deviating from the value to be estimated by more than a given error tends to 0 as the sample size increases. There is also a stronger notion, almost sure convergence.
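As an illustrative sketch (the distribution and parameter values are assumptions, not from the text), the sample-variance estimator that divides by n is biased for finite n yet still consistent, showing that bias and convergence are separate qualities:

```python
# Sketch: a biased yet consistent estimator. Dividing by n (instead of
# n - 1) makes the sample-variance estimator biased for finite n, but it
# still converges in probability to the true variance as n grows.
import numpy as np

rng = np.random.default_rng(4)
true_var, trials, eps = 4.0, 2000, 0.5    # Normal(0, sd = 2), variance 4

for n in (5, 50, 500, 5000):
    samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))
    v_hat = samples.var(axis=1, ddof=0)   # divide by n: the biased estimator
    bias = v_hat.mean() - true_var        # roughly -true_var / n
    p_dev = (np.abs(v_hat - true_var) >= eps).mean()
    print(f"n = {n:4d}: bias ~ {bias:+.4f}, "
          f"P(|v_hat - var| >= {eps}) = {p_dev:.4f}")
```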
For more details and information on this subject, I suggest you see the links and the attached file on the topic.