When I reduced the dataset by two values, the correlation dropped to 0.2, and when I increased the dataset, the correlation rose to 0.8. I am confused about why this occurs; kindly explain.
Assume you start with a data set with low variance and low correlation. If you then introduce two outliers at both ends of the set, you also introduce a lot of variance, and because the two extreme points effectively anchor the fitted line, most of that new variance is variance the fit can explain. So the ratio of explained variance to total variance goes up, and this ratio is what is reported as R^2. To avoid this effect you should use Spearman's correlation instead of Pearson's. You might also observe the effect you've seen with small data sets; unfortunately, even Spearman does not help you there.
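A minimal sketch of the effect, using made-up data with NumPy/SciPy (the specific numbers are just an illustration, not your data): an essentially uncorrelated cloud gets two extreme points appended at both ends, which inflates Pearson's r much more than Spearman's rho.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Low-variance, essentially uncorrelated core data
x = rng.normal(0, 1, 30)
y = rng.normal(0, 1, 30)
r, _ = stats.pearsonr(x, y)
rho, _ = stats.spearmanr(x, y)
print(f"core data:     pearson={r:.2f}  spearman={rho:.2f}")

# Append one extreme point at each end of the range
x2 = np.append(x, [-10, 10])
y2 = np.append(y, [-10, 10])
r2, _ = stats.pearsonr(x2, y2)
rho2, _ = stats.spearmanr(x2, y2)
print(f"with outliers: pearson={r2:.2f}  spearman={rho2:.2f}")
```

Pearson's r jumps because the two outliers dominate the variance; Spearman only sees them as two more ranks among many, so it moves far less.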
No, the increase in correlation after adding additional data is an artefact that you want to avoid. Using a rank-based correlation measure such as Spearman's rho helps you avoid that artificial increase.
Additionally, to get a better feeling for the robustness of your correlation measure with respect to your specific data set, it might help to bootstrap the data set and obtain a distribution of correlation estimates. Ideally this distribution should be narrow. But that is unlikely for your data set, since you have already seen two values (0.2 and 0.8) that span quite a range.
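A rough sketch of such a bootstrap (again with hypothetical `x` and `y` arrays standing in for your data): resample the (x, y) pairs with replacement many times, recompute the correlation each time, and look at the spread of the estimates.

```python
import numpy as np
from scipy import stats

def bootstrap_correlation(x, y, n_boot=2000, seed=0):
    """Return bootstrap replicates of Spearman's rho for paired data."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample pairs with replacement
        rho, _ = stats.spearmanr(x[idx], y[idx])
        reps[b] = rho
    return reps

# Example with made-up data: a weak linear relationship plus noise
rng = np.random.default_rng(1)
x = rng.normal(size=25)
y = 0.5 * x + rng.normal(size=25)

reps = bootstrap_correlation(x, y)
print("2.5%, 50%, 97.5% percentiles:", np.percentile(reps, [2.5, 50, 97.5]))
```

A wide percentile interval tells you the correlation estimate is not stable for your sample size, which is exactly the situation the 0.2-vs-0.8 swing suggests.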