Actually, I am improving one of my algorithms for identifying (locating) a linear fit; it has a very low error rate, though it has not yet been formally evaluated. I am mainly focusing on the most popular methods, such as the Least Squares Method (LSM), the sigma filter, and some angle-based methods. All of the methods considered are strongly affected by outliers. That is why I put my question so openly.
As rightly pointed out by Peter, terms such as "best", "better", and "error rate" do not mean much on their own. Different algorithms will yield different answers, and the appropriate one is not determined simply by what you want to do: you first need to consider the type of data, as well as the source and nature of the error in those data.
Moreover, "removing outliers" before fitting sounds quite arbitrary. My guess is that you have been using a least-square method and that the resulting solution has been shifted towards some spurious data points, so you have removed them before applying again this method. Perhaps, you can try using instead a method that minimizes the L1 norm as it is much less affected by the presence of spurious points if they are located on the same side of the main linear trend.
KK Lasantha - Hello. - H.E. noted that you could try other methods, and you might want to experiment. If/when you use least squares, however, you should test for a coefficient of heteroscedasticity. Error structure is important, and assuming equal regression weights is often one of the worst choices available. The data, however, can tell you - approximately - what its coefficient of heteroscedasticity looks like for your sample. The link below gives a graphical explanation. It may, however, be more robust against measurement error in future samples to understate the coefficient of heteroscedasticity. - Thanks - Jim
Conference Paper: Alternative to the Iterated Reweighted Least Squares Method ...
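To make the heteroscedasticity point concrete, here is a hedged Python sketch (my own illustration, not the method from the linked paper): it assumes the residual standard deviation grows like x**gamma for x > 0, estimates gamma from a first-pass ordinary least-squares fit, and then refits with weights 1/x**gamma (numpy's polyfit takes w = 1/sigma). The function name wls_line_fit and the power-of-x variance model are assumptions made for this example.

```python
import numpy as np

def wls_line_fit(x, y, cap=1.0):
    """Two-pass weighted least squares, assuming residual sd ~ x**gamma
    (requires x > 0); gamma stands in for a coefficient of
    heteroscedasticity estimated from the sample itself."""
    a, b = np.polyfit(x, y, 1)                # OLS first pass
    r = y - (a * x + b)
    # Slope of log|residual| versus log(x) estimates gamma.
    gamma = np.polyfit(np.log(x), np.log(np.abs(r) + 1e-12), 1)[0]
    gamma = min(gamma, cap)                   # understate gamma, per the advice above
    # numpy's polyfit expects w = 1/sigma, and sigma_i ~ x_i**gamma here.
    return np.polyfit(x, y, 1, w=x ** (-gamma)), gamma
```

Capping gamma rather than trusting the raw estimate reflects the suggestion above that understating the coefficient of heteroscedasticity can be more robust against measurement error in future sampling.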