There are many classification techniques (ANN, SVM, Bayesian, GP) for classifying data sets. Which of them works better for classifying imbalanced data sets?
The literature on imbalanced classification is huge, so it is hard to point to a single "best" technique. Moreover, standard error measures (e.g. misclassification accuracy) are poor indicators when the classes are imbalanced, and there is no consensus on a single metric for imbalanced data, which makes choosing the optimal model even harder.
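As a quick illustration of the accuracy problem, here is a minimal sketch with scikit-learn and synthetic labels (the 95/5 split and the trivial baseline are purely illustrative, not tied to any particular dataset):

```python
# Why plain accuracy misleads on imbalanced data: a classifier that always
# predicts the majority class scores 95% accuracy but never finds the minority class.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

y_true = np.array([0] * 95 + [1] * 5)   # 95/5 imbalanced labels (illustrative)
y_pred = np.zeros_like(y_true)          # "always predict majority" baseline

print(accuracy_score(y_true, y_pred))           # 0.95 -- looks good
print(balanced_accuracy_score(y_true, y_pred))  # 0.50 -- chance level
print(f1_score(y_true, y_pred))                 # 0.0  -- minority class missed
```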
As a general rule, you can (i) use standard classification approaches after properly resampling the dataset, (ii) combine resampling with the learning procedure itself, or (iii) use some form of cost-weighted learning. For example, in SVMs you can use a different regularization parameter for each class, e.g. inversely proportional to the class frequencies (a standard technique supported by many libraries); see the sketch below.
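Here is a minimal sketch of option (iii) using scikit-learn's `SVC`, whose `class_weight` argument rescales the penalty `C` per class inversely to the class frequencies (the synthetic 95/5 dataset below is purely illustrative):

```python
# Cost-weighted SVM on an imbalanced dataset using scikit-learn.
# class_weight='balanced' sets the per-class C to n_samples / (n_classes * n_i),
# i.e. the regularization is inversely proportional to class frequency.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Synthetic 95/5 imbalanced binary problem (illustrative data only).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# 'balanced' reweights the penalty per class; an explicit dict such as
# {0: 1, 1: 19} would encode the same idea by hand.
clf = SVC(kernel="rbf", C=1.0, class_weight="balanced")
clf.fit(X_train, y_train)

# Report per-class precision/recall rather than plain accuracy,
# since accuracy is misleading under class imbalance.
print(classification_report(y_test, clf.predict(X_test)))
```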
To go deeper into the subject, you can start with a few surveys and move on from there:
[1] Weiss, Gary M. "Mining with rarity: a unifying framework." ACM SIGKDD Explorations Newsletter 6.1 (2004): 7-19.
[2] Sun, Yanmin, Andrew KC Wong, and Mohamed S. Kamel. "Classification of imbalanced data: A review." International Journal of Pattern Recognition and Artificial Intelligence 23.04 (2009): 687-719.
[3] Tang, Yuchun, et al. "SVMs modeling for highly imbalanced classification." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39.1 (2009): 281-288.