It depends on the problem to be solved: creating patterns, models, relationships, clustering, classification...
But in computer science, I think the most popular and well-researched methods in industry are association rules (unsupervised learning) and decision trees (supervised learning).
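To make the association-rules side concrete, here is a minimal pure-Python sketch of its core step, counting frequent item pairs across transactions. The transaction data and the support threshold are made-up illustrations, and a full Apriori implementation would prune candidates level by level rather than enumerate all pairs.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count item pairs that occur in at least min_support transactions."""
    counts = Counter()
    for t in transactions:
        # Sort so each pair has one canonical order, e.g. ("bread", "milk").
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Hypothetical market-basket data:
transactions = [["milk", "bread"], ["milk", "bread", "eggs"], ["bread", "eggs"]]
print(frequent_pairs(transactions, min_support=2))
```

Frequent pairs like these are then turned into rules ("bread implies milk") by comparing pair counts with single-item counts.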
In my view, all analyses, model building, and data mining methods live and die with the data preparation. If there is a 5% error rate in the data, how would you ever determine the amount of error? The standard data-profiling tools in commercial software will neither determine the amount of error nor effectively assist in cleaning up the data. And if there is 5% error in the data, are there any analyses that you can do with it at all?
The issues with data-quality/data-preparation:
1. You need to get information/data into computer files accurately.
2. You need to find and correct for duplicates within and across files effectively.
3. You need to fill in missing values in a principled manner that preserves joint distributions and ensures that edit (business-rule) constraints (e.g., no child under 16 is married) are satisfied.
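As a toy illustration of issue 3, the sketch below imputes a missing value only when the result satisfies an edit rule. The field names, the fallback value, and the rule itself are hypothetical, and a real Fellegi-Holt system does far more (it finds the minimal set of fields to change and preserves joint distributions); this only shows the shape of the idea.

```python
from collections import Counter

def satisfies_edits(record):
    """One edit rule from the text: no child under 16 is married."""
    if record["age"] is not None and record["age"] < 16:
        return record["marital_status"] != "married"
    return True

def impute_marital_status(records):
    """Fill missing marital_status with the most common observed value,
    falling back to 'single' when the mode would violate an edit rule."""
    observed = [r["marital_status"] for r in records if r["marital_status"]]
    mode = Counter(observed).most_common(1)[0][0]
    for r in records:
        if r["marital_status"] is None:
            candidate = dict(r, marital_status=mode)
            r["marital_status"] = mode if satisfies_edits(candidate) else "single"
    return records
```

Note how a naive mode imputation would mark a 12-year-old as married; the edit check catches that before the value is written.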
There are very good books by Redman, English, Loshin, and others on data quality that deal with issue 1 effectively. Redman (formerly of Bell Labs) in his book actually mentions and describes the Fellegi-Sunter model of record linkage (J. Amer. Stat. Assn. 1969) and the Fellegi-Holt model of statistical data editing (JASA 1976), which at a top level address issues 2 and 3, respectively.
If you have duplicates (or lack of coverage) in your list of entities, then your entire analysis can be seriously compromised (likely destroyed) a priori.
Here is an example of two records with name and address information.
Name                     Address
Dr. John H. Smith, M.D.  123 East Maple Avenue, S.W.
J. Harry Smoth           123 E. Mapel Ave, Southwest
How do you break up and compare effectively the components of the names and addresses with some suitable scoring mechanism?
How do you compare 'Smith' and 'Smoth' and 'Maple' with 'Mapel' that effectively deals with the typographical error?
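One simple, dependency-free way to get a typo-tolerant comparison is a normalized similarity score. The sketch below uses Python's standard-library difflib; production record-linkage systems more often use comparators such as Jaro-Winkler, but the idea is the same.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a similarity score in [0, 1]; 1.0 means an exact match
    (ignoring case)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(similarity("Smith", "Smoth"))  # high: single-character typo
print(similarity("Maple", "Mapel"))  # high: transposition
```

A threshold (say, 0.8) then decides whether two components "agree" for the purposes of a Fellegi-Sunter-style scoring mechanism.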
How do you standardize the components (i.e., give consistent spellings)? You might change 'S.W.' and 'Southwest' to 'SW', change 'East' and 'E.' to 'E', and put the output components into locations that can be easily compared.
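A minimal version of that standardization step is a token-by-token lookup table. The table below is only an illustrative fragment covering the example above; real standardizers (e.g., postal-address parsers) carry much larger dictionaries and handle component parsing as well.

```python
# Illustrative fragment of a standardization table, not a complete standard.
STANDARD_FORMS = {
    "s.w.": "SW", "southwest": "SW",
    "e.": "E", "east": "E",
    "ave": "AVE", "ave.": "AVE", "avenue": "AVE",
}

def standardize(address):
    """Map each token to its standard form; pass unknown tokens through."""
    tokens = address.replace(",", " ").split()
    return " ".join(STANDARD_FORMS.get(t.lower(), t.upper()) for t in tokens)

print(standardize("123 East Maple Avenue, S.W."))  # 123 E MAPLE AVE SW
print(standardize("123 E. Mapel Ave, Southwest"))  # 123 E MAPEL AVE SW
```

After standardization the two addresses differ only in the 'Maple'/'Mapel' typo, which is exactly what the fuzzy string comparison is there to absorb.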
Have a look at KNIME (https://www.knime.org/) and the workflows it provides for, e.g., marketing. The software is free and supports virtually all the data mining tasks anyone needs.