Your question itself is not specific enough. As Joseph Alexander Brown mentioned, how do you measure students' behaviour? From my understanding, behaviour itself is not directly measurable; it is derived from a set of observed measures by conducting factor analysis, similar to how IQ scores were derived to assess human intelligence.
Then there is the question of the "best" algorithm for a classification task. There is no algorithm that fits all problems, so it depends on the problem you are trying to solve. You may need to test and compare several classifiers before you can claim that classifier X is "best" at solving a particular type of problem under Y assumption(s).
1. Is it a binary classification problem or multi-class, i.e. how many behaviours do you want to consider? Only in one context, or in different contexts?
2. Types of features / feature selection.
For example, if you want to classify students into two classes, 'technical enthusiast' and 'techno-reluctant', almost any statistical classification method will work, since most of the prominent features would be numerical or convertible to numerical values.
Hence I advise you to formulate your problem precisely; then this forum will be able to give you better suggestions.
What does your data set look like? How are you measuring the behaviours?
k-NN works well given vectors of points in an N-dimensional continuous space. It makes the assumption that similar vectors have similar classifications. If that holds for your data, then you can attempt it.
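To make the assumption concrete, here is a minimal k-NN sketch in pure Python. The feature vectors, class labels, and query points are all hypothetical toy data, invented purely for illustration; in practice you would use measured behaviour features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance in continuous space)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D behaviour features forming two separated clusters.
train = [
    ((1.0, 1.0), "enthusiast"), ((1.2, 0.8), "enthusiast"),
    ((0.9, 1.1), "enthusiast"),
    ((5.0, 5.0), "reluctant"), ((5.2, 4.8), "reluctant"),
    ((4.9, 5.1), "reluctant"),
]

print(knn_predict(train, (1.1, 1.0)))  # near the first cluster -> "enthusiast"
print(knn_predict(train, (5.1, 5.0)))  # near the second cluster -> "reluctant"
```

Note how the method relies entirely on the "similar vectors, similar classes" assumption: if the clusters overlapped, the majority vote would become unreliable.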
Otherwise, I agree that we need more information about the problem in order to give suggestions on methods.
For both two-class and multi-class classification I have generally found Random Forest by Breiman to be the best. Simple, better than k-NN, and easy to use is my IINC classifier. In both cases it does not matter whether the features are continuous, binary, or ordinal. If you need software, contact me.
I tried the Random Forest approach recommended by Marcel Jirina, and it is the best compared to other algorithms in general. But, as I always say, there is no best classifier: even Random Forest sometimes does not give the best performance on certain problems or data sets, which is consistent with the no-free-lunch theorem.
As I mentioned earlier, you can't claim SVM is the "best" without proof. How do you prove it? By comparing SVM with other algorithms in your evaluation. For example, you may claim SVM is "best" at classifying students' behaviour if it achieves the highest classification rate among several classifiers on your data.
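The comparison itself can be very simple: fit each candidate on the same training split and compare held-out accuracy. The sketch below compares a nearest-centroid classifier against a majority-class baseline on invented toy features (the feature names and numbers are assumptions for illustration only); you would substitute your real classifiers and data.

```python
import math
from collections import Counter

def nearest_centroid(train_X, train_y, x):
    """Assign x to the class whose feature centroid is closest."""
    centroids = {}
    for label in set(train_y):
        pts = [p for p, l in zip(train_X, train_y) if l == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return min(centroids, key=lambda l: math.dist(centroids[l], x))

def majority_class(train_y, x):
    """Baseline: always predict the most frequent training class."""
    return Counter(train_y).most_common(1)[0][0]

def accuracy(predict, test_X, test_y):
    """Fraction of held-out examples classified correctly."""
    return sum(predict(x) == y for x, y in zip(test_X, test_y)) / len(test_y)

# Hypothetical features, e.g. (posts_per_week, avg_session_hours).
train_X = [(10, 2.0), (12, 2.5), (11, 2.2), (9, 1.8),
           (2, 0.5), (1, 0.4), (3, 0.6)]
train_y = ["active", "active", "active", "active",
           "passive", "passive", "passive"]
test_X = [(9, 1.9), (2, 0.5)]
test_y = ["active", "passive"]

for name, predict in [
    ("nearest centroid", lambda x: nearest_centroid(train_X, train_y, x)),
    ("majority baseline", lambda x: majority_class(train_y, x)),
]:
    print(name, accuracy(predict, test_X, test_y))
```

Only after such a comparison, ideally with cross-validation rather than a single split, can you defensibly say one classifier is "best" for your particular problem.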
By "sharing information on social networking", I assume you are referring to sharing images, text, files, geotags, etc.?
The following are a few sample publications (I think similar to what you are doing, but without the classification step?):
Dear Khaled, You may find the following IEEE publication useful:
S. Vongsingthong (2014). Classification of university students' behaviors in sharing information on Facebook. 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), 2014. DOI: 10.1109/JCSSE.2014.6841856.
The authors applied four classification algorithms, namely k-NN, Decision Tree, Naïve Bayes, and SVM, to explore the opportunity of target products. According to their findings, SVM outperformed the others with an accuracy of 88%, owing to its distinctive ability to handle imbalanced data. Besides these classifiers you can also use an ANN, for sure!
The Naive Bayesian classifier is based on Bayes' theorem with independence assumptions between predictors. A Naive Bayesian model is easy to build, with no complicated iterative parameter estimation, which makes it particularly useful for very large datasets. Despite its simplicity, the Naive Bayesian classifier often does surprisingly well and is widely used, as it can outperform far more sophisticated classification methods.
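The "no complicated parameter estimation" point can be seen in a small sketch: training is just counting. Below is a categorical Naive Bayes on hypothetical binary sharing features (the feature names and data are assumptions for illustration); each feature is assumed to take one of two values, and add-one smoothing avoids zero probabilities.

```python
import math
from collections import Counter, defaultdict

def train_nb(X, y):
    """Fit a categorical Naive Bayes model: class priors plus
    per-class, per-feature value counts. Training is pure counting."""
    priors = Counter(y)
    counts = defaultdict(Counter)  # (class, feature index) -> value counts
    for features, label in zip(X, y):
        for i, v in enumerate(features):
            counts[(label, i)][v] += 1
    return priors, counts

def predict_nb(model, x):
    """Pick the class maximizing log prior + sum of log likelihoods,
    treating features as independent given the class."""
    priors, counts = model
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, n in priors.items():
        lp = math.log(n / total)
        for i, v in enumerate(x):
            # Add-one (Laplace) smoothing; assumes 2 values per feature.
            lp += math.log((counts[(label, i)][v] + 1) / (n + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical binary features: (shares_photos, posts_links, uses_geotags).
X = [(1, 1, 0), (1, 0, 1), (1, 1, 1), (0, 0, 0), (0, 1, 0), (0, 0, 1)]
y = ["sharer", "sharer", "sharer", "lurker", "lurker", "lurker"]

model = train_nb(X, y)
print(predict_nb(model, (1, 1, 1)))  # -> "sharer"
print(predict_nb(model, (0, 0, 0)))  # -> "lurker"
```

Because fitting is a single pass of counting, the model scales linearly with data size, which is exactly why it suits very large datasets despite the strong independence assumption.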