Can you give more details about your text classification task? What do you want to achieve, which dataset are you using, and how many features do you have? If you're just getting started, Naive Bayes (MultinomialNB) and SGDClassifier are the best place to begin. For deep learning, an LSTM would be my choice any day. Either way, you have to do a lot of cleansing, hyperparameter tuning, and pipeline building.
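To get off the ground quickly, something like the following works as a first baseline. It is only a sketch with scikit-learn; the toy documents, labels, and test sentences are placeholders standing in for your own data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy data standing in for your documents and labels.
docs = ["cheap flights to paris", "election results tonight",
        "hotel deals this weekend", "parliament votes on budget"]
labels = ["travel", "politics", "travel", "politics"]

# TF-IDF features feeding a Multinomial Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["budget vote delayed", "flight deals to rome"]))
```

If that baseline already gives acceptable accuracy, you may not need anything heavier.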
Deep learning is essentially ANNs with many more hidden layers. More hidden layers mean the processing requirements grow very quickly. Deep learning only became practical with the introduction of GPU computing, even though ANNs have been around for more than 60 years.
For your text classification case, you don't need deep learning unless it is warranted. If you share the scenario, one can suggest a suitable method.
One example that, to my knowledge, does demand deep learning for text classification is the news classification service run by broadcast channels.
Assuming you have a classification target, set up a sequence model (an LSTM would be ideal) and also apply WordNetLemmatizer and CountVectorizer together with TF-IDF. Once the model is set up, try hyperparameter tuning; see the example below. It is very compute-intensive, so try it with a small subset of the data and tune one parameter at a time.
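A minimal sketch of what such a grid search could look like with scikit-learn; the pipeline, parameter grid, and toy data here are illustrative assumptions, not your actual setup.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder documents and labels; replace with your own data.
texts = ["good service", "terrible experience", "loved it", "awful product"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("vect", CountVectorizer()),    # bag-of-words counts
    ("tfidf", TfidfTransformer()),  # re-weight counts with TF-IDF
    ("clf", SGDClassifier()),       # linear classifier trained with SGD
])

# Keep the grid small: every extra value multiplies the number of fits.
param_grid = {
    "vect__ngram_range": [(1, 1), (1, 2)],
    "clf__alpha": [1e-4, 1e-3],
}

search = GridSearchCV(pipeline, param_grid, cv=2, n_jobs=-1)
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```

The same tuning pattern applies to a Keras LSTM, only with epochs, units, and dropout as the parameters, and with far longer run times per fit.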
From my experience with text mining, the pre-processing steps are at least as important as choosing an appropriate DL scheme. Depending on what you are mining for, a simple bag-of-words approach might be sufficient to reach high classification accuracy; e.g., a bag-of-words vector can be used as the input to a ConvNet. More sophisticated approaches such as TF-IDF weighting or better tokenization might yield better results for some classification tasks. If you have a large body of documents, you might consider clustering as a preliminary step to improve performance. Especially with DL methods, not only the predictive power matters when evaluating an approach, but also the computation time and how easy the output is to interpret.
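As a rough illustration of the clustering idea (assuming a scikit-learn setup; the corpus and the cluster count are made up), you could group documents on their TF-IDF vectors before classifying each group:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical corpus; in practice this would be your document collection.
docs = ["stock markets rally", "team wins championship",
        "central bank cuts rates", "striker scores twice"]

# Bag-of-words with TF-IDF weighting as the document representation.
X = TfidfVectorizer().fit_transform(docs)

# Group similar documents first; each cluster can then be routed to its own
# classifier instead of training one model on the whole heterogeneous corpus.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)
```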