I would answer yes, but from the perspective of generative models and the ways of exploiting them.
In particular, in sentiment analysis and emotion extraction, LLMs fine-tuned with prompt-tuning or the chain-of-thought concept perform better than, or as well as, BERT-based classifiers.
So as not to be unfounded, I can point you to this study 📃:
https://arxiv.org/abs/2305.11255
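To give a rough feel for the idea in that paper, here is a minimal sketch of multi-hop chain-of-thought prompting for sentiment; the backbone model, prompt wording, example sentence, and label set are my illustrative assumptions, not the authors' exact setup:

```python
# Minimal sketch of multi-hop chain-of-thought prompting for sentiment.
# The backbone, prompts, and labels are illustrative assumptions only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def ask(prompt: str) -> str:
    """Send one prompt to the backbone model and decode its answer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

text = "The new phone survived a two-metre drop without a scratch."

# Hop 1: ask which aspect the sentence is about.
aspect = ask(f"Given the sentence '{text}', which aspect is being discussed?")

# Hop 2: ask for the implicit opinion towards that aspect.
opinion = ask(f"Given the sentence '{text}', what is the implicit opinion "
              f"towards the aspect '{aspect}'?")

# Hop 3: map the accumulated reasoning onto a final sentiment label.
polarity = ask(f"Given the sentence '{text}', the aspect '{aspect}' and the "
               f"opinion '{opinion}', what is the sentiment polarity: "
               f"positive, negative or neutral?")
print(polarity)
```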
I can also point you to our reproductions and broader investigations of LLMs from the reasoning perspective in these papers 📃:
Nicolay Rusnachenko Thank you for the suggested studies; I will review them, as I would like to find a suitable text-analysis technique to improve classification accuracy.
Nice, take your time with them 👌 Text classification is exactly where these techniques found their application in these studies. Moreover, the prompts behind them can be adapted to your specific tasks as well as to your set of classification labels.
I advise you to take a look at these two papers, in which the authors show that a simple neural network can outperform BERT and other advanced graph neural networks. The code is included. The point is: why do we need complex models when we can achieve our goals using simple ones? (A minimal sketch of the idea follows the article links below.)
Article BoW-based neural networks vs. cutting-edge models for single...
Article On the Integration of Similarity Measures with Machine Learn...
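To illustrate what "simple" means here, this is a minimal bag-of-words classifier sketch (assuming scikit-learn; the papers above use their own architectures, data, and hyper-parameters):

```python
# Minimal sketch: bag-of-words features feeding a small feed-forward
# network. Data and hyper-parameters are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = ["great plot and acting", "dull and predictable",
         "loved every minute of it", "a waste of time"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(
    TfidfVectorizer(),  # bag-of-words features, TF-IDF weighted
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),  # one hidden layer
)
clf.fit(texts, labels)
print(clf.predict(["a predictable waste"]))
```

Even a pipeline this small makes a sensible baseline to measure against before reaching for BERT-scale models.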
Iman Q. Abduljaleel, yes, for all the studies mentioned the code is available on GitHub and can be launched in Google Colab.
I believe the most accessible for your case, which is text classification, is this framework (I recommend taking a skim through the README.md first):
It provides opportunities for launching zero-shot learning (ZSL) with most transformer models (this is how you can prompt for the class label by asking pre-trained models), and fine-tuning code for Flan-T5 in two modes:
Prompt fine-tuning
Chain-of-Thought tuning.
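For a quick feel of the ZSL mode, here is a minimal sketch of prompting a pre-trained Flan-T5 for a class label directly; the prompt template is my own illustration, not necessarily the one used in the framework:

```python
# Minimal sketch of zero-shot label prompting: ask a pre-trained model
# to pick a class label straight from the prompt. Template is assumed.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

labels = ["positive", "negative", "neutral"]
text = "The staff were friendly, but the room was freezing."

prompt = (f"Classify the sentiment of the following text. "
          f"Choose one of: {', '.join(labels)}.\nText: {text}\nLabel:")
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```

Swapping out the labels list is how the same prompt adapts to your own set of classification labels.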
A similar idea (only with fine-tuning) is implemented in another repository, aimed at the emotion prediction problem:
https://github.com/nicolay-r/THOR-ECAC
And I am happy to help in case of any questions, don't hesitate to ask!