I have achieved 93% accuracy using RoBERTa on the Ptacek et al. (2014) dataset, but only 69% on SemEval-2018 Task 3A, likely because that dataset is much smaller than Ptacek et al.'s. I would like to see whether better accuracy is possible. Mozafari et al. (2019) proposed adding a CNN on top of the BERT encoder and reported good accuracy on hate speech detection. My question is: do you have any suggestions for text classification with transformers, so that I can propose something novel and improve accuracy at the same time?
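To illustrate the idea I am referring to, here is a minimal PyTorch sketch of a CNN classification head of the kind Mozafari et al. (2019) describe: multiple convolutional filters slide over the transformer's per-token hidden states, followed by max-pooling and a linear layer, instead of classifying from the [CLS] vector alone. The hidden size (768), filter counts, and kernel sizes are illustrative assumptions, not the paper's exact configuration; a random tensor stands in for the RoBERTa encoder output.

```python
import torch
import torch.nn as nn


class CNNClassifierHead(nn.Module):
    """CNN head over transformer hidden states (sketch of the
    Mozafari et al. 2019 idea; hyperparameters are assumptions)."""

    def __init__(self, hidden_size=768, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2, dropout=0.1):
        super().__init__()
        # One Conv1d per kernel size, convolving along the token axis
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden_size, num_filters, k) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size), e.g. the last
        # layer of RoBERTa. Conv1d expects channels first.
        x = hidden_states.transpose(1, 2)  # (batch, hidden_size, seq_len)
        # ReLU + max-pool over the sequence dimension for each filter bank
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        feats = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(feats)


# Stand-in for RoBERTa output: 4 sequences of 32 tokens, 768-dim states
dummy_hidden = torch.randn(4, 32, 768)
logits = CNNClassifierHead()(dummy_hidden)
print(logits.shape)  # one logit vector per sequence, shape (4, 2)
```

In practice the dummy tensor would be replaced by `model(input_ids, attention_mask).last_hidden_state` from a Hugging Face RoBERTa model, and the head trained jointly with (or on top of a frozen) encoder.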

Any suggestions would be appreciated.
