The performance of an algorithm (deep learning or any other) has absolutely nothing to do with the format of the data. If you use a toolbox, you just have to make sure that the toolbox can read CSV or text data. Most toolboxes do, of course, but if yours does not, it is your job to put your data into the appropriate format.
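To illustrate, here is a minimal sketch (in Python, assuming purely numeric columns) of reading a CSV or text file into an array that virtually any toolbox can then consume; the function name and delimiter handling are my own, not tied to any particular toolbox:

```python
# Minimal sketch: load a numeric CSV or whitespace-delimited text file
# into a 2-D NumPy array. Assumes every column is numeric.
import numpy as np

def load_numeric_table(path, delimiter=","):
    """Read a numeric table from a text/CSV file into a 2-D array.

    Pass delimiter=None for whitespace-separated .txt files.
    """
    return np.loadtxt(path, delimiter=delimiter)
```

From there, converting to whatever structure a given toolbox expects is usually a one-liner.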
It is almost impossible to suggest an ML model based on the name of a dataset. The performance of the model depends on the data you are using. CSV and txt are merely file formats. Try to explain what is in the dataset.
A DNN might or might not work for your data. It is impossible to suggest anything without knowing the data.
With a DNN you need not select features manually, but in traditional ML you do have to select the features yourself. However, you need sufficient data and a proper model to build an effective DNN.
In contrast to some of the answers given above, I would definitely not venture to advise you to use this or that algorithm without having the slightest idea about (i) the available data (except that it is in .txt format, which is completely irrelevant to the choice of algorithm, as mentioned in my previous answer), and (ii) what you intend to do with it. So let me ask two sets of questions.
1) How small is your EEG dataset? I mean: how many subjects were recorded, how long are the recordings, at what sampling frequency were they performed, and how many channels were recorded?
2) What task do you intend to perform by machine learning? Computer-aided diagnosis? EEG-based brain-computer interface? Artefact rejection? Should the task be performed in real time?
A caveat: there is a huge literature on machine learning for EEG data, and a huge amount of hype around deep learning for EEG; if your dataset is small, you should forget about deep learning anyway.
1) In general, SVM works well for text data: https://medium.com/@bedigunjit/simple-guide-to-text-classification-nlp-using-svm-and-naive-bayes-with-python-421db3a72d34
2) Deep learning approaches can also perform very well on text data: http://openaccess.thecvf.com/content_iccv_2015/papers/Tian_Text_Flow_A_ICCV_2015_paper.pdf
Reena Popat stated clearly above that the data are EEG data. EEG (electroencephalography) data are electrical signals recorded from electrodes placed on the scalp of subjects; they reflect the electrical activity of groups of neurons. Therefore the data are numeric, not textual. It is perfectly legitimate to provide numeric data in txt format (most of the numeric data available through the Internet on the Covid epidemic, for instance, are provided in text format). It is not clear what problem Reena Popat wants to address, but it is absolutely clear that it is not one of text classification, natural language processing, or the like.
Text data can be processed using intelligent technologies, but this requires software systems that support it: http://lc.kubagro.ru/aidos/Works_on_ASK-analysis_of_texts.htm
Both kinds of algorithm can work on CSV data, but it depends on your data: if you have enough data, deep learning tends to outperform simple machine learning; otherwise, a simple method like an SVM or an SVM ensemble is a good choice.
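For the small-data case, the following is a minimal sketch of such an SVM baseline with cross-validation; the synthetic dataset is a placeholder standing in for whatever is actually in the CSV, and the RBF kernel is just a common default, not a tuned choice:

```python
# Sketch: a simple SVM baseline with 5-fold cross-validation,
# often a sound starting point when data are scarce.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic placeholder for the real (small) dataset.
X, y = make_classification(n_samples=100, n_features=10, random_state=0)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())
```

If this baseline is already competitive, the extra complexity of a deep network is hard to justify on a small dataset.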