Hello,

I am currently working on a project with a colleague using a Brain-Computer Music Interface (BCMI) to generate music from EEG signals. Affective states will be estimated from the EEG recordings and sent to a generative music algorithm. A few questions we have about the design:

1) We plan to use the Emotiv EPOC+ (14 channels). Does anyone know of a way to process raw EEG data in EEGLAB in real time, or must the EEG data be recorded with external data acquisition software and analyzed separately? (A rough real-time streaming sketch follows this list.)

2) To drive the generative music algorithm, is it necessary to train a classifier to model emotion? Could an SVM or random-forest classifier be used to classify emotion from the EEG signals, with its output then fed into the generative music algorithm, or is this step unnecessary? (A band-power + SVM sketch also follows this list.)

3) We plan to use the DEAP dataset for emotion calibration (a loading sketch follows the dataset link below). However, one point my colleague and I disagreed on was whether:

  • pre-recorded EEG data from the DEAP dataset can be sent directly into the generative music algorithm, OR
  • each participant's EEG must be recorded while they are instructed to view the stimuli from the DEAP database, to gauge their own affective brain states.
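
Regarding 1): one route we are considering, if real-time processing inside EEGLAB proves impractical, is to stream the EPOC+ data over Lab Streaming Layer (EmotivPRO can publish an LSL outlet, though I believe this depends on the license) and consume it in our own pipeline. Below is a minimal, untested sketch of the consumer side using pylsl; the stream type, window length, and the downstream affect-estimation step are placeholders, not a tested setup.

```python
# Minimal LSL consumer sketch. Assumes an "EEG"-typed LSL stream is already
# being published on the network (e.g. by EmotivPRO's LSL outlet); the stream
# type, window length and the affect-estimation step are placeholders.
import numpy as np
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'EEG', timeout=10.0)   # find an EEG stream
if not streams:
    raise RuntimeError("No EEG stream found on the network")

inlet = StreamInlet(streams[0])
fs = inlet.info().nominal_srate()        # EPOC+ typically reports 128 or 256 Hz
window_samples = int(2 * fs)             # 2-second analysis window

buffer = []
while True:
    sample, timestamp = inlet.pull_sample(timeout=1.0)
    if sample is None:
        continue
    buffer.append(sample)
    if len(buffer) >= window_samples:
        window = np.asarray(buffer[-window_samples:])    # (samples, channels)
        # ... band-power / affect estimation here, then send the resulting
        # valence/arousal estimate on to the generative music algorithm ...
        buffer = buffer[-window_samples:]
```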
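
Regarding 2): to frame the question, the kind of pipeline we have in mind would extract band-power features (e.g. theta/alpha/beta per channel) from short EEG windows and train an SVM on them, with the predicted valence/arousal class driving the generative parameters. A rough scikit-learn sketch is below; the names X and y, the band definitions, the 128 Hz sampling rate, and the classifier settings are all placeholders.

```python
# Band-power + SVM sketch. X is assumed to be an array of EEG windows shaped
# (n_windows, n_channels, n_samples) and y a vector of class labels (e.g.
# high/low valence); band definitions and classifier settings are placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}

def band_power_features(windows, fs):
    """Log mean power per (channel, band) for each window."""
    feats = []
    for w in windows:                                    # w: (n_channels, n_samples)
        freqs, psd = welch(w, fs=fs, nperseg=min(256, w.shape[-1]), axis=-1)
        row = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            row.extend(psd[:, mask].mean(axis=-1))       # one value per channel
        feats.append(row)
    return np.log(np.asarray(feats) + 1e-12)

def train_emotion_classifier(X, y, fs=128):
    features = band_power_features(X, fs)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
    scores = cross_val_score(clf, features, y, cv=5)
    print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    return clf.fit(features, y)          # fitted model for use in the online loop
```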

DEAP dataset: http://www.eecs.qmul.ac.uk/mmv/datasets/deap/readme.html
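
If we go the DEAP route for calibration: if I recall the format correctly, the preprocessed Python release of the dataset ships one pickled file per participant, each holding a 'data' array (40 trials x 40 channels x 8064 samples at 128 Hz) and a 'labels' array (40 x 4 ratings: valence, arousal, dominance, liking). A minimal loading sketch, assuming that format and file naming:

```python
# Sketch for loading one participant from DEAP's preprocessed Python release.
# File names like 's01.dat' and the array shapes are assumptions based on the
# dataset readme; encoding='latin1' is needed because the files were pickled
# under Python 2.
import pickle

def load_deap_participant(path):
    with open(path, 'rb') as f:
        d = pickle.load(f, encoding='latin1')
    data = d['data']         # (40 trials, 40 channels, 8064 samples @ 128 Hz)
    labels = d['labels']     # (40 trials, 4): valence, arousal, dominance, liking
    eeg = data[:, :32, :]    # first 32 channels are EEG, the rest peripheral
    # Binarize valence/arousal around the midpoint of the 1-9 rating scale
    valence_class = (labels[:, 0] > 5).astype(int)
    arousal_class = (labels[:, 1] > 5).astype(int)
    return eeg, valence_class, arousal_class

eeg, valence, arousal = load_deap_participant('s01.dat')
print(eeg.shape, valence.shape)          # (40, 32, 8064) (40,)
```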

The generative algorithm is inspired by Ehrlich et al. (2019) and is designed to generate sounds reflective of the user's affective state.

Ehrlich, S. K., Agres, K. R., Guan, C., & Cheng, G. (2019). A closed-loop, music-based brain-computer interface for emotion mediation. PLoS ONE.
