I was wondering if anybody has experience using videos in E-Prime? Even better, does anybody have experience using video stimuli in E-Prime together with EEG/ERP?
Thanks for your reply, Diogo Branco! How easy would it be to have a participant view and hear videos, and to embed event markers within those videos, in E-Prime?
I would say it's not a very hard task; it depends on the experience of the person programming it. If they are familiar with E-Prime, my guess is the task would be ready in 1–2 hours, maybe much less (I'm already accounting for debugging and so on).
Some advice:
- Be careful: some video formats may be too heavy for E-Prime to handle, leading to software crashes or audio-video desynchronization. Also, try to compress your video files to a small file size while preserving video quality.
- When I've worked with E-Prime, I used a TCP connection to send the triggers. Recently, they implemented support for Lab Streaming Layer (https://github.com/PsychologySoftwareTools/eprime3-lsl-package-file). This could be preferable to raw TCP, because in the future you could add more sensors without modifying your experiment. With LabRecorder you can synchronize many sensors using a single source of markers. Here is a video from the EEGLAB team about Lab Streaming Layer: https://www.youtube.com/watch?v=tDDkrmv3ZKE
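To make the TCP-trigger idea concrete, here is a minimal sketch of the kind of trigger sender you would run alongside the experiment (this is outside E-Prime itself; the host, port, and marker codes are placeholders you would replace with your EEG system's values):

```python
import socket

def send_trigger(code, host="127.0.0.1", port=5000):
    """Send a one-byte event marker over TCP.

    host/port are placeholders for the machine/service that
    receives triggers for your EEG amplifier.
    """
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(bytes([code]))
```

In E-Prime you would call the equivalent of this at stimulus onset (e.g., when the video starts playing) so the marker lands in the EEG recording at the right time; the E-Basic side uses E-Prime's own socket objects, but the logic is the same.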
Which EEG system are you using?
If you have a direct question, feel free to message me.