Dear Najla Matti, convolutional neural network (CNN) models for image classification were developed to classify individual, independent images: the model learns an internal representation of a two-dimensional input, in a process referred to as feature learning. They were not designed to process time series of images.
If you want to classify individual observations of a given signal, the same process and networks can be applied to one-dimensional sequences of data, as in the case of accelerometer and gyroscope data for human activity recognition (an example below). The model learns to extract features from sequences of observations and to map those internal features to different activity types.
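To make the idea concrete, here is a minimal NumPy sketch of the core operation such a network performs: a 1D convolution sliding along the time axis of a multi-channel window (e.g. tri-axial acceleration). The window size, kernel width, and random weights are illustrative assumptions; in practice the filters would be learned by a framework layer such as a Conv1D.

```python
import numpy as np

def conv1d(window, filters, bias):
    """Valid 1D convolution over a multi-channel sequence.
    window:  (timesteps, channels)
    filters: (n_filters, kernel_size, channels)
    bias:    (n_filters,)
    returns: (timesteps - kernel_size + 1, n_filters)
    """
    t, c = window.shape
    n_f, k, _ = filters.shape
    out = np.zeros((t - k + 1, n_f))
    for i in range(t - k + 1):
        patch = window[i:i + k, :]  # (kernel_size, channels) slice in time
        # Each filter spans all channels and produces one feature value here.
        out[i] = np.tensordot(filters, patch, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU activation

# Hypothetical window: 128 samples of x/y/z acceleration, 8 random kernels.
rng = np.random.default_rng(0)
window = rng.standard_normal((128, 3))
filters = rng.standard_normal((8, 5, 3)) * 0.1
bias = np.zeros(8)
features = conv1d(window, filters, bias)
print(features.shape)  # (124, 8)
```

The resulting feature map would normally be pooled and passed to dense layers that map it to activity classes; the point of the sketch is only that the filters slide over time while spanning all sensor channels at once.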
The benefit of using CNNs for sequence classification is that they can learn from the raw time series data directly, and therefore do not require domain expertise to manually engineer input features. The model can learn an internal representation of the time series data and ideally achieve performance comparable to models fit on a version of the dataset with engineered features.
If, however, you want to continuously observe and classify events in a time series, image-classification CNNs are not adequate. There are other models, such as LSTMs, that can do this.
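To illustrate why recurrent models suit continuous monitoring, here is a minimal NumPy sketch of a single LSTM cell stepped over a sequence: it carries state between timesteps and emits a hidden vector at every step, from which a per-step class score could be read. The weights are random placeholders, not a trained model; a real system would use a framework's LSTM layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, hidden):
    """Run one LSTM cell over a sequence, one timestep at a time.
    xs: (timesteps, input_dim); W: (4*hidden, input_dim)
    U:  (4*hidden, hidden);     b: (4*hidden,)
    Returns the hidden state at every timestep: (timesteps, hidden).
    """
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for x in xs:
        z = W @ x + U @ h + b            # all four gates in one product
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update the cell memory
        h = o * np.tanh(c)               # per-step hidden state
        outputs.append(h)
    return np.array(outputs)

# Hypothetical stream: 50 steps of a 3-channel signal, 16 hidden units.
rng = np.random.default_rng(1)
hidden, input_dim, steps = 16, 3, 50
xs = rng.standard_normal((steps, input_dim))
W = rng.standard_normal((4 * hidden, input_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
hs = lstm_forward(xs, W, U, b, hidden)
print(hs.shape)  # (50, 16)
```

Because the cell state `c` persists across steps, the model can flag an event at the timestep it occurs, rather than classifying a whole fixed window at once.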
Many thanks, Prof. Aldo Von Wangenheim, for your clarification. I am asking about brain signals, where we have simultaneous readings from several electrodes, for example 16 or 24 electrodes, etc. We might read them into a matrix, much as image data is read, and then perhaps these networks could be applied, or perhaps not.
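One common arrangement for such multi-electrode data (an assumption for illustration, not something established in this thread) is to stack the simultaneous readings as a matrix with time along one axis and electrodes along the other, then treat the electrodes as input channels of a 1D convolution over time rather than as a second spatial image axis. A minimal NumPy sketch with made-up data:

```python
import numpy as np

# Hypothetical EEG segment: 256 samples recorded from 16 electrodes at once.
rng = np.random.default_rng(2)
eeg = rng.standard_normal((256, 16))  # (timesteps, electrodes)

# One temporal kernel of width 7 spanning all 16 electrodes: it slides along
# the time axis only, so electrodes act as channels, not a spatial dimension.
kernel = rng.standard_normal((7, 16)) * 0.1
t, _ = eeg.shape
k = kernel.shape[0]
response = np.array(
    [np.sum(eeg[i:i + k] * kernel) for i in range(t - k + 1)]
)
print(response.shape)  # (250,)
```

Whether this channel treatment (versus a 2D time-by-electrode image) works better for a given EEG task is an empirical question; the sketch only shows how the matrix would be consumed in the 1D case.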