Dear M. Venkata Subbarao, convolutional neural network models like GoogLeNet and AlexNet were developed for the classification of individual, independent images: the model learns an internal representation of a two-dimensional input in a process referred to as feature learning. They were not developed for processing time series of images.
If you want to classify individual observations of a given signal, this same process and these same networks can be harnessed on individual, independent one-dimensional sequences of data, as in the case of acceleration and gyroscope data for human activity recognition (an example is sketched below). The model learns to extract features from sequences of observations and to map those internal features to different activity types. The benefit of using CNNs for sequence classification is that they can learn from the raw time series data directly and therefore do not require domain expertise to manually engineer input features. The model can learn an internal representation of the time series data and ideally achieve performance comparable to models fit on a version of the dataset with engineered features. Have a look at the example with code below.
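Here is a minimal sketch of such a 1D CNN for sequence classification, assuming Keras/TensorFlow and UCI-HAR-style data (windows of 128 time steps, 9 sensor channels, 6 activity classes); the shapes, layer sizes, and training settings are illustrative assumptions, not a definitive implementation, and the random arrays simply stand in for real sensor windows:

# A minimal 1D CNN sketch for human activity recognition (assumed data layout).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

n_timesteps, n_features, n_classes = 128, 9, 6  # assumed window/channel/class sizes

# Placeholder data standing in for real accelerometer/gyroscope windows and one-hot labels.
X_train = np.random.rand(1000, n_timesteps, n_features)
y_train = np.eye(n_classes)[np.random.randint(0, n_classes, 1000)]

model = Sequential([
    # Feature learning: convolutions extract local patterns from the raw signals.
    Conv1D(filters=64, kernel_size=3, activation='relu',
           input_shape=(n_timesteps, n_features)),
    Conv1D(filters=64, kernel_size=3, activation='relu'),
    Dropout(0.5),
    MaxPooling1D(pool_size=2),
    Flatten(),
    # Classification: map the learned internal features to activity types.
    Dense(100, activation='relu'),
    Dense(n_classes, activation='softmax'),
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

Each input window is classified as a whole, which is exactly the "individual, independent sequence" setting described above.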
If, however, you want to continuously observe and classify events within a time series, image-classification CNNs are not adequate. Other models, such as LSTMs, can do this; see the sketch below.
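As a rough illustration of that continuous setting, here is a minimal sketch, again assuming Keras/TensorFlow, of an LSTM that emits a prediction for every time step (via return_sequences=True and a per-step classifier), so events can be labelled as the series unfolds; all shapes and the random placeholder data are assumptions for illustration only:

# A minimal LSTM sketch for per-time-step event classification (assumed shapes).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

n_timesteps, n_features, n_event_classes = 200, 9, 3  # assumed sizes

# Placeholder data: each time step carries its own one-hot event label.
X_train = np.random.rand(500, n_timesteps, n_features)
y_train = np.eye(n_event_classes)[np.random.randint(0, n_event_classes, (500, n_timesteps))]

model = Sequential([
    # return_sequences=True keeps one output per time step instead of one per window.
    LSTM(64, return_sequences=True, input_shape=(n_timesteps, n_features)),
    # A per-time-step classifier produces one event prediction for every observation.
    TimeDistributed(Dense(n_event_classes, activation='softmax')),
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)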