
In both training and evaluation, a long input sequence can be processed as a whole by a temporal convolutional network (TCN), instead of sequentially as in an RNN. Unlike in RNNs, where the prediction for a later timestep must wait for its predecessors to complete, the convolutions can be computed in parallel, since the same filter is applied at every position in a layer. An added benefit is that a TCN can change its receptive field size in multiple ways: stacking more dilated (causal) convolutional layers, using larger dilation factors, or increasing the filter size are all viable options (with possibly different interpretations). TCNs thus afford better control of the model's memory size and are easy to adapt to different domains.
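As a minimal sketch of the idea (not taken from the original post; the layer count, channel widths, and kernel size below are illustrative assumptions), here is a stack of dilated causal 1-D convolutions in PyTorch. The whole sequence is fed in at once and every timestep's output is computed in parallel:

import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left only, so output at time t sees inputs <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding preserves causality
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # pad the past, never the future
        return self.conv(x)

# Doubling the dilation at each layer grows the receptive field exponentially:
# receptive_field = 1 + (kernel_size - 1) * sum(dilations)
kernel_size, dilations = 3, [1, 2, 4, 8]
layers, in_ch = [], 1
for d in dilations:
    layers += [CausalConv1d(in_ch, 16, kernel_size, dilation=d), nn.ReLU()]
    in_ch = 16
tcn = nn.Sequential(*layers)

x = torch.randn(4, 1, 256)  # the whole 256-step sequence enters at once
y = tcn(x)                  # all timesteps are convolved in parallel
print(y.shape)              # torch.Size([4, 16, 256])
print(1 + (kernel_size - 1) * sum(dilations))  # receptive field = 31 timesteps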

Unlike recurrent architectures, a TCN has a backpropagation path that differs from the temporal direction of the sequence: gradients flow through the fixed layer stack rather than through every timestep, which avoids the exploding and vanishing gradients that plague RNNs on long sequences. A dynamic receptive field size provides additional flexibility.
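To make the flexibility concrete, a small helper (an illustrative assumption, not from the post) can compute the receptive field of a dilated causal stack and show the three knobs mentioned above, i.e. depth, dilation factors, and kernel size:

def receptive_field(kernel_size: int, dilations: list[int]) -> int:
    """Timesteps visible to one output: 1 + (k - 1) * sum of dilations."""
    return 1 + (kernel_size - 1) * sum(dilations)

print(receptive_field(3, [1, 2, 4, 8]))      # 31: baseline
print(receptive_field(3, [1, 2, 4, 8, 16]))  # 63: one extra layer
print(receptive_field(3, [1, 4, 16, 64]))    # 171: larger dilation factors
print(receptive_field(5, [1, 2, 4, 8]))      # 61: larger kernel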

Why are parallelism, flexibility, and stability important features when using machine learning to analyze sleep & vital signs?
