It is no longer surprising to find a wide gap between advances in academic research and practice in industry. This discussion explores that gap for one particular domain: time-series forecasting. The topic has seen a great many research advances in recent years, as researchers have recognized the promise that Deep Learning (DL) architectures hold for it. Indeed, as recent research gatherings make evident, researchers are racing to perfect DL architectures for time-series forecasting problems. Nevertheless, the average industry practitioner remains reliant on traditional statistical methods, for understandable reasons. Probably the biggest reason of all is the ease of interpretation (i.e. interpretability) that traditional methods offer, but many other reasons are valid as well: ease of training (see the sketch below), ease of deployment, robustness, and so on.

The question is: if we were to reinvent a machine learning solution solely for industrial applicability, considering current and future industry needs, what attributes should it possess? Interpretability, manipulability, robustness, self-maintainability, inferability, online updatability, something else?
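To make the "ease of training" point concrete, here is a minimal sketch of fitting a classical model; the library (statsmodels), the ARIMA order, and the toy series are my illustrative choices, not anything prescribed by this discussion.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy series: trend + seasonal cycle + noise (purely illustrative).
rng = np.random.default_rng(0)
t = np.arange(200)
y = 10 + 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.3, size=t.size)

# One line to specify, one to fit, one to forecast: no GPUs, no tuning loops.
result = ARIMA(y, order=(2, 1, 1)).fit()
print(result.summary())           # named coefficients with standard errors
print(result.forecast(steps=12))  # 12-step-ahead point forecast
```

The appeal for practitioners is visible in the output: the fitted AR and MA coefficients come with standard errors and conventional statistical meaning, which is exactly the kind of interpretability referred to above.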