1. There are several ways to measure uncertainty. A simple measure is the standard deviation of the forecast errors, but there are more advanced methods that model uncertainty (volatility) jointly with your forecasting model. For example, you can combine a conditional-variance model with your mean model: ARIMA(p,q)-GARCH(m,n), ARIMA(p,q)-GJR(m,n), ARIMA(p,q)-EGARCH(m,n), and so on; many other GARCH-family models can be incorporated in the same way. ARIMA is used to model the mean equation, so if you forecast with some other technique you can use that in place of ARIMA. If seasonality is an issue, you need to incorporate it into the model through seasonal ARIMA or seasonal dummy variables. A short sketch follows.
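For illustration only, here is a minimal sketch of the ARIMA-GARCH idea using Python's `arch` package; the simulated series `y`, the AR(1) mean, and the (1,1) orders are placeholder assumptions, not recommendations for your data:

```python
# Minimal sketch: AR mean equation with GARCH-family variance equations.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
y = pd.Series(rng.normal(0, 1, 1000))  # placeholder data; use your own series

# AR(1) mean equation with a GARCH(1,1) variance equation
res = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())

# GJR-GARCH is obtained by adding an asymmetry term (o=1);
# EGARCH by switching the volatility specification.
gjr = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1).fit(disp="off")
egarch = arch_model(y, mean="AR", lags=1, vol="EGARCH", p=1, q=1).fit(disp="off")

# Point forecast of the mean plus the forecast conditional variance
fc = res.forecast(horizon=5)
print(fc.mean.iloc[-1])      # forecast from the mean equation
print(fc.variance.iloc[-1])  # forecast volatility, i.e. the uncertainty
```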
2. As far as I know, you cannot remove uncertainty; the only thing you can do is minimise it by improving your forecast accuracy. To this end, you can try various forecasting methods such as ARIMA, SARIMA, ARIMA with seasonal dummies, machine/deep-learning techniques, neural networks, etc. Then compare the forecast accuracy of the models and select the one that gives the most accurate forecasts with the least uncertainty. A comparison sketch is given below.
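As a minimal sketch of that kind of comparison on held-out data (the series `y`, the monthly frequency, the 24-month test window, and the model orders are illustrative assumptions):

```python
# Minimal sketch: compare candidate models on a holdout test set by MAE.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
y = pd.Series(rng.normal(0, 1, 240),
              index=pd.date_range("2000-01", periods=240, freq="MS"))

# Hold out the last 24 observations as a test set
train, test = y.iloc[:-24], y.iloc[-24:]

candidates = {
    "ARIMA(1,1,1)": ARIMA(train, order=(1, 1, 1)),
    "SARIMA(1,1,1)(1,0,1,12)": ARIMA(train, order=(1, 1, 1),
                                     seasonal_order=(1, 0, 1, 12)),
}

# Fit each model, forecast the test window, and compare out-of-sample accuracy
for name, model in candidates.items():
    fc = model.fit().forecast(steps=len(test))
    print(name, "MAE:", mean_absolute_error(test, fc))
```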
My primary interest is population statistics, where a 'prediction' from regression is not a forecast. However, I'm sure the similarities include the idea that you want neither to underfit nor to overfit. My guess is that there may be a tendency to overfit a model to the original data, and then, as time goes on, especially if conditions change (some changes being abrupt), the fit really no longer applies. If the model was overfit, and thus inappropriate, from the beginning, that can make the problem worse. So I suggest that you track your forecasts against the realised results over time, and if you are consistently biased in one direction for a while, adjust your model. You should also watch for those abrupt, large changes and investigate what might be happening. As for a large variance and overall fit, you can compare different models; but to avoid overfitting, check how well your forecasts perform on test data, i.e. real results that were not used in estimating your model coefficients.
You might consider the old concept of a "control chart."
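A minimal sketch of that idea applied to forecast errors, assuming a series of one-step errors and conventional 3-sigma Shewhart limits (the simulated `errors` series is a placeholder):

```python
# Minimal sketch: Shewhart-style control chart on one-step forecast errors.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
errors = pd.Series(rng.normal(0, 1, 100))  # forecast minus actual, per period

center = errors.mean()
sigma = errors.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Flag periods where the forecast error falls outside the control limits,
# which may signal an abrupt change in the underlying conditions
out_of_control = errors[(errors > ucl) | (errors < lcl)]
print(f"center={center:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
print(out_of_control)

# A persistent run of errors on one side of the center line suggests bias;
# a simple check is the longest run of same-signed deviations
signs = np.sign(errors - center)
longest_run = signs.groupby((signs != signs.shift()).cumsum()).size().max()
print("longest run on one side of the center line:", longest_run)
```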
James R Knaub Thank you. Yes, I was thinking of using k-fold cross-validation on my training and test data, but the data is huge. A single iteration takes hours to complete. It can really be tedious sometimes.