In binary classification of time series, what is the accepted margin for measuring the accuracy of classification or prediction that permits us to conclude that the time series is not random, or vice versa?

For example, if the accuracy of classification or prediction of an event is very close to chance, say 50/50 (e.g. 51% classified correctly and 49% incorrectly, or even 50.1% correct and 49.9% wrong), is the series random or not (i.e., does this time series consist of a succession of random steps, or not)? And how many times should we repeat such a test?
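To make the question concrete, here is a minimal illustrative sketch of one way the "margin" could be framed: as a binomial test of the observed accuracy against the 50% chance level. The sample size and accuracy below are hypothetical, chosen only to show what I mean.

```python
# Illustrative sketch only: is an observed accuracy distinguishable from
# the 50% chance level, given n independent test cases?
from scipy.stats import binomtest

n = 1000        # hypothetical number of test cases
correct = 510   # hypothetical number classified correctly (51%)

# Null hypothesis: the classifier is guessing (p = 0.5).
result = binomtest(correct, n, p=0.5, alternative='greater')
print(f"Observed accuracy: {correct / n:.3f}")
print(f"p-value vs. chance:  {result.pvalue:.4f}")
```

With these hypothetical numbers the p-value comes out well above 0.05, so 51% accuracy over 1000 cases would not, by itself, rule out a random series; the margin needed clearly depends on the number of test cases and repetitions, which is exactly what I am asking about.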
