As with any general solver, the specifics are highly problem-dependent, so there's rarely an easy answer. Predictor-corrector methods are generally driven by a user-defined tolerance: the algorithm loops back and re-executes the step with an ever smaller step size until the desired tolerance is met. In other words, I would suggest designing your step size to shrink from large to small, with some criterion tested each time the step size is reduced. What criterion should you use? That depends on your problem, but common choices include the magnitude of the difference between successive estimates, the difference between the predicted and corrected values, and so on. Most numericists use the infinity norm to measure the distance in the solution space and compare it against a prescribed tolerance.
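To make that loop concrete, here is a minimal sketch of tolerance-driven step-size reduction for a single Heun-type predictor-corrector step. The function names, the predictor/corrector pair, and the halving factor are illustrative assumptions, not a prescription for your particular solver.

```python
import numpy as np

def adaptive_pc_step(f, t, y, h, tol=1e-6, h_min=1e-12):
    """Halve the step size until the infinity-norm gap between the
    predictor and corrector estimates falls below the tolerance."""
    while h > h_min:
        # Predictor: explicit Euler estimate
        y_pred = y + h * f(t, y)
        # Corrector: trapezoidal update using the predicted value
        y_corr = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))
        # Infinity norm of the predictor/corrector disagreement
        if np.linalg.norm(np.atleast_1d(y_corr - y_pred), np.inf) < tol:
            return t + h, y_corr, h
        h *= 0.5  # tolerance not met: loop back with a smaller step
    raise RuntimeError("step size underflow before tolerance was met")

# Example: dy/dt = -2*y, starting from a deliberately large step
t_next, y_next, h_used = adaptive_pc_step(lambda t, y: -2.0 * y,
                                          t=0.0, y=1.0, h=1.0)
print(t_next, y_next, h_used)
```

In practice you would also let the step grow again when the tolerance is met easily, but the shrink-until-accepted loop above is the core idea.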
Hope that is of some help... Below are a few references for Kalman filters and step sizes for the corrector.
After reading some of the papers in your field... you need to be aware of multiple maxima when running your optimization search. This can be tricky indeed: you need to know, a priori, roughly what value places you in the neighborhood of the maximum of interest, so that the algorithm converges to that particular maximum.
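Here is a small sketch of why the starting neighborhood matters: a purely local search (SciPy's bounded scalar minimizer applied to the negated objective) lands on whichever peak its search interval brackets. The two-peak function below is made up for illustration only; it is not a model of your shading curves.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def power_curve(x):
    # Two peaks: a smaller local maximum near x = 1 and the global one near x = 4
    return 0.6 * np.exp(-(x - 1.0) ** 2) + 1.0 * np.exp(-(x - 4.0) ** 2)

# Restrict the search to the neighborhood of each peak in turn
for lo, hi in [(0.0, 2.5), (2.5, 6.0)]:
    res = minimize_scalar(lambda x: -power_curve(x), bounds=(lo, hi), method="bounded")
    print(f"search in [{lo}, {hi}] -> maximum at x = {res.x:.3f}, value = {-res.fun:.3f}")
```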
Yes, I am aware of the multiple maxima that occur when the shading effect happens. I am taking this into consideration and am currently working on it, thanks to the guidance and links you gave me as well as some research I am doing on my own.