In general, xnew = x1 + F*(x2 - x3), where x2 and x3 are randomly selected. If x2 and x3 are ordered so that the difference vector points down the slope (for a minimization problem), convergence can be faster, but it also risks premature convergence, which can hurt overall performance. A quick sketch of what I mean follows.
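For concreteness, here is a minimal Python sketch of the variant I'm describing. The function name, signature, and the fitness-based swap rule are just my own illustration of the idea, not taken from any particular paper:

    import numpy as np

    def de_mutation(pop, fitness, i, F=0.5, directional=False, rng=None):
        """DE/rand/1 mutation for target index i.

        With directional=True, the two difference-vector donors are
        ordered so that (x2 - x3) points from the worse individual
        toward the better one -- a "downhill" step for minimization,
        which is the variant asked about above.
        """
        rng = np.random.default_rng() if rng is None else rng
        n = len(pop)
        # pick three distinct donors, all different from the target i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        if directional and fitness[r2] > fitness[r3]:
            # swap so fitness[r2] <= fitness[r3]; (x2 - x3) then points downhill
            r2, r3 = r3, r2
        return pop[r1] + F * (pop[r2] - pop[r3])

    # e.g., on a toy sphere function:
    pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 3))
    fit = (pop ** 2).sum(axis=1)
    v = de_mutation(pop, fit, i=0, directional=True)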
That's a summary of the general discussion I've heard, but I can't find specific references to anyone who has actually studied this. Can anyone point me to work on "directional" difference vectors, in both single-objective and multi-objective problem domains?
Thanks!