You could try running simulations on models from various types of optimization: linear, integer, non-linear, etc. The standard assumption is that the coefficients in the objective function are static. After an optimal solution is found, you check how far each coefficient can change without affecting that solution, but only with a "one-at-a-time" approach. That was reasonable in the 1940s through the 1960s, when computers were big, slow, and few. Now I can use parallel processing to raise a 10,000x10,000 matrix to the 100th power by brute force in a few minutes (6).
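A minimal sketch of that classic one-at-a-time ranging, assuming a made-up toy LP and a crude grid scan rather than the textbook simplex-based ranging formulas:

```python
# One-at-a-time objective ranging on a toy LP (illustrative data only).
import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize 3*x1 + 5*x2 subject to resource limits.
# linprog minimizes, so the objective is negated.
c = np.array([3.0, 5.0])
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])
bounds = [(0, None)] * len(c)

x_star = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x

# Perturb each coefficient alone and record roughly how far it can rise
# (on a coarse grid) before the optimal vertex moves.
for i in range(len(c)):
    for delta in np.linspace(0.1, 3.0, 30):
        c_new = c.copy()
        c_new[i] += delta
        res = linprog(-c_new, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not np.allclose(res.x, x_star, atol=1e-6):
            print(f"c[{i}] can rise by roughly {delta:.1f} before the optimum moves")
            break
    else:
        print(f"c[{i}] did not move the optimum within the scanned range")
```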
The new algorithm should look at what happens when all of these coefficients change together, by say +/-10%, or by their allowable changes, and see whether the original assumptions still hold. (They won't.) A rough sketch of that joint perturbation follows.
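A minimal sketch of perturbing every objective coefficient at once, drawn uniformly within +/-10%, and counting how often the original optimal vertex survives. The problem data are the same made-up toy LP as above:

```python
# Joint +/-10% perturbation of all objective coefficients (illustrative data only).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([3.0, 5.0])
A_ub = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])
bounds = [(0, None)] * len(c)

x_star = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x

trials, survived = 2000, 0
for _ in range(trials):
    # All coefficients move simultaneously, not one at a time.
    c_pert = c * rng.uniform(0.9, 1.1, size=c.shape)
    res = linprog(-c_pert, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if np.allclose(res.x, x_star, atol=1e-6):
        survived += 1

print(f"Original optimum survived {survived}/{trials} joint +/-10% perturbations")
```

Each trial is independent, so spreading the loop across cores is straightforward if you want millions of trials instead of thousands.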
There are also a large number of methods for determining where the "optimal" solution is. Again, most of these algorithms assume one-at-a-time changes. Using the same parallel software, you should be able to generate millions of combinations of variable levels within the given constraints. Most algorithms assume that you have only one optimal solution (not always true) and that you can reach it from any arbitrary starting point (again, not true). You might end up at a "local" optimum, not the real optimal solution.
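A minimal sketch of that multi-start idea, on an assumed bumpy two-dimensional test function (the function, bounds, and number of starts are all illustrative), showing that different starting points land on different local optima and that the starts can be farmed out in parallel:

```python
# Multi-start local optimization from many random starting points.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def objective(x):
    # A bumpy surface with several local minima (made up for illustration).
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

def solve_from(start):
    res = minimize(objective, start, method="L-BFGS-B",
                   bounds=[(-3, 3), (-3, 3)])
    return res.fun, tuple(np.round(res.x, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    starts = rng.uniform(-3, 3, size=(200, 2))   # many random starting points
    with Pool() as pool:
        results = pool.map(solve_from, list(starts))

    locations = {loc for _, loc in results}      # distinct local optima reached
    best_value = min(results)[0]
    print(f"{len(locations)} distinct local optima found; best value {best_value:.4f}")
```

If every start converged to the same point, the single-optimum assumption would be plausible; several distinct locations is direct evidence of local traps.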