Let's say J(x,y) is a continuous function on the closed intervals x ∈ [x_min, x_max] and y ∈ [y_min, y_max]. How can I show that min J(x,y) is independent of y?
A method such as gradient descent might help, since it uses the differential of the function. I'm not sure, but if the update for that parameter is small according to its formula, that suggests the minimum value is independent of that parameter.
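For what it's worth, here is a rough numerical sketch of that heuristic (not a proof, and the J below is a made-up example, not the poster's actual cost): run plain gradient descent from several starting points and see whether the converged x is the same no matter where y started.

```python
# Toy illustration: J(x, y) = (x - 1)^2 * (1 + y^2) has its minimizing x at 1
# for every y.  Run gradient descent from a few starts and compare the results.
import numpy as np

def J(x, y):
    return (x - 1.0) ** 2 * (1.0 + y ** 2)

def grad_J(x, y):
    dx = 2.0 * (x - 1.0) * (1.0 + y ** 2)
    dy = 2.0 * y * (x - 1.0) ** 2
    return np.array([dx, dy])

for x0, y0 in [(-2.0, -1.5), (0.5, 0.0), (3.0, 2.0)]:
    p = np.array([x0, y0])
    for _ in range(5000):
        p -= 0.01 * grad_J(*p)        # fixed step size, purely illustrative
    print(f"start=({x0}, {y0})  ->  x*={p[0]:.4f}, y*={p[1]:.4f}, J={J(*p):.2e}")
# In this toy example the converged x* is always ~1, regardless of y.
```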
There might be some confusion of the terms "parameters" and "variables" here, but as Andrey above said, simply minimize J( ) only over the interval [xmin, xmax] by treating the variable y as a "parameter". If your solution (whether a singleton or a larger set) is independent of the value of y (i.e. "y" does not appear in your expression for the optimal x* = ...), then you have the desired result. It may also be that you cannot obtain x* analytically, for example if the FONC cannot be solved in closed form. Still, if "y" does not appear in the equation (or can be eliminated by some smart algebraic manipulation), you again have the desired result. Now all of this argument is limited by the actual properties of J (is it differentiable, convex, quasi-convex?).
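As a small illustration of this "treat y as a parameter" route (with an invented J, assuming SymPy is available): solve the first-order condition dJ/dx = 0 for x and inspect whether y appears in the solution.

```python
# Minimal SymPy sketch: FONC in x with y held fixed, using a made-up cost.
import sympy as sp

x, y = sp.symbols("x y", real=True)
J = (x - 1) ** 2 * (1 + y ** 2)           # hypothetical cost function

x_star = sp.solve(sp.diff(J, x), x)       # solve dJ/dx = 0 for x
print(x_star)                             # [1]  -> y does not appear
print(any(y in sol.free_symbols for sol in x_star))   # False
```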
I think I need to make some clarifications about the problem.
Both x and y are free variables. I plotted what the cost function looks like for different values of x and y. It appears the minimum is independent of y: no matter what the value of y is, the minimum occurs at the same x value.
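For reference, that kind of visual check can be done with a few lines (illustrative only, the J below is a placeholder): plot J(x, y) against x for several fixed y values and see whether the minimizing x moves.

```python
# Plot one slice of the cost per fixed y value.
import numpy as np
import matplotlib.pyplot as plt

def J(x, y):
    return (x - 1.0) ** 2 * (1.0 + y ** 2)   # placeholder cost function

xs = np.linspace(-2.0, 4.0, 400)
for y in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    plt.plot(xs, J(xs, y), label=f"y = {y}")
plt.xlabel("x")
plt.ylabel("J(x, y)")
plt.legend()
plt.show()
```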
A (perhaps naive) simplification of the problem leads us to requiring that the cost function be constant (equal to the minimum value of J) along the line x = x*, where x* is a point for which it is assumed there exists y* such that min J = J(x*, y*). From this, a system of equalities should give you a way to check your thesis. Assuming J is differentiable, it could go as follows: take x* such that
∇J(x*, y*) = 0
for some y* so that (x*,y*) is a minimizer
Then check
∂J(x*, y)/∂y = 0 for any y (not only at y = y*)
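To make this concrete, here is a small SymPy sketch of the two checks; the J below is hypothetical, chosen so that the property holds, and the assumed minimizer (x*, y*) is plugged in by hand.

```python
# Step 1: gradient vanishes at (x*, y*).  Step 2: dJ/dy is identically zero
# along the whole line x = x*, not just at y = y*.
import sympy as sp

x, y = sp.symbols("x y", real=True)
J = (x - 1) ** 2 * (1 + y ** 2)           # hypothetical cost function
x_star, y_star = 1, 0                     # assumed minimizer

grad = [sp.diff(J, v) for v in (x, y)]
print([g.subs({x: x_star, y: y_star}) for g in grad])     # [0, 0]

dJdy_on_line = sp.diff(J, y).subs(x, x_star)
print(sp.simplify(dJdy_on_line))                           # 0 for every y
```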
Well, if I understand your question correctly: if the minimum of the function J(x,y) is independent of y, then ∂J(x,y)/∂y = g(x) or 0 on [y_min, y_max]. If you have an expression for J(x,y), then you can show that this is the case just by taking the partial derivative with respect to y. This holds even if you have constraints (e.g. equality or inequality constraints, linear or nonlinear). In the constrained case, however, you only need to show that J is independent of y within the feasible region, i.e. if ∂J(x,y)/∂y = h(x,y) outside the feasible region and ∂J(x,y)/∂y = g(x) within the feasible region, then your minimum is again independent of y.
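A minimal illustration of carrying out this derivative test (with an invented J that happens not to depend on y at all, so the derivative comes out identically zero, i.e. the "0" branch above):

```python
# Compute dJ/dy symbolically and check whether y survives in the expression.
import sympy as sp

x, y = sp.symbols("x y", real=True)
J = sp.exp(x) * sp.cos(x) + 3 * x          # toy J with no y dependence
dJdy = sp.diff(J, y)
print(dJdy)                                # 0
print(y in dJdy.free_symbols)              # False -> dJ/dy is g(x) or 0
```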
If you don't have an expression for J, i.e. J(x,y) is some black box, for example implemented in compiled code, then you'll have to explore the neighborhood of the minimum and see whether there is any variation when you change y. Alas, this way you get not a proof but only an indication.
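A sketch of how that exploration might look numerically (the J below is only a stand-in for the black box, and SciPy's Nelder-Mead is just one convenient derivative-free choice):

```python
# Find one minimizer (x*, y*), then perturb y around it with x fixed at x*
# and watch whether J moves.
import numpy as np
from scipy.optimize import minimize

def J_blackbox(p):
    x, y = p
    return (x - 1.0) ** 2 * (1.0 + y ** 2)   # pretend we cannot see this

res = minimize(J_blackbox, x0=[0.0, 0.5], method="Nelder-Mead")
x_star, y_star = res.x
print("found minimum:", res.x, res.fun)

for dy in np.linspace(-1.0, 1.0, 11):
    print(f"y* + {dy:+.2f}:  J = {J_blackbox([x_star, y_star + dy]):.3e}")
# If the values stay numerically flat, that is an indication (not a proof)
# that the minimum is independent of y.
```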
P.S. In general you have to show what Andrey Krasovskii mentions.
Andrey is right. This obviously translates to the function
G(y) = min_{x \in [xmin,xmax]}J(x,y) being constant in [ymin,ymax].
The above holds if and only if your desired condition holds.
This is NOT equivalent to saying that J(x,y) is constant in y in [ymin,ymax] (even though of course, the latter implies the former).
For each z in [ymin, ymax], let x*(z) denote the x-value minimizing the function J(x,y) when y = z. If x*(y) lies in the open interval (xmin, xmax), this value satisfies J_x(x*(y), y) = 0 (J_x denoting the partial derivative of J with respect to x, assuming not only continuity but also differentiability of your function, which may or may not hold); otherwise, J_x will simply be positive or negative depending on which end of the interval your optimum lies at.
Now, you need to show that G(y) = J(x*(y),y) = constant for all y in [ymin,ymax].
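A short numerical sketch of this final check (again with an invented J): evaluate x*(y) and G(y) = min_x J(x,y) on a grid of y values and verify that G is (numerically) constant over [ymin, ymax].

```python
# Tabulate x*(y) and G(y) over a y-grid and report how much they vary.
import numpy as np
from scipy.optimize import minimize_scalar

def J(x, y):
    return (x - 1.0) ** 2 * (1.0 + y ** 2)   # invented cost function

xmin, xmax = -5.0, 5.0
ymin, ymax = -2.0, 2.0

ys = np.linspace(ymin, ymax, 41)
G = np.empty_like(ys)
x_star = np.empty_like(ys)
for i, yi in enumerate(ys):
    res = minimize_scalar(J, bounds=(xmin, xmax), args=(yi,), method="bounded")
    x_star[i], G[i] = res.x, res.fun

print("max variation of G(y): ", G.max() - G.min())        # ~0 here
print("max variation of x*(y):", x_star.max() - x_star.min())
```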