Most gradient-based optimization techniques carry out this job automatically. The idea behind gradient techniques is to adjust the variables according to the rate of change of the error function with respect to each variable, and estimating that rate of change is simple: evaluate the error function at the current value of the variable, perturb the variable slightly, and evaluate the error function again. The difference between the two error-function values divided by the difference between the two variable values is the required gradient (strictly, a finite-difference estimate of it).
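A minimal sketch of that perturb-and-divide idea in Python (the function numerical_gradient, the step size h, and the example function are placeholders introduced here for illustration, not anything from the thread):

```python
# Minimal sketch of a forward-difference gradient estimate.
# The names and the step size h are illustrative choices.

def numerical_gradient(f, x, h=1e-6):
    """Estimate the gradient of f at the point x (a list of floats)
    by perturbing one variable at a time."""
    f0 = f(x)                     # error function at the current point
    grad = []
    for i in range(len(x)):
        x_pert = list(x)          # copy so the other variables stay fixed
        x_pert[i] += h            # perturb only the i-th variable
        grad.append((f(x_pert) - f0) / h)  # (change in f) / (change in x_i)
    return grad

if __name__ == "__main__":
    # Example: f(x1, x2) = x1^2 + 3*x1*x2, whose exact gradient at (1, 2) is (8, 3)
    f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
    print(numerical_gradient(f, [1.0, 2.0]))   # roughly [8.0, 3.0]
```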
Is your function differentiable? If you have f(x1, x2, ...), you can compute ∂f/∂x1, ∂f/∂x2, ..., the partial derivatives of f with respect to x1, x2, .... The gradient is then the vector made up of these partial derivatives.
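As a concrete instance of that definition (the particular function below is just an illustrative choice, not from the thread):

```latex
f(x_1, x_2) = x_1^2 + 3 x_1 x_2, \qquad
\nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \right)
         = \left( 2 x_1 + 3 x_2, \; 3 x_1 \right).
```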
The previous answer is OK, but with the numerical approach you would have to do the perturbation for each of x1, x2, ... separately and in turn (as in the sketch above, which loops over one variable at a time).
You can also use automatic differentiation if you have access to the source code of the objective function. The derivative calculations are then exact and much less sensitive to noise in your function than finite-difference estimates.
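A short sketch of what that can look like, assuming the JAX library as the automatic-differentiation tool (the original answer does not name a specific one) and reusing the illustrative function from above:

```python
# Hedged sketch: JAX is one possible automatic-differentiation library;
# the objective function here is the same placeholder as in the earlier example.
import jax
import jax.numpy as jnp

def objective(x):
    # f(x1, x2) = x1^2 + 3*x1*x2
    return x[0] ** 2 + 3.0 * x[0] * x[1]

grad_objective = jax.grad(objective)            # builds the exact gradient function

print(grad_objective(jnp.array([1.0, 2.0])))    # [8. 3.], exact up to float precision
```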