It depends on the behavior of the problem, not only its dimension. For example, if you have a differentiable, linear problem you may use exact methods such as the simplex algorithm. If you have a non-differentiable continuous problem, you may use a swarm-intelligence algorithm such as PSO. So it depends on the type of problem you want to solve.
I would like to know which objective you have, and whether there are particular constraints in the problem. At the moment the only thing we know is that there are 1000 parameters, but nothing about whether they are constrained in some way (are they free of sign, or non-negative?) or whether the objective function you wish to use has some particular property. With more info at hand I am sure we can suggest something. :-)
The difficulty of an optimisation problem depends mainly on:
Dimension of problem
Constraints on decision variables
Degree of multimodality (number of local optima)
I recommend using a well-established optimisation algorithm such as PSO. A very high number of function evaluations should be allocated to obtain a suitable solution.
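To make the PSO suggestion concrete, here is a minimal global-best PSO sketch in NumPy, run on a 10-dimensional sphere function as a toy stand-in for the real objective. The particle count, inertia weight, and acceleration coefficients are illustrative defaults, not tuned recommendations; for 1000 parameters you would need far more particles and evaluations.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO minimising f over a box (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)           # personal best values
    g = pbest[np.argmin(pbest_val)].copy()             # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy run on the sphere function (global minimum 0 at the origin)
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=10)
```

The budget point from the answer above shows up directly here: total cost is `n_particles * iters` function evaluations, which is why high-dimensional problems demand a very large evaluation allowance.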
First of all, we need more information about the problem you want to solve. What is the nature of the variables (discrete, continuous, binary, etc.)? What is the nature of your objective function (linear, non-linear, stochastic, etc.)? Are there any constraints that your variables are supposed to satisfy? If yes, what kind of constraints are they?
If you're dealing with linear programming, then I would advise the use of algorithms like the simplex. Take a look at the column generation method based on the Dantzig-Wolfe decomposition. It is especially tailored to linear problems with a very large number of variables.
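For the linear-programming case, a small worked example (assuming SciPy is available; `scipy.optimize.linprog` uses a simplex/interior-point style solver under the hood) looks like this. The LP itself is a toy made up for illustration:

```python
from scipy.optimize import linprog

# Maximise x0 + 2*x1  subject to  x0 + x1 <= 4,  x1 <= 3,  x >= 0.
# linprog minimises, so we negate the objective coefficients.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],   # x0 + x1 <= 4
        [0.0, 1.0]]   # x1 <= 3
b_ub = [4.0, 3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# res.x holds the optimal point, res.fun the (negated) optimal value
```

With 1000 variables you would simply pass longer `c`, `A_ub`, and `b_ub`; modern LP solvers handle that size routinely, which is exactly why identifying linearity first matters so much.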
If your problem has been proved to be "difficult" and is of large size, you should probably avoid using exact methods. As for which method would be better for the problem at hand, the answer is that no method is better than all others across all problems: this is the No Free Lunch theorem.
We would need more details about the specifics of your problem to give you better advice.
Please give as concrete a description of the problem as you can: the nature of the objective function, the nature and number of constraints, discrete or continuous variables, the relationship between the constraints and the objective function, etc.
If you are in dimension 1000 you will probably run into serious difficulties unless your optimization program is a linear one. In that case you will be fine applying any standard algorithm (simplex, interior point, etc.). But once you are outside linearity, even if your program is well behaved (i.e. differentiable), you may have problems.
In that case I recommend derivative-free optimization algorithms such as Nelder-Mead and Hooke-Jeeves, which are, by the way, implemented in R (see the dfoptim package, for instance). These are surprisingly efficient, and they can handle every kind of mathematical program: free or constrained. You only need to supply a seed, i.e. an initial value. Good luck!
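The answer above points to R's dfoptim package; for readers working in Python, `scipy.optimize.minimize` offers the same Nelder-Mead method. A sketch on the classic Rosenbrock function (note that Nelder-Mead itself scales poorly to anywhere near 1000 dimensions, so this is a low-dimensional illustration of the interface, not a recipe for the 1000-parameter case):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    """Rosenbrock function: global minimum 0 at (1, 1, ..., 1)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])   # the "seed" / initial value
res = minimize(rosen, x0, method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 10000})
# res.x approaches (1, 1, 1, 1, 1); res.fun approaches 0
```

The only problem-specific inputs are the objective callable and the initial point, which is exactly the "you only need a seed" appeal of derivative-free methods.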
If you have a nonlinear function with 1000 continuous parameters then, as others have said, you should not attempt to use a gradient-based method; instead you need a global optimization algorithm. This is a hard problem, though not an impossible one, and the good news is that many metaheuristics are available (in a variety of implementations, so you will likely find one that suits your needs).
As has been pointed out, there is no single "best" method, and their performance is highly problem-dependent. Methods like PSO, GA, and Differential Evolution may be appropriate for your problem.
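Of the methods just listed, Differential Evolution has a convenient off-the-shelf implementation in SciPy (assuming SciPy is available to you). A minimal sketch on a 10-dimensional sphere function, again a toy stand-in for the real 1000-parameter objective:

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return float(np.sum(x**2))

bounds = [(-5.0, 5.0)] * 10   # one (lower, upper) pair per decision variable
res = differential_evolution(sphere, bounds, seed=1, maxiter=300, tol=1e-8)
# res.x is the best point found, res.fun its objective value
```

Scaling to 1000 parameters is just `bounds = [(-5.0, 5.0)] * 1000`, but expect the evaluation budget (population size times generations) to grow accordingly.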
A suggestion: in our group we have developed a method that performs well compared with the aforementioned techniques; it is based on a metaheuristic called Scatter Search, combined with local searches that help accelerate convergence. The resulting method is called eSS (enhanced Scatter Search) and it is available in several implementations. A recent one is part of the MEIGO toolbox (http://www.iim.csic.es/~gingproc/meigo.html), which can be used from Matlab, R, and Python, so it is likely that one of them will be easy for you to use.
We have tested eSS on very large problems (>1000 parameters) and it outperforms other methods (although I don't intend to convince anyone that it is always the best method; there is no such thing).