Is there any simple way to obtain the global optimum of an optimization problem with nonlinear conditions? (The Matlab toolbox is useful; however, it has some disadvantages.)
If you want to compute a provably global optimal solution to an optimization problem with nonlinear conditions, then it depends heavily on the type of nonlinearity: if you have only continuous variables (that is, no integrality conditions) and convex constraints, then any local optimum is already global. If you have integer variables and/or non-convex constraints, you have to use branch-and-bound methods. Here the original problem is split into a sequence of subproblems, each of which is continuous and convex (maybe even linear); a minimal sketch of this splitting is given below.
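To make the splitting concrete, here is a minimal branch-and-bound sketch on a toy problem (minimize a convex quadratic over integer points in a box). fmincon is from Matlab's Optimization Toolbox; the objective, bounds, and tolerance are all assumed for illustration only:

    % Toy problem: min (x1-2.5)^2 + (x2-1.7)^2 over integers in [0,4]^2.
    function [xbest, fbest] = bnb_demo()
        f = @(x) (x(1) - 2.5)^2 + (x(2) - 1.7)^2;   % convex objective (assumed)
        [xbest, fbest] = bnb(f, [0; 0], [4; 4], [], Inf);
    end

    function [xbest, fbest] = bnb(f, lb, ub, xbest, fbest)
        opts = optimoptions('fmincon', 'Display', 'off');
        % continuous convex relaxation of the current subproblem
        [x, fval, flag] = fmincon(f, (lb + ub)/2, [], [], [], [], lb, ub, [], opts);
        if flag < 0 || fval >= fbest        % infeasible, or pruned by bound
            return
        end
        [maxfrac, i] = max(abs(x - round(x)));
        if maxfrac < 1e-6                   % integral solution: new incumbent
            xbest = round(x); fbest = fval;
            return
        end
        % branch on variable i: x_i <= floor(x_i)  and  x_i >= ceil(x_i)
        ubL = ub; ubL(i) = floor(x(i));
        [xbest, fbest] = bnb(f, lb, ubL, xbest, fbest);
        lbR = lb; lbR(i) = ceil(x(i));
        [xbest, fbest] = bnb(f, lbR, ub, xbest, fbest);
    end

Because each relaxation is convex, the fval returned by fmincon is a true lower bound for its subtree, which is what justifies the pruning step.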
So much for the theory. Now you're asking for "simple ways" in terms of tools. If you have some money, you can buy a commercial license of the solver "Baron". A cheaper alternative is the solver "SCIP" from scip.zib.de, which is free for academic purposes and open-source. It comes with a built-in modeling language, "Zimpl", which is rather easy to use, so you can type in your problems in this language and solve them right away.
It depends greatly on whether your optimization problem is convex or non-convex. For the former, there is a huge theory and a variety of methods (gradient descent, Newton's method, etc.). If you have trouble coding an optimization routine with nonlinear constraints, you may, say, use a log-barrier to "plug" them into the objective (a small sketch of this follows); but in general, if you can prove convexity, go on with "usual" convex optimization. If the problem is non-convex, things are never as simple as you would like. As Armin pointed out, branch and bound is one possible methodology. Others include particle swarm optimization, Monte Carlo methods, etc. Pincus' theorem is also interesting, but I am not aware of its practical applicability.
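For instance, a minimal log-barrier sketch on an assumed toy problem (objective, constraint, and barrier schedule are all made up for illustration): minimize f(x) subject to g(x) <= 0 by minimizing f(x) - mu*log(-g(x)) for decreasing mu, starting from a strictly feasible point.

    f = @(x) (x(1) + 1)^2 + (x(2) - 2)^2;             % toy objective (assumed)
    g = @(x) x(1)^2 + x(2)^2 - 4;                     % toy constraint g(x) <= 0
    x = [0; 0];                                       % strictly feasible start
    for mu = [1 1e-1 1e-2 1e-3]
        % max(...,0) makes infeasible points cost +Inf via log(0) = -Inf
        barrier = @(z) f(z) - mu*log(max(-g(z), 0));
        x = fminsearch(barrier, x);                   % derivative-free inner solve
    end
    disp(x)   % approaches the constrained minimizer on the circle of radius 2

As mu shrinks, the barrier minimizers trace the central path toward the constrained optimum; since the unconstrained minimizer (-1, 2) lies outside the circle, the constraint is active at the solution.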
First, are you sure that there even is an optimal solution? If you are, then you probably know a few things about the problem that you can utilize. It is also quite important whether your problem has an explicit expression (rather than part of the objective stemming from some simulation), so that you can express things like derivatives - or at least know whether they exist.
But it also boils down to something that some people refer to as the "No Free Lunch" theorem - one that, in complicated words, states that there is no algorithm with any guarantees regarding solution time (or perhaps even space) UNLESS your problem has some nice features. If you have, for example, a locally Lipschitz continuous function to minimize, then you already know something about the worst-case behaviour of the function, and from this you can establish bounds on the complexity of solving it (a small sketch of such a Lipschitz-based search follows). If you do not, then there is currently no way to tell whether your problem can be solved in a predictable number of days ...
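As an illustration of the Lipschitz case, a sketch of a Piyavskii-Shubert type search in 1-D (the function, interval, and constant L are assumed for the example; L = 3.5 is a valid Lipschitz constant here since |f'(x)| <= 3.5 on the interval):

    f = @(x) sin(3*x) + 0.5*x;                         % toy function (assumed)
    a = -3; b = 3; L = 3.5;
    X = [a b]; F = [f(a) f(b)];
    for k = 1:100
        % sawtooth lower bound on each interval between adjacent samples
        lbv = (F(1:end-1) + F(2:end))/2 - L*diff(X)/2;
        [lbmin, i] = min(lbv);
        if min(F) - lbmin < 1e-4, break, end           % certified to tolerance
        xnew = (X(i) + X(i+1))/2 + (F(i) - F(i+1))/(2*L);
        X = sort([X, xnew]); F = arrayfun(f, X);       % re-evaluation kept simple
    end
    [fbest, j] = min(F); fprintf('x* ~ %.4f, f* ~ %.4f\n', X(j), fbest)

The gap min(F) - lbmin is exactly the kind of guarantee described above: once it drops below the tolerance, the best sample is certifiably near-optimal, and the Lipschitz constant is what made that certificate possible.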
I second Michael's and Armin's comments. There is no "best way". The first thing to do is to analyze your problem and see if there is any specific structure or property that can help you decide which method is best suited to what you are aiming at.
First question you need to answer: are you trying to solve a dynamic optimization problem or a static one? In the following, I'll assume that your problem is static, that is, it is either an NLP or a dynamic optimization problem converted into an NLP by a full parameterization method.
I guess that by "nonlinear" conditions you mean constraints. This means that you are trying to solve a constrained problem - this is the second question to be answered.
Third, like Michael said, do you have analytical expressions for your functions or not? If this is a full "black box" problem, then you may turn to derivative-free methods (see the sketch below).
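A hedged sketch of derivative-free search (the "black box" below is just a stand-in toy function): fminsearch (Nelder-Mead) ships with core Matlab and needs no gradients, while patternsearch, from the Global Optimization Toolbox, additionally accepts constraints with the same argument layout as fmincon.

    % pretend we can only evaluate this, not differentiate it (assumed toy)
    blackbox = @(x) (x(1) - 1)^2 + (x(2) + 2)^2 + 0.1*sin(5*x(1));
    x0 = [0; 0];
    x1 = fminsearch(blackbox, x0)                     % unconstrained, no gradients
    nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 9, []);     % c(x) <= 0, no equalities
    x2 = patternsearch(blackbox, x0, [], [], [], [], [], [], nonlcon)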
If everything is convex and continuous, then any local search method will typically lead you to the global minimum, since any local minimum is also global. In Matlab you can thus pick an appropriate toolbox (cvx from Boyd's team, or fmincon if you prefer a "native" Matlab toolbox).
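A minimal fmincon sketch on an assumed toy problem with a nonlinear inequality constraint; since both the objective and the constraint are convex here, the local solution fmincon returns is also global, as stated above:

    obj = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;            % toy convex objective
    nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, []);      % returns [c, ceq], c(x) <= 0
    x = fmincon(obj, [0; 0], [], [], [], [], [], [], nonlcon)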
... such questions are not only good for picking the most appropriate solution method; they are also part of the problem formulation and of the analysis of your problem, which are the two most important steps in solving an optimization problem.
Finally, you should be aware that in the field of global optimization there are basically two families of methods.
Stochastic methods (including metaheuristic, genetic, ... approaches) are easier to apply, perform quite well, and do not require your cost and constraint functions to have properties like being twice continuously differentiable (a short sketch with one such method follows the next paragraph). The drawback is that there is no guarantee that you converge to the global solution in finite time. Although there are many papers claiming that the authors' method (typically the same as the others, with a twist, ...) outperforms all the others, there is no best method for all cases; it is case-dependent. I have been asked to review such papers many times, and it still drives me crazy...
Deterministic methods converge to the solution in finite time. But for the moment they don't scale well with the number of degrees of freedom of your problem, and they generally require your functions to be C2 (twice continuously differentiable). As a matter of personal taste, I prefer deterministic methods because they are, in my humble opinion, mathematically sound. But again, this is a matter of taste and of the problem at hand.
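To illustrate the stochastic family, a sketch using Matlab's genetic algorithm ga (Global Optimization Toolbox) on an assumed nonconvex toy problem with a nonlinear constraint - no smoothness required, but also no finite-time guarantee:

    % Rastrigin-type nonconvex objective (toy, assumed for illustration)
    rastr = @(x) 20 + x(1)^2 + x(2)^2 - 10*(cos(2*pi*x(1)) + cos(2*pi*x(2)));
    nonlcon = @(x) deal(x(1) + x(2)^2 - 2, []);      % c(x) <= 0, no equalities
    x = ga(rastr, 2, [], [], [], [], [-5 -5], [5 5], nonlcon)

Running ga twice may give different answers - that is the price of dropping the differentiability requirements.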
I'll finish by also mentioning the "no free lunch" theorems, citing the authors of the paper "No Free Lunch Theorems for Optimization", IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, April 1997, pp. 67-82: "A number of 'no free lunch' (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class."
You can apply hybrid algorithms: a combination of gradient and search techniques. You can first apply a gradient technique, find a solution, and then apply any suitable search technique (a sketch of one such hybrid follows). To handle constraints, you can take help of fuzzy techniques.
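A sketch of one such hybrid on an assumed toy function, here in the common "search first, then refine with gradients" order (the ordering suggested above works analogously): particleswarm from the Global Optimization Toolbox locates a promising region, then the gradient-based fmincon polishes the result.

    fun = @(x) x(1)^2 + x(2)^2 + 10*sin(x(1))*sin(x(2));  % toy nonconvex function
    xg = particleswarm(fun, 2, [-10 -10], [10 10]);       % stochastic global phase
    xh = fmincon(fun, xg(:), [], [], [], [], [], [], [])  % local gradient phase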
It's rather funny - and also frustrating - to read all the cock-sure posts about which solver to use, when almost nothing has been said about the actual problem that Hassan wants to solve!
Hassan's next post ought to be as detailed a description of the problem at hand as humanly possible. We need info! Only then can we help!
Armin's post is not precise: convexity is needed in order to be able to say that any local minimum is a global one - and we do not know whether Hassan's problem is convex.
Actually, the answer depends on the type of problem you have. You could develop a function in Matlab based on evolutionary algorithms and fuzzy logic; that might suit your needs. Anyway, try to state your problem a little more clearly.