There are optimization problems with an objective function and NO constraints - "unconstrained optimization".
There are problems that are defined only in terms of constraints and NO objective function - "feasibility problems".
And there are problems where there is an objective function AND a set of constraints.
All these are important subsets of what constitutes the world of mathematical optimization problems/models.
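To make the three classes concrete, here is a minimal sketch using SciPy; the objective, constraint coefficients and bounds are made-up toy values, not from any particular problem:

```python
from scipy.optimize import minimize, linprog

# 1) Unconstrained: minimize (x - 3)^2 + (y + 1)^2, no constraints at all.
res_u = minimize(lambda v: (v[0] - 3)**2 + (v[1] + 1)**2, x0=[0.0, 0.0])

# 2) Feasibility: constraints only, so use a zero objective.
#    Find x with x1 + x2 <= 4, x1 >= 1, x2 >= 1.
res_f = linprog(c=[0, 0], A_ub=[[1, 1]], b_ub=[4],
                bounds=[(1, None), (1, None)])

# 3) Objective AND constraints: minimize x1 + x2 over the same feasible set.
res_c = linprog(c=[1, 1], A_ub=[[1, 1]], b_ub=[4],
                bounds=[(1, None), (1, None)])

print(res_u.x)    # close to (3, -1)
print(res_f.x)    # any feasible point
print(res_c.fun)  # optimal value 2, attained at (1, 1)
```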
One can quite often add that, to be a "real" optimization problem, there ought to be a fair amount of freedom (otherwise you may have no freedom to choose your variable values), which means that there should probably be more variables than constraints - if I may be so vulgar and "non-mathematical". :-)
Typically an equality constraint removes a degree of freedom, so you would expect there to be at most as many such constraints as there are variables. An exception might be situations where the problem is automatically generated in a way that introduces redundancy in the constraints (as happens sometimes with linear programming problems).
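For linear equality constraints A x = b this degree-of-freedom count is just a rank computation: each independent row of A removes one degree of freedom, and redundant rows remove none. A small sketch with toy data:

```python
import numpy as np

# Three equality constraints on three variables, but the third row
# is the sum of the first two, so it is redundant.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
b = np.array([2.0, 3.0, 5.0])  # consistent: b[2] = b[0] + b[1]

rank = np.linalg.matrix_rank(A)
n_vars = A.shape[1]
print(rank)           # 2 independent constraints, not 3
print(n_vars - rank)  # 1 degree of freedom left to optimize over
```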
Inequalities are a different matter, and there may be considerably more such constraints than there are variables, e.g. in some linear programming problems.
If your constraints are linear, you can solve a simple linear program per constraint to test for and remove all redundant constraints. See, for example, the paper here:
As long as the constraints you wish to add are inequality constraints, there is no need to worry about the number of variables in your problem. (By way of example, just think of a linear program: as long as you are dealing with inequality constraints, you can have either more or fewer constraints than variables.)
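A toy illustration: the LP below has two variables and five inequality constraints (some of them not even active at the optimum), and it is perfectly well posed. Values are made up for the example:

```python
from scipy.optimize import linprog

# maximize x1 + x2  subject to
#   x1 >= 0, x2 >= 0, x1 + x2 <= 4, x1 + 2*x2 <= 6, 2*x1 + x2 <= 6
A_ub = [[-1, 0], [0, -1], [1, 1], [1, 2], [2, 1]]
b_ub = [0, 0, 4, 6, 6]
res = linprog(c=[-1, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(-res.fun)  # optimal value 4.0
```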
Indeed there is no reason why there should be any correlation of any nature between the number of constraints and the number of variables. For real-life optimization problems (e.g. optimization of processes, of energy systems, ..., i.e. not only of a numerical function in the mathematical context), the number of constraints can be quite large (physical limitations, operating constraints, ...) while the number of manipulated inputs (i.e. the "variables" or the degrees of freedom) can be lower (a lot being fixed by the "recipe", which you can see as a set of straightforward equality constraints fixing thus the values of several variables).
On the other hand, the number of constraints (and their nature) can have a big impact on the existence of a solution and/or on the capacity of a numerical solver to find the solution.
A first example was already discussed in other answers. If you have more equality constraints than variables, and none of these constraints is redundant with the others, then you won't find any solution, as the system of equality constraints is overdetermined and has an empty feasible set.
This is not exactly the same for inequality constraints. They are supposed to be easier to satisfy, but you can face situations where a few inequality constraints (possibly fewer than the number of variables) make the feasible domain shrink so much that there is no solution anymore.
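For instance, just two inequalities in two variables can already be contradictory. A minimal sketch with SciPy (the numbers are an arbitrary toy example):

```python
from scipy.optimize import linprog

# x1 + x2 <= 1  together with  x1 + x2 >= 3 (written as -(x1 + x2) <= -3)
# leaves no feasible point at all.
res = linprog(c=[1, 1], A_ub=[[1, 1], [-1, -1]], b_ub=[1, -3],
              bounds=[(None, None)] * 2)
print(res.status)  # 2, which scipy.optimize.linprog uses for "infeasible"
```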
There are also cases in which the feasible set is empty, and perhaps we even know this; the goal could then be to find a "least infeasible" vector of variable values, where "least infeasible" could be formulated using several possible objective functions. Consider, for example, that we have a system of linear inequalities "A x <= b" with no solution; one natural choice is to minimize the total (or the maximum) violation of the constraints.
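One common formulation of "least infeasible" is an elastic LP: add a nonnegative slack variable to each constraint and minimize the sum of the slacks. A sketch assuming SciPy, on a deliberately infeasible toy system:

```python
import numpy as np
from scipy.optimize import linprog

# Infeasible system in one variable: x <= 1 and x >= 3 (i.e. -x <= -3).
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -3.0])
m, n = A.shape

# Decision vector [x, s1, ..., sm]; minimize sum(s)
# subject to  A x - s <= b  and  s >= 0.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.hstack([A, -np.eye(m)])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
print(res.fun)  # minimal total violation: 2.0 (any x in [1, 3] achieves it)
```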
The number of inequality-type constraints need not be limited. But it is always worthwhile to remove the inactive ones (i.e. those obviously satisfied when the other constraints are satisfied) - this will make things easier and less computationally expensive. In this respect I side with Michael Patriksson and Joachim Arts. The real headache arises when some variables have to be discrete (integer, half-integer or similar).
There are a lot of good suggestions here. However, it is of great importance to have a careful starting point in order to avoid gratuitous complications.
When modelling a problem using mathematical programming, we associate variable sets with the natural decisions to be undertaken to obtain solutions. Afterwards we associate constraint sets with the natural limitations without which a solution is no longer possible. Up to this point it is not a matter of how many constraints for a given number of variables; the question is rather to be as natural as possible in order to fill the gap between the model and reality.
However, especially in discrete optimization, when the definition of the decision variables has not been thought out, the following unwanted situation can occur: modelling some natural constraints requires adding redundant decision variables. These redundant variables then have to be controlled by other constraints to keep the model coherent.
One aspect of the art of modelling consists, in fact, in defining the most suitable decision-variable sets so as to spare us such a situation.
In constrained optimization problems, you need to include the constraints that allow you to compute the decision variables, and hence the objective function, correctly.