This is a very broad question. Static optimization generally attempts to select values for decision variables (activity variables), subject to resource constraints (how much is available), so as to achieve an extremal (maximum or minimum) value of an objective function. In its most basic form, there can be strong analytical results that exploit, say, convexity. Optimization also typically leads us to questions of sensitivity analysis, i.e., how the optimum might change under small changes to the initial resources. In dynamic form, this may become an optimal control problem (adjusting the inputs over time). My sense of cybernetics is that it is more likely attempting to replicate or simulate the time-varying behavior of a complex system. If the control of that system is embedded in an objective function (like a "score" for how well the system is staying in control), I think there is a strong similarity between the two problems. To be honest, though, until I read your question I always tended to place these two tools in different categories. So thanks for making me think!
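To make that basic form concrete, here is a minimal sketch in Python using scipy.optimize.linprog; the two activities, their profit coefficients, and the resource limits are purely illustrative assumptions. Because the problem is linear, and hence convex, the solver is guaranteed a global optimum, and the dual values attached to the binding resource constraints answer exactly the sensitivity question mentioned above.

```python
# Minimal sketch of static optimization: choose activity levels to
# maximize profit subject to resource availability. All numbers are
# hypothetical.
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2; linprog minimizes, so negate the coefficients.
c = [-3.0, -5.0]

# Resource constraints (how much of each resource is available):
#   1*x1 + 0*x2 <= 4    (resource A)
#   0*x1 + 2*x2 <= 12   (resource B)
#   3*x1 + 2*x2 <= 18   (resource C)
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal decision variables:", res.x)
print("optimal objective value:", -res.fun)
```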
Can I put it this way: static optimization is more likely to focus on the results (the optimum and the corresponding optimal variables), while cybernetics pays more attention to keeping the whole control process optimal as time goes on. Here I have a question: can we regard the cybernetic problem as a dynamic optimization problem whose objective function changes over time?
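One way to picture a "dynamic optimization problem whose objective changes over time" is a loop that re-solves a small optimization at every step as the target drifts, in the spirit of receding-horizon control. The scalar dynamics and the drifting reference below are hypothetical; this is only a sketch of the idea.

```python
# Sketch: at each time step the target (and hence the objective)
# changes, and we re-optimize the control. Dynamics and numbers
# are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

x = 0.0                        # current system state
for t in range(5):
    target = np.sin(0.5 * t)   # reference that drifts over time
    # Objective at time t: squared error between next state and target.
    obj = lambda u: (0.8 * x + u - target) ** 2
    u_opt = minimize_scalar(obj).x
    x = 0.8 * x + u_opt        # apply the control; the state evolves
    print(f"t={t}: target={target:.3f}, state={x:.3f}")
```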
The most recent comment (Matthieu Vergne) comes back around to the initial concern I expressed, so I gave it a positive vote. Optimization (static or dynamic) deals with efficiency, while cybernetics, in my view, is more about understanding complex systems. But the question, as I already commented, did prompt a consideration of where the ideas may intersect. For example, variants of optimization can certainly fall within the realm of optimal control, and variants of systems modeling can reward an objective measure of staying in control (encoding that merit in an objective function).
Since this is very thought provoking, we should see whether this question can be viewed and commented on in a wider forum. Does the original question have a particular problem framework that we can comment on?
Thanks to Prof. Morton E. O'Kelly and Dr. Matthieu Vergne for the answers.
This question actually arises from numerical direct methods for optimal control. In such approaches, the control u(t) is usually discretized piecewise (for example, as piecewise-constant values over a grid of intervals), so the optimal control problem is transcribed into a nonlinear programming (NLP) problem. In that case, we can say the control problem turns into an optimization problem. On the other hand, numerical optimization problems, heuristic or not, are solved largely by means of the search operators that the corresponding algorithms use. This process can be pictured as the initial solution evolving toward the optimal solution under the control of the search operators. Hence, the optimization problem can in turn be regarded as an optimal control problem whose only objective is to minimize the terminal time.
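A minimal sketch of that transcription (direct single shooting) might look as follows; the double-integrator dynamics, the horizon, the cost weights, and the actuator limits are my own illustrative assumptions, not taken from any particular source.

```python
# Sketch of a direct method: discretize u(t) as piecewise-constant
# values u[0..N-1], simulate the dynamics, and hand the resulting
# finite-dimensional problem to an NLP solver.
import numpy as np
from scipy.optimize import minimize

N, dt = 20, 0.1                       # number of control intervals, step

def rollout(u):
    """Integrate x' = v, v' = u with explicit Euler."""
    x, v = 1.0, 0.0                   # initial state
    for uk in u:
        x, v = x + dt * v, v + dt * uk
    return x, v

def cost(u):
    x, v = rollout(u)
    # Terminal error (drive the state to the origin) plus control effort.
    return x**2 + v**2 + 1e-2 * dt * np.sum(np.square(u))

res = minimize(cost, np.zeros(N), method="SLSQP",
               bounds=[(-2.0, 2.0)] * N)   # actuator limits
print("piecewise-constant optimal control:", np.round(res.x, 3))
```

Notice that once the transcription is done, the NLP solver neither knows nor cares that the variables came from a control problem; it simply searches a finite-dimensional space.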
With the considerations above, I feel that optimization problems and optimal control problems are closely linked.
If this is true, then for more general control problems we always have some specifications for the system, which give rise to errors between the desired output and the actual output. If we regard these errors as objectives to be minimized, then generic control problems look quite like optimization problems.
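As a small sketch of that view, even a plain feedback controller can be tuned by minimizing the accumulated error between the desired and actual output. The first-order plant, the step reference, and the proportional control law below are hypothetical choices made only for illustration.

```python
# Sketch: treat the tracking error as the objective and pick the
# gain of a simple proportional controller by minimizing it.
from scipy.optimize import minimize_scalar

dt, T = 0.05, 200                 # step size, number of steps
ref = 1.0                         # desired output: a unit step

def tracking_error(kp):
    y, err_sq = 0.0, 0.0
    for _ in range(T):
        u = kp * (ref - y)        # proportional control law
        y = y + dt * (-y + u)     # first-order plant: y' = -y + u
        err_sq += dt * (ref - y) ** 2
    return err_sq

res = minimize_scalar(tracking_error, bounds=(0.0, 30.0), method="bounded")
print("gain minimizing the tracking error: kp =", round(res.x, 2))
```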