The optimal solution of an optimization or optimal control problem need not lie in the interior of the feasible region in general.
Take the simple problem:
min f(x) with f(x) = -(x-1)^2 + 1 on the interval [-3, 4]. The global minimum is at x = 4.
In addition there is a local minimum at x = -3. The same situation can occur for both finite- and infinite-dimensional optimization problems.
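The endpoint values of this example are easy to check numerically; a quick grid search (plain Python, no libraries) shows that both minima sit on the boundary of the interval while the maximum sits in the interior:

```python
# f(x) = -(x-1)^2 + 1 on [-3, 4]: both minima lie on the boundary,
# the maximum lies at the interior point x = 1.
def f(x):
    return -(x - 1) ** 2 + 1

xs = [i / 100 - 3 for i in range(701)]  # grid over [-3, 4], step 0.01
x_min = min(xs, key=f)   # argmin over the grid
x_max = max(xs, key=f)   # argmax over the grid
print(x_min, f(x_min))   # -3.0 -15.0  -> global minimum, on the boundary
print(x_max, f(x_max))   # 1.0 1.0    -> maximum, in the interior
```

Evaluating the two endpoints, f(-3) = -15 and f(4) = -8, settles which boundary point is the global minimizer.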
If you have a time-optimal control problem in mind, then the optimal control may indeed lie on the boundary of the admissible set, for example if the control function
enters the objective and the state equations linearly. However, this is not true in general either.
Depending on the other side conditions, the optimal control may be singular
on some subintervals, i.e., it lies in the interior of the set of admissible controls there,
while other subarcs are bang-bang, i.e., they lie on the boundary of the set of admissible controls. If the control enters the objective and the side constraints nonlinearly, anything can happen.
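As a small illustration of the bang-bang case, consider the standard textbook double integrator (xddot = u, |u| <= u_max; the specific rest-to-rest transfer below is my own made-up instance, not from the discussion above). The time-optimal control saturates the bound with one switch, and running the actuator strictly inside the admissible set is always slower:

```python
import math

# Rest-to-rest transfer for the double integrator xddot = u, |u| <= umax:
# accelerate at +umax over the first half of the distance, then decelerate
# at -umax over the second half (the classic one-switch bang-bang solution).
def transfer_time(distance, umax):
    # each half covers distance/2 starting from rest: d/2 = umax * t^2 / 2
    return 2.0 * math.sqrt(distance / umax)

t_boundary = transfer_time(1.0, 1.0)  # control on the boundary of [-1, 1]
t_interior = transfer_time(1.0, 0.5)  # control held strictly inside the set
print(t_boundary, t_interior)         # 2.0 is faster than 2.828...
```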
Hence, this has nothing to do with time or convergence; it is simply where the optimal solution happens to lie.
@Pesch: the local/global minimum points of f that you mention are actually swapped.
@Kandaswamy: by definition, the optimal solutions of an optimization problem can occur at any point of the intersection between the domain of the cost function and the set defined by the constraints on its independent variables. In some cases the cost function is convex or concave; then, depending on the specific optimization criterion (maximum rather than minimum), the solution is found on the boundary of the feasible set.
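For instance, maximizing a convex function over an interval pushes the optimum to the boundary, while its minimum can be interior. A quick check with g(x) = x^2 on [-1, 2] (an example of my own choosing):

```python
# g(x) = x^2 over [-1, 2]: the maximum is attained at the boundary point
# x = 2, while the minimum x = 0 lies in the interior.
xs = [i / 100 - 1 for i in range(301)]      # grid over [-1, 2]
x_hi = max(xs, key=lambda x: x * x)         # where the convex g is largest
x_lo = min(xs, key=lambda x: x * x)         # where g is smallest
print(x_hi, x_lo)  # 2.0 0.0
```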
An extreme point, a vertex, and a basic feasible solution (BFS) are all equivalent (a very important theorem in optimization and linear programming). What is an extreme point? A point that cannot be written as a convex combination of other points of the set. What is a BFS? In an LP with n variables, it is a feasible point at which exactly n linearly independent constraints are tight. What is a vertex (in this context)? A point that is the unique maximizer/minimizer of some linear objective. All three definitions can be proved to coincide.
If you look at the boundary of a feasible region, following it is no different from moving along the edges of the convex region toward one of these points (with respect to the objective function, provided the problem is bounded and feasible). Where do optima occur? At the boundary. These points supply many handy properties that are exploited in many disciplines, especially in the design of algorithms that employ linear programming, since they usually lead to graph-theoretic results that are easier to implement.
To give a clear summary of points already raised: in linear and convex optimization, where all equations, inequalities, and functions are linear or convex and the admissible domains are polytopes or convex sets, the set of optimal points always contains corners of the boundary. It may contain entire facets of the boundary, but a corner point is guaranteed. That is why the simplex algorithm proceeds from corner to corner.
All other, nonlinear or nonconvex, optimization problems can have their optimal points anywhere.
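The linear case can be sketched in a few lines with `scipy.optimize.linprog` (assuming scipy is available; the toy LP data below is made up for illustration). The solver returns a corner of the polytope, where two constraints are simultaneously tight:

```python
from scipy.optimize import linprog

# Toy LP: maximize x + 2y  subject to  x + y <= 4, 0 <= x <= 3, 0 <= y <= 2.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[4], bounds=[(0, 3), (0, 2)])
print(res.x, res.fun)  # optimum at the corner (2, 2), objective value -6
```

At (2, 2) the constraints x + y <= 4 and y <= 2 are both active, which is exactly the corner structure the simplex method walks along.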
@Lutz Lehmann: For linear programming this is true. However, you need a concave objective function on a convex domain to conclude that an extreme point of the set of optimal solutions is also an extreme point of the feasible set. For convex objective functions this is not true in general, as min x^2 over [-1, 1] shows.
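The counterexample in this comment is easy to verify numerically: the convex objective attains its minimum at the interior point x = 0, not at an extreme point of [-1, 1].

```python
# min x^2 over [-1, 1]: the minimizer of the convex objective is the
# interior point x = 0, not a boundary point of the feasible set.
xs = [i / 100 - 1 for i in range(201)]       # grid over [-1, 1]
x_star = min(xs, key=lambda x: x * x)
print(x_star)  # 0.0
```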
What type of optimal solution are you looking for? Suppose you have a non-linear cost function and a non-linear revenue function. Then you can have a multiobjective optimization problem, which makes optimization a bit weirder.
It could be argued that most *practical* constrained optimization problems should have an optimal solution at the boundary of the feasible set (although not necessarily an extreme point), since the objective (minimizing cost, maximizing revenue, etcetera) will drive the "production" in some direction that would strive towards plus or minus infinity of the objective value unless there are boundaries of the feasible region to stop it. (The capacity of production provides one such natural boundary.)
The optimal solution to a given LP in standard form can lie in the interior of the feasible region; however, in that case the objective function has a constant value over the entire feasible domain.
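The reasoning behind this remark can be sketched directly (the points and cost vectors below are hypothetical): if x* is interior and the cost vector c is nonzero, a small step along -c stays feasible and strictly decreases the objective, so an interior optimum forces c = 0, i.e. a constant objective.

```python
# For min c.x over a polytope: at an interior point, a small step along -c
# remains feasible, so it strictly improves the objective unless c = 0.
def improves(c, x, eps=1e-3):
    step = [xi - eps * ci for xi, ci in zip(x, c)]         # move along -c
    return sum(ci * si for ci, si in zip(c, step)) < sum(ci * xi for ci, xi in zip(c, x))

print(improves([1.0, -2.0], [0.5, 0.5]))  # True: nonzero c, interior point is not optimal
print(improves([0.0, 0.0], [0.5, 0.5]))   # False: c = 0, nothing left to improve
```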
A clear explanation of why the optimal solution of a linear program lies among the boundary points of the feasible region is given in https://www.whitman.edu/Documents/Academics/Mathematics/lewis.pdf:
1.- [Definition 3.3, page 18] The feasible region of a linear program is a polyhedral set (a convex set).
2.- [Definition 3.4, page 18] An extreme point is a corner point of the feasible region.
3.- [Theorem 6.1, page 24] The collection of extreme points corresponds to the collection of basic feasible solutions of a system of linear equations.
4.- [Theorem 6.2, page 24] If an optimal solution exists, then an optimal extreme point exists.
The simplex method moves from extreme point to extreme point, looking for the optimal solution. Klee and Minty showed that the simplex method can exhibit exponential worst-case behavior (see the article "A Simplex-Cosine Method for Solving Hard Linear Problems"), but Dantzig and Thapa (in their Linear Programming book, page 31) argue that many thousands of practical problems can be solved in a short time.
Because humans are greedy and want the top of something or the bottom of another thing... like maximum profit leading to the highest boundary, or lowest cost leading to the lowest boundary.