Suppose the state-evolution ODE, ẋ = f(x, u), is known to satisfy f(0, 0) = 0, i.e., the origin with zero input is an equilibrium. Under a feedback law u = u(x), the origin being a closed-loop equilibrium means f(0, u(0)) = 0. Can we then conclude that, for any control system, the feedback always satisfies u(0) = u(x = 0) = 0, and hence that u(x) → 0 as x → 0?

I think in classical linear/nonlinear control systems this is trivial (e.g., a linear state-feedback law u(x) = -Kx gives u(0) = -K·0 = 0). What do you think?

My focus, however, is on optimal control. Does the same assumption also hold for optimal control problems (regulation, trajectory optimization, and so on)? In other words, is it valid to assume u(x = 0) = u(0) = 0 in a typical optimal control problem?
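To make the regulation case concrete, here is a minimal sketch of the infinite-horizon LQR case (the plant matrices A, B and weights Q, R below are just illustrative numbers I picked, not from a specific application): the optimal law is linear state feedback u*(x) = -Kx, which vanishes at x = 0 by construction.

import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant and quadratic weights (assumed, not from the question)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight

# Solve the continuous-time algebraic Riccati equation and form the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P

# The optimal regulator is u*(x) = -K x, so the optimal input at the origin is zero
print(-K @ np.zeros(2))  # -> [0.]

My question is whether this property (the optimal input vanishing at the equilibrium state) can be assumed beyond this linear-quadratic setting, in general optimal control problems.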
