Time delay is inherent in any control loop. If it is significant enough to matter, model it in the Laplace domain as e^(-τs), where τ is the delay in seconds. At the end of the calculations, take the inverse Laplace transform to return to the time domain and see how much delay is actually involved. Once the delay is known, you can insert a feed-forward path around that feedback loop to compensate for the delay it introduces.
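As a quick illustration, the first-order Padé approximation e^(-τs) ≈ (1 - τs/2)/(1 + τs/2) can be compared against the exact delay in the frequency domain. This is a minimal sketch, assuming a delay of τ = 0.5 s and using plain NumPy (no control toolbox); the numbers are only illustrative:

```python
import numpy as np

tau = 0.5                          # assumed delay in seconds
w = np.logspace(-1, 2, 200)        # frequency grid (rad/s)
s = 1j * w

exact = np.exp(-tau * s)                        # exact delay e^(-tau*s)
pade1 = (1 - tau * s / 2) / (1 + tau * s / 2)   # first-order Pade approximation

# Phase error (degrees) between the exact delay and its Pade approximation,
# evaluated only up to about 2/tau rad/s where the approximation is usable
mask = w < 2 / tau
phase_err = np.degrees(np.angle(exact[mask]) - np.angle(pade1[mask]))
print("max phase error below 2/tau rad/s: %.2f deg" % np.max(np.abs(phase_err)))
```

Both the exact delay and the approximation have unit magnitude, so only the phase differs; the error grows quickly above roughly 2/τ rad/s, which is why higher-order Padé terms are used for wider-band designs.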
When the delay is constant, the Padé approximation of e^(-Ts) is valid. But if the delay is time-varying, how can we model it? For example u(t - T(t)), where T(t) is the time-varying delay: how can we construct u(t) as a function of the delay T(t)? Is there a relationship like the Padé approximation for this case?
For analytical purposes (i.e. paper-and-pencil calculations) you will probably have to overbound the delay. Say you want to calculate the phase margin of the closed loop: what is the worst case? The largest possible (or probable) time delay.
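A delay e^(-Ts) has unit gain, so it leaves the gain crossover unchanged and simply subtracts ω_c·T from the phase; the worst-case phase margin therefore follows directly from the largest delay. Below is a minimal sketch for a hypothetical loop L(s) = e^(-Ts)/(s(s+1)); the plant and the delay bounds are assumptions for illustration only:

```python
import numpy as np

def phase_margin_deg(T_delay):
    """Phase margin (deg) of L(s) = e^(-T*s) / (s*(s+1)) (example plant)."""
    # Gain crossover of 1/(s(s+1)): solve wc^2*(wc^2+1) = 1 (the delay has unit gain)
    wc = np.sqrt((np.sqrt(5) - 1) / 2)
    # Phase at crossover: -90 deg (integrator) - atan(wc) (lag) - wc*T (delay)
    phase = -90.0 - np.degrees(np.arctan(wc)) - np.degrees(wc * T_delay)
    return 180.0 + phase

for T in (0.0, 0.5, 1.0):          # assumed bounds on the time-varying delay
    print(f"T = {T:.1f} s  ->  phase margin = {phase_margin_deg(T):.1f} deg")
```

Running this shows the margin dropping from about 52 deg with no delay to under 10 deg at T = 1 s, which is exactly the price paid for designing against the upper bound.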
I mean that the delay is time-varying; the frequency-domain treatment applies to a constant delay. Assuming the largest delay degrades the performance: for example, if we take 5 s as the upper bound when the actual delay is 1 s, the performance is poor. I need an algorithm to estimate the time delay online from the sensors or something of that kind.
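One common, purely data-driven way to get a rough estimate of the current delay is to cross-correlate the actuator signal with the sensor output over a window of recent samples and take the lag that maximizes the correlation. This is a minimal sketch, assuming uniformly sampled data; the signal names u, y and the sample time dt are placeholders, not taken from any particular setup:

```python
import numpy as np
from scipy.signal import correlate

def estimate_delay(u, y, dt):
    """Estimate the delay (seconds) of y relative to u by cross-correlation."""
    u = u - np.mean(u)
    y = y - np.mean(y)
    corr = correlate(y, u, mode="full")        # correlation over all lags
    lags = np.arange(-len(u) + 1, len(y))      # lag (in samples) for each entry
    best = lags[np.argmax(corr)]               # lag with maximum correlation
    return max(best, 0) * dt                   # keep it causal, convert to seconds

# Hypothetical example: y is u delayed by 1.0 s, sampled at 10 Hz, with noise
dt = 0.1
t = np.arange(0, 50, dt)
u = np.sin(0.5 * t) + 0.2 * np.random.randn(t.size)
y = np.interp(t - 1.0, t, u, left=0.0)         # u shifted right by 1.0 s
print("estimated delay: %.2f s" % estimate_delay(u, y, dt))
```

Re-running the estimate over a sliding window gives a slowly updating T(t) that can feed a gain-scheduled compensator instead of the fixed worst-case bound, provided the excitation is rich enough for the correlation peak to be distinct.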
Is the time delay in the system itself or in getting the measurements? The Padé approximation described in the answers above is for delays or dead time in the system, not in the measurements.
The delay may (or may not) be of first order. The magnitude of the error made by identifying the observed/measured variable (e.g. a chemical concentration affected by a first-order lag) with the corresponding actual (real) concentration can be estimated from eq. 2.58 of the following reference (p. 25); a small simulation of this lag error is also sketched after the reference:
Thesis Controlo do Oxigénio Dissolvido em Fermentadores para Minimi...
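The thesis equation itself is not reproduced here, but the size of that identification error can also be checked numerically by passing the "real" variable through a first-order lag and comparing the two signals. This is a minimal sketch, assuming a lag time constant of 30 s and a slowly ramping concentration; all names and values are illustrative and not taken from the thesis:

```python
import numpy as np

tau = 30.0                     # assumed first-order lag time constant (s)
dt = 1.0
t = np.arange(0, 600, dt)
x = 0.01 * t                   # "real" concentration, rising at 0.01 units/s

# First-order measurement lag: tau * dy/dt + y = x, integrated with forward Euler
y = np.zeros_like(x)
for k in range(1, len(t)):
    y[k] = y[k - 1] + dt / tau * (x[k - 1] - y[k - 1])

err = x - y                    # identification error: real minus measured
print("expected steady-state error ~ tau * slope = %.2f, simulated: %.2f"
      % (tau * 0.01, err[-1]))
```

For a ramping input the lag error settles at roughly tau times the slope, which gives a quick feel for how large the discrepancy between the measured and the actual variable can become.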