Sampling time is very important because the processor (i.e. a microcontroller or DSP) must be able to execute the control algorithm and make a decision as fast as the real-time system's operating frequency requires. The sampling time sets the budget for executing the control algorithm on the processor: the processor executes the algorithm within the sampling time, then waits for the next sample (for example, a motor speed measurement) before executing the algorithm again and issuing an updated decision to the real-time system.
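As a quick sketch of the execute-then-wait pattern described above (all names here are hypothetical, and the proportional control law is just a placeholder):

```python
import time

Ts = 0.01  # assumed sampling period in seconds (100 Hz)

def control_step(measurement):
    # placeholder control law: simple proportional action
    return -0.5 * measurement

def run_loop(n_steps, read_sensor):
    """Execute the control algorithm once per sampling instant,
    then sleep until the next instant."""
    outputs = []
    next_deadline = time.monotonic()
    for _ in range(n_steps):
        y = read_sensor()       # sample (e.g. motor speed)
        u = control_step(y)     # execute the control algorithm
        outputs.append(u)
        next_deadline += Ts
        # wait for the next sampling instant
        sleep_for = next_deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
    return outputs
```

The key point is that the algorithm's execution time must fit inside `Ts`, otherwise the loop misses its deadlines.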
In my opinion, there is not much 'choice' in the sampling period. Any sampler has a definite set of possible sampling periods (depending on its clock). It is for the designer to choose an appropriate microcontroller that can cater for the desired sampling time.
As the researchers before me mentioned, too high a sampling rate leads to unnecessary computational overhead, and too low a sampling rate leads to loss of data fidelity. You need to choose a sampling time such that the chance of data loss is minimised while still leaving enough time for the microcontroller to process the data between two sampling instants.
If you have designed your controller in continuous time (the so-called emulation approach), you will need to select a sample rate of at least 70 times the bandwidth of the closed-loop system. Anything less will introduce phase lag, which degrades the stability margins (and the response). You can choose lower sample rates, but it is then wise to do a discrete-time design of the controller that directly accounts for the phase lag.
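The rule of thumb above is simple arithmetic; a minimal sketch (the function names are made up, and the default factor of 70 simply follows this answer's recommendation):

```python
def sample_rate_for_emulation(bandwidth_hz, factor=70.0):
    """Rule-of-thumb sample rate for a continuous-time (emulation)
    design: 'factor' times the closed-loop bandwidth."""
    return factor * bandwidth_hz

def sample_period_for_emulation(bandwidth_hz, factor=70.0):
    """Corresponding sampling period in seconds."""
    return 1.0 / sample_rate_for_emulation(bandwidth_hz, factor)
```

For example, a closed-loop bandwidth of 10 Hz gives a sample rate of 700 Hz, i.e. a period of roughly 1.4 ms.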
Finding the sampling rate is sometimes a big challenge in control when the system under control is nonlinear, and especially when it is nonsmooth. For a linear system, the Nyquist rate (or some oversampling of it) is an easy and workable choice; for a nonlinear system, however, you may run into trouble finding the maximum sufficient sampling rate.
Sampling too fast (small period) has the downside of not just overloading the processor(s), but also exacerbating the effects of jitter. Furthermore, if you run an RTOS, scheduling your control tasks so that periodic sampling and actuation are ensured becomes very difficult, especially if you have safety constraints. For single-processor devices, such as most MCUs, it is still manageable. However, as your software scales and you run more computationally intensive tasks (computer vision, classification, EKFs, MPC, etc.), so does the need for more powerful multiprocessor systems with shared memory and caches (RPis, SoCs). Scheduling on such systems is very difficult due to effects such as cache misses, memory latency, etc. Add to the mix a bunch of communication protocols, such as CAN bus, and you get an extra layer of delays and uncertainty in your control loop, making the job of scheduling to achieve small sampling periods infeasible. Increasing the sampling period helps in this regard, because it makes the job of scheduling easier; ideally one would estimate a tight WCET bound on the controller to determine the period. In practice, however, this bound is loose and you end up with a large period.
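The WCET-driven choice sketched above can be expressed as a one-liner (the function name and the 70% utilization budget are assumptions for illustration, not a standard):

```python
def min_feasible_period(wcet_s, utilization_budget=0.7):
    """Smallest sampling period (seconds) such that the control
    task's CPU share (WCET / period) stays within the budget the
    scheduler can guarantee."""
    return wcet_s / utilization_budget
```

For a (loose) WCET estimate of 2 ms and a 70% budget, the period must be at least about 2.86 ms; a looser WCET bound directly forces a larger period.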
Sampling too slow (large period), on the other hand, has the downside of degrading the controller's performance. Not only do you slow the controller's response and introduce extra dead-time delay, but you also risk destabilizing the loop.
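The destabilization risk can be seen even on a toy example: a stable first-order closed loop dx/dt = -a*x, discretized with forward Euler, is only stable for periods T < 2/a (the plant, gain, and discretization here are illustrative assumptions, not a general result for all systems):

```python
def simulate(a, T, x0=1.0, steps=50):
    """Forward-Euler discretization of dx/dt = -a*x:
    x[k+1] = (1 - a*T) * x[k]. Returns |x| after 'steps' samples."""
    x = x0
    for _ in range(steps):
        x = (1.0 - a * T) * x
    return abs(x)

# With a = 10, the stability limit is T = 0.2 s:
small_T = simulate(10.0, 0.05)  # |1 - 0.5| = 0.5 per step: decays
large_T = simulate(10.0, 0.25)  # |1 - 2.5| = 1.5 per step: diverges
```

The same continuous-time design that is perfectly stable at a small period blows up once the period crosses the limit.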
That's why selecting a sampling period that satisfies both the scheduling and control requirements is such an important and difficult task.