09 September 2018

From my experience, the default iteration time of a while loop in microcontroller or PC programming is about 1 ms, whether the language is C, C++, Python, LabVIEW or Arduino. However, CPU clock frequencies can reach several GHz, so a loop iteration could in principle take microseconds or even nanoseconds. Is it possible to achieve a faster loop time in PC-based programming?
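To make the timing question concrete, here is a minimal Python sketch (the loop count and variable names are my own choice for illustration) that measures the average period of a bare while loop with no sleep call:

```python
import time

# Hypothetical measurement sketch: time N iterations of an empty-ish
# while loop and report the average per-iteration period.
N = 1_000_000
i = 0
start = time.perf_counter()
while i < N:
    i += 1  # minimal loop body
elapsed = time.perf_counter() - start
print(f"average iteration time: {elapsed / N * 1e9:.1f} ns")
```

In my experience a tight loop like this iterates much faster than 1 ms per pass, which suggests the 1 ms floor usually comes from sleep calls or OS timer resolution rather than from the loop itself.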

However, I have noticed that an FPGA can achieve a faster loop. For example, when I use an NI cRIO FPGA, I can easily get a 20 µs loop time. Even though the computation in each loop is complex, it still runs very fast, yet the clock frequency is only 40 MHz. I know the FPGA uses parallel processing, but some calculations must still be done step by step.

So I guess PC-based programming can somehow go faster too? Does anyone know how?
