An operating system (OS) in its simplest form is called a scheduler. It has a series of small 'tasks' that must be run. Since it can only run one task at a time, it gives each task a small amount of time to perform its function and then moves on to the next one, running each task sequentially.
Tasks are generally short, sequential pieces of code without large decision loops or delays. Each one performs a small part of a much larger job every time it runs.
At each OS scheduling 'tick', one of the tasks is executed. If the task finishes before the next tick (and it must), the OS sits idle and waits for the next tick before running the next task in the sequence.
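A minimal sketch of that kind of loop in C might look like the following; the task names are made up, and the tick is assumed to come from a timer interrupt that sets a flag:

```c
#include <stdint.h>

static volatile uint8_t tick_flag;            /* set by a timer ISR once per tick */

/* Illustrative tasks: each does one small piece of work and returns quickly. */
static void task_read_sensors(void)   { /* ... */ }
static void task_update_control(void) { /* ... */ }
static void task_drive_outputs(void)  { /* ... */ }

static void (*const tasks[])(void) = {
    task_read_sensors,
    task_update_control,
    task_drive_outputs,
};
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

static void wait_for_tick(void)
{
    while (!tick_flag) { /* sit idle (or sleep) until the timer fires */ }
    tick_flag = 0;
}

int main(void)
{
    uint8_t next = 0;
    for (;;) {
        wait_for_tick();                      /* wait for the scheduling tick  */
        tasks[next]();                        /* run exactly one task per tick */
        next = (uint8_t)((next + 1) % NUM_TASKS);
    }
}
```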
If the task takes longer to run than the time allowed, then the next (and subsequent) tasks are delayed, or the over-running task is terminated prematurely (depending on the OS). Either way, the software doesn't run as expected and its real-time behaviour is compromised: the computer can no longer react in real time to real-world events.
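If you want to at least detect the over-run yourself, one common pattern is to check, as each task returns, whether the next tick has already fired. The interrupt handler name and the counter below are purely illustrative:

```c
#include <stdint.h>

static volatile uint8_t tick_flag;   /* set by the timer interrupt once per tick */
static uint32_t overrun_count;       /* number of tasks that ran past their slot */

/* Illustrative timer interrupt handler: fires once per scheduling tick. */
void timer_isr(void)
{
    tick_flag = 1;
}

/* Call this immediately after each task returns.  If the tick has already
 * fired again, the task used more than its allotted time. */
void check_for_overrun(void)
{
    if (tick_flag) {
        overrun_count++;             /* or log, assert, or reset, as the design dictates */
    }
}
```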
Minimising the tick period (making it only as long as the longest-running task) will maximise the efficiency of the software and minimise the idle time of the computer.
Making the tick time too small (shorter than the execution time of some of the tasks) will result in unexpected or unintended behaviour.
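One way to make that constraint explicit is a compile-time check against the measured worst-case task time; the figures below are made up and would have to be measured on your own hardware:

```c
#include <assert.h>

/* Made-up figures: measure the worst-case path of every task on your target. */
#define TICK_PERIOD_US         1000u   /* scheduling tick of 1 ms          */
#define WORST_CASE_TASK_US      800u   /* slowest task on its slowest path */

/* Fails the build if the tick is shorter than the slowest task,
 * which is exactly the over-run situation described above. */
static_assert(TICK_PERIOD_US >= WORST_CASE_TASK_US,
              "tick period must be at least as long as the slowest task");
```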
If you have to reduce the tick time, it is often possible to make the tasks run in a shorter time. They can usually be re-written to perform the same function in fewer cycles, and I have had to do this many times in the past. For example, in my experience an If-Then-Else chain often takes fewer clock cycles to execute than the equivalent Switch-Case, even though the end result is the same (this depends on the compiler and target, so measure it).
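As a sketch of what that kind of rewrite looks like, here are two equivalent dispatchers; which one compiles to fewer cycles depends on the compiler and target, so profile both rather than taking either form on trust (the command codes and handlers are invented for the example):

```c
/* Two equivalent ways to map a command code to a handler. */

static int handle_start(void) { return 1; }
static int handle_stop(void)  { return 2; }
static int handle_reset(void) { return 3; }

int dispatch_switch(int cmd)
{
    switch (cmd) {
    case 0:  return handle_start();
    case 1:  return handle_stop();
    case 2:  return handle_reset();
    default: return -1;
    }
}

int dispatch_if(int cmd)
{
    if (cmd == 0)      return handle_start();
    else if (cmd == 1) return handle_stop();
    else if (cmd == 2) return handle_reset();
    else               return -1;
}
```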
I'm not familiar with MicroC or VxWorks, but they may have the ability to deal with the over-run or at least warn you of the problem.