Priority is perhaps not a good abstraction for time scheduling. If an event is more important, give it a shorter sampling period and attend to it before any other one. We were thinking about this many years ago when designing RT-DESK (rtdesk.blogs.upv.es). Priority was an artificial concept that introduced a lot of noise; now everything is simpler in our implementation.
Your question needs to be rephrased for clarity. Please define what waiting time is. Is it the time a job must wait until it is scheduled? What is meant by efficient scheduling? There is no single answer to this question, so you will have to define it. Bringing more clarity to your question will help you.
The trick is to separate the logical behavior from the timing behavior. This includes not scheduling explicitly in the application (as that is unlikely to keep working on other hardware with different timing characteristics). The best, most general and simplest approach is still RMS (Rate Monotonic Scheduling), whereby priorities are assigned according to the execution frequency of the task. These priorities can be assigned at compile time, and the system will meet its deadlines in most cases without having to "adjust" for deviations. Time-outs can be used as well to take care of the occasional issue (typical case: I/O does not respond). What Ramon describes comes close to RMS, but it looks a bit more heuristic as described.
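To make the RMS idea concrete, here is a minimal sketch of rate-monotonic priority assignment plus the classic Liu & Layland utilization bound. The task names, periods and execution times are hypothetical, chosen only to illustrate that the shorter the period, the higher the fixed priority:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

typedef struct {
    const char *name;
    double period_ms;  /* shorter period = higher execution frequency */
    double wcet_ms;    /* worst-case execution time */
    int priority;      /* assigned by RMS: shorter period -> higher priority */
} task_t;

static int by_period(const void *a, const void *b) {
    const task_t *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

int main(void) {
    task_t tasks[] = {
        { "sensor_poll",   10.0,  2.0, 0 },   /* hypothetical task set */
        { "control_loop",  20.0,  5.0, 0 },
        { "logging",      100.0, 10.0, 0 },
    };
    int n = sizeof tasks / sizeof tasks[0];

    /* RMS: sort by period and fix the priorities once, "at compile time" */
    qsort(tasks, n, sizeof tasks[0], by_period);
    double u = 0.0;
    for (int i = 0; i < n; i++) {
        tasks[i].priority = n - i;   /* largest number = highest priority */
        u += tasks[i].wcet_ms / tasks[i].period_ms;
        printf("%-12s period=%6.1fms prio=%d\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }

    /* Liu & Layland bound: guaranteed schedulable under RMS if
     * U <= n*(2^(1/n) - 1). Above the bound, exact analysis is needed. */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", u, bound,
           u <= bound ? "guaranteed schedulable" : "needs exact analysis");
    return 0;
}
```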
One can of course also use a timer base and implement an application-specific timer scheduling list. This has some advantages, but it makes things a lot more rigid, and the only way to recover from blocking is to restart. Priority-based scheduling has a major advantage: one can guarantee that a higher-priority task will still run (e.g. one used to monitor the application and, if needed, take corrective action), and the scheduling is triggered by "events", which means it easily absorbs the unavoidable jitter that every system has (especially when using advanced processors, where cache, DMA, etc. make the timing behavior stochastic).
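For comparison, a minimal sketch of such an application-specific timer scheduling list is shown below. All names and the tick-driven main loop are assumptions for illustration, not any particular API; note how a callback that blocks stalls the whole list, which is the rigidity mentioned above:

```c
#include <stdio.h>
#include <stddef.h>

typedef void (*timer_cb)(void *arg);

typedef struct app_timer {
    unsigned long expires_at;   /* tick at which the timer fires */
    timer_cb cb;
    void *arg;
    struct app_timer *next;
} app_timer;

static app_timer *timer_list = NULL;

/* Insert keeping the list sorted by expiry time (earliest first). */
static void timer_add(app_timer *t) {
    app_timer **p = &timer_list;
    while (*p && (*p)->expires_at <= t->expires_at)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

/* Called from the main loop or a tick interrupt: fire everything due.
 * If a callback blocks, nothing behind it runs until a restart. */
static void timer_dispatch(unsigned long now) {
    while (timer_list && timer_list->expires_at <= now) {
        app_timer *t = timer_list;
        timer_list = t->next;
        t->cb(t->arg);
    }
}

static void blink(void *arg) { printf("blink %s\n", (const char *)arg); }

int main(void) {
    app_timer t1 = { 10, blink, "led1", NULL };
    app_timer t2 = {  5, blink, "led2", NULL };
    timer_add(&t1);
    timer_add(&t2);
    for (unsigned long tick = 0; tick <= 10; tick++)
        timer_dispatch(tick);   /* led2 fires at tick 5, led1 at tick 10 */
    return 0;
}
```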
I agree with Eric. We avoided this jitter by separating real time from simulation time and letting the whole application work only in simulation time, synchronized to real time. Sampling frequencies may be assigned at the beginning and changed later, during the simulation, on demand. The higher the sampling frequency an object in the system needs, the higher the "priority" it has. This is a self-regulated system that manages "priorities" implicitly. You can read more in some papers we wrote in the past; see my LinkedIn profile. If you need any clarification about the way it works, do not hesitate to ask.
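A minimal sketch of that scheme, as I understand it, follows. This is not the RT-DESK code; the object names, the fixed two-object setup and the busy-wait synchronization are assumptions for illustration. Each object carries its own sampling period in simulation time, the object with the earliest pending update is processed next (so a higher sampling frequency acts as an implicit priority), and updates are only released once the real-time clock has caught up with their simulation timestamp:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define MAX_OBJECTS 2

typedef struct {
    const char *name;
    double period_s;    /* sampling period: shorter = implicitly "higher priority" */
    double next_due_s;  /* next update time, in simulation time */
} sim_object;

/* Return the index of the object whose next update comes earliest. */
static int earliest(sim_object *objs, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (objs[i].next_due_s < objs[best].next_due_s)
            best = i;
    return best;
}

static double wall_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    sim_object objs[MAX_OBJECTS] = {
        { "fast_object", 0.010, 0.0 },   /* 100 Hz: updated 10x more often */
        { "slow_object", 0.100, 0.0 },   /* 10 Hz */
    };
    double start = wall_seconds();

    while (1) {
        int i = earliest(objs, MAX_OBJECTS);
        if (objs[i].next_due_s > 0.5)    /* stop the demo after 0.5 s of sim time */
            break;
        /* Synchronize simulation time to real time before processing. */
        while (wall_seconds() - start < objs[i].next_due_s)
            ;  /* busy-wait for clarity; a real system would sleep or do other work */
        printf("t=%.3f update %s\n", objs[i].next_due_s, objs[i].name);
        objs[i].next_due_s += objs[i].period_s;   /* period can be changed on demand */
    }
    return 0;
}
```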