For example... if the disk head is at cylinder 100 and the pending requests are at 55, 58, 39, 18, 90, 160, 150, and 184, then the rotational delay is different for 55, different for 58, and so on..
As I recall, disk rotational delay (or latency) is usually specified as 1/2 the disk rotation time, on the assumption that this is the average time required to move the requested disk sector under the read/write head. In reality, many factors can affect the actual rotational delay of an individual disk request. For example, other potential device and requester delays can affect the minimum time between completing a preceding request and becoming ready to execute the next one. And depending on the location of the disk segment being requested and the position of the R/W head at that moment, there could be almost zero rotational delay, or the delay may be nearly equal to the full device rotation time.
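Just to put numbers on that half-rotation convention, here's a minimal sketch (assuming a hypothetical 7200 RPM drive; the actual delay for any single request can land anywhere between zero and a full rotation):

```python
# Average rotational latency is conventionally taken as half a rotation.
# The 7200 RPM figure is an illustrative assumption, not from the thread.
rpm = 7200
rotation_time_ms = 60_000 / rpm                 # one full rotation: ~8.33 ms
avg_rotational_delay_ms = rotation_time_ms / 2  # conventional average: ~4.17 ms
print(f"Full rotation: {rotation_time_ms:.2f} ms, "
      f"average rotational delay: {avg_rotational_delay_ms:.2f} ms")
```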
In modern cached disk systems it's more likely, especially when the requested disk segment is already in cache memory, that no actual rotational delay will be required to read (and in some conditions write) the data to/from cache memory. In some configurations, however, all write requests may require rotational delays, while most read requests will not... Also see http://en.wikipedia.org/wiki/Rotational_latency#Rotational_latency.
Apart from the average delay, how can we calculate the degree of rotation and, from it, the rotational delay? I am working on improving disk scheduling using fuzzy logic, and my two parameters are Seek Distance and Rotational Delay.
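If you could model the angular position of the target sector and of the head (a big "if", per the replies below), the delay would just be the forward angular distance divided by the rotation speed. A purely hypothetical sketch, with made-up angles and RPM:

```python
# Hypothetical model: rotational delay from angular positions.
# Assumes we somehow know the head's current angle and the target
# sector's angle -- information a host usually cannot obtain.
def rotational_delay_ms(head_angle_deg, target_angle_deg, rpm=7200):
    rotation_time_ms = 60_000 / rpm
    # The platter rotates one way only, so measure the forward distance.
    degrees_to_wait = (target_angle_deg - head_angle_deg) % 360
    return (degrees_to_wait / 360) * rotation_time_ms

# Head at 90 degrees, target sector at 45 degrees: 315 degrees must
# pass under the head first, nearly a full rotation.
print(rotational_delay_ms(90, 45))  # ~7.29 ms at 7200 RPM
```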
Assuming you're working on I/O scheduling at the disk controller level, where it's been determined that a request cannot be satisfied entirely from cache, there is a third physical component of I/O service time: data transfer time. In some applications the disk sector size or data block size may be fixed, but there may be methods of chaining sequential I/O requests together that still produce a variable data transfer time per request. In these cases, data transfer time can exceed seek time. Seek time itself generally cannot be determined in advance unless the current R/W head position is available.
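As a toy decomposition of that three-part service time (all figures below are illustrative assumptions, not measurements):

```python
# Toy model: disk service time as seek + rotational + transfer.
# The 100 MB/s transfer rate and all inputs are assumed values.
def service_time_ms(seek_ms, rotational_ms, bytes_to_transfer,
                    transfer_rate_mb_s=100):
    bytes_per_ms = transfer_rate_mb_s * 1000
    transfer_ms = bytes_to_transfer / bytes_per_ms
    return seek_ms + rotational_ms + transfer_ms

# A large chained request: here the 20 ms transfer dominates the seek.
print(service_time_ms(seek_ms=3.0, rotational_ms=4.17,
                      bytes_to_transfer=2_000_000))  # ~27.2 ms
```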
I'm not aware of any method for determining which sector is currently positioned under the R/W head. Perhaps a controller could reliably maintain that information, but I suspect that by the time software could order an I/O queue by optimal rotational delay, the queue would likely require reoptimization...
My background includes performance analysis more than 15 years ago, so things change. In my past experience, however, I/O service times are extremely sensitive to queue depth: once an I/O queue forms, optimizing the queue service order may become futile, since the primary determinant of service time tends to be queue position.
Ordering a disk I/O queue by head position (when performed by the disk device controller) should minimize the mechanical service time across all requests. IMO, there's likely little to be gained by ordering a queue by rotational delay or even data transfer time, unless large data requests can be deferred 'indefinitely' in favor of shorter ones.
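For what it's worth, ordering by head position greedily is the classic shortest-seek-time-first (SSTF) policy. A minimal sketch, applied to the numbers from the original question (head at 100):

```python
# Minimal SSTF sketch: repeatedly service the pending request closest
# to the current head position. Request list is from the question above.
def sstf_order(head, requests):
    pending = list(requests)
    order = []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

print(sstf_order(100, [55, 58, 39, 18, 90, 160, 150, 184]))
# -> [90, 58, 55, 39, 18, 150, 160, 184]
```

Note this sketch only models seek distance; it says nothing about rotational position, which (as discussed above) the requestor generally can't know anyway.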
I've been discussing queueing at the disk controller level for two reasons. First, IMO there's little to be gained by any one of several requestors attempting to optimize its internal I/O queues, since there is likely another queue at the disk controller, and it's impossible for a requestor to maintain accurate information about disk head position, much less rotational position.
Second, in a cached controller (and I presume all are cached these days), the fastest I/O response times are provided for requests that can be satisfied from the cache, and a requestor can't know which requested data is in a controller's cache.
That's about all I can remember - hopefully I haven't misinformed. If so - anyone please correct me. Best wishes!