Every paper I have read on statistical modelling of video frame sizes (at the packet level) investigates each frame type separately. That leads to a different statistical model being fitted for each frame type, I, P and B, of the same sample trace. However, video encoding, H.264 in particular, places the three frame types in a fixed pattern within each GOP, so their sizes are closely related. Isn't that a reason to expect them to share enough statistical structure to justify a single, common distribution?
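
To make the comparison I have in mind concrete, here is a minimal sketch (not a claim about any particular paper's method): fit one distribution per frame type versus one common distribution over all frames, and compare the fits by AIC. The trace file name and the columns frame_type and size are hypothetical placeholders, and the lognormal is just one candidate distribution.

```python
# Minimal sketch: per-frame-type fits vs. a single common fit, compared by AIC.
# Assumes a CSV trace with hypothetical columns "frame_type" (I/P/B) and
# "size" (frame size in bytes); "frame_trace.csv" is a placeholder name.
import pandas as pd
from scipy import stats

trace = pd.read_csv("frame_trace.csv")

def lognorm_aic(sizes):
    """Fit a lognormal with loc fixed at 0 and return its AIC (2 free parameters)."""
    params = stats.lognorm.fit(sizes, floc=0)        # (shape, loc=0, scale)
    loglik = stats.lognorm.logpdf(sizes, *params).sum()
    return 2 * 2 - 2 * loglik

# Separate model per frame type (the approach taken in the papers I cite).
aic_separate = sum(lognorm_aic(g["size"].to_numpy())
                   for _, g in trace.groupby("frame_type"))

# One common model for all frames (the hypothesis in my question).
aic_common = lognorm_aic(trace["size"].to_numpy())

print(f"AIC, separate I/P/B fits: {aic_separate:.1f}")
print(f"AIC, single common fit:   {aic_common:.1f}")
```

If the common fit's AIC were comparable to the sum of the separate fits, that would support the idea that the GOP-induced dependence lets one distribution (perhaps with frame-type-dependent parameters) describe all three frame types.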