The accuracy of satellite-based rainfall products is typically assessed using categorical and continuous statistical validation techniques. These techniques are fairly standardized, so researchers can compare results across studies through common validation indices. However, there is a major concern over how datasets are prepared for validation. Data preparation is an unnarrated, or at best vaguely described, part of most research methodologies. If different researchers follow different steps to prepare the same dataset for validation, then their results may not be comparable, even though they report the same validation indices.

For instance, the TMPA precipitation product provides hourly rain rates at 3-hour intervals. These rain rates can be compared with in-situ rainfall in two ways: by comparing the instantaneous rain rate at the nominal time slot, or by relating 3-hourly averaged in-situ rainfall to the TMPA values. In my view, these two approaches should give different results, and such results are not comparable. This is just one example; other steps in data preparation raise similar questions, such as how to take precipitation averages and how to calculate correlation coefficients (should they be computed for rainy days only, or should false alarms and misses also be included?), and many more. In view of all the above, I think there is a dire need to standardize dataset-preparation procedures before validation.
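To make the concern concrete, here is a minimal sketch of the two pairing strategies and the two correlation choices described above. All the data here are simulated (the gauge series, the satellite-like series, and the noise model are my assumptions, not real TMPA or in-situ data); the point is only that the preparation choice changes the resulting statistic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example: 30 days of hourly in-situ (gauge) rainfall in mm/h,
# with roughly 80% dry hours.
hours = 30 * 24
gauge = np.where(rng.random(hours) < 0.8, 0.0, rng.gamma(2.0, 1.5, hours))

# Hypothetical satellite-style series at 3-hourly steps (TMPA-like rain
# rates), simulated here as a noisy version of the gauge record.
sat_3h = gauge[::3] * rng.normal(1.0, 0.3, hours // 3).clip(0)

# Strategy A: pair each satellite value with the instantaneous gauge
# value at the nominal time slot.
gauge_inst = gauge[::3]

# Strategy B: pair each satellite value with the 3-hourly gauge average.
gauge_avg = gauge.reshape(-1, 3).mean(axis=1)

def pearson(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

r_inst = pearson(gauge_inst, sat_3h)
r_avg = pearson(gauge_avg, sat_3h)

# Second choice: correlation over rainy pairs only (both values > 0),
# i.e. excluding misses and false alarms, versus over all pairs.
rainy = (gauge_avg > 0) & (sat_3h > 0)
r_rainy = pearson(gauge_avg[rainy], sat_3h[rainy])

print(f"r (instantaneous pairing): {r_inst:.3f}")
print(f"r (3-hourly averages):     {r_avg:.3f}")
print(f"r (rainy pairs only):      {r_rainy:.3f}")
```

Even on synthetic data like this, the three correlation coefficients generally differ, which is exactly why unreported preparation steps make published indices hard to compare.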

Note: I am quite new to this and currently learning validation processes. If standardized procedures for dataset preparation already exist, then kindly guide me through those steps.
