I am setting up an experiment to estimate the accuracy of different interpolation algorithms for generating a spatially continuous rainfall surface (grid) for a given area. The data density (number of points relative to the area) and the spatial arrangement of the points (i.e., random versus grid-based) will vary for each run.

The objective is to understand how each algorithm performs under varying data density and spatial configuration.

Typically, studies have done this using station data of varying density and spatial configuration. In the current context, stations are limited (only about two), so the intent is to run the experiment using points sampled (extracted) from an existing regional rainfall grid, while varying both the data density and the spatial configuration.
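To make the sampling step concrete, here is a minimal Python sketch of what I mean (numpy only; the synthetic grid, point counts, and function names are placeholders for illustration, not my actual data or code):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_points(rain, n_points, scheme="random"):
    """Extract roughly n_points (col, row, value) samples from a rainfall grid.

    scheme="random"  -> points drawn uniformly at random over the grid
    scheme="regular" -> points on an (approximately) regular lattice
    """
    ny, nx = rain.shape
    if scheme == "random":
        rows = rng.integers(0, ny, n_points)
        cols = rng.integers(0, nx, n_points)
    else:  # approximately regular lattice with about n_points nodes
        step = max(1, int(np.sqrt(ny * nx / n_points)))
        rows, cols = np.meshgrid(np.arange(0, ny, step),
                                 np.arange(0, nx, step), indexing="ij")
        rows, cols = rows.ravel(), cols.ravel()
    return np.column_stack([cols, rows, rain[rows, cols]])

# Placeholder grid; in the real experiment this would be the regional rainfall grid.
rain = rng.gamma(2.0, 5.0, size=(100, 100))
sparse_random = sample_points(rain, 25, scheme="random")
dense_regular = sample_points(rain, 400, scheme="regular")
```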

Note that I cannot simply generate random values, because the kriging is to be implemented in a multivariate form using rainfall covariates; random values would destroy the relevance of those covariates.

I ran a quick test and found that, despite wide differences in density and configuration, there was no significant difference in accuracy based on the cross-validation results. What is going on? This is counter-intuitive to me!
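For reference, my accuracy check follows the usual leave-one-out pattern. Here is a minimal self-contained sketch of that step, with IDW standing in for the kriging (which would be swapped in the same way) and placeholder values instead of the real grid samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def idw(xy_known, z_known, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at a single target location."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return np.sum(w * z_known) / np.sum(w)

def loocv_rmse(xy, z):
    """Leave-one-out RMSE: hold out each point, predict it from the rest."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    return np.sqrt(np.mean(np.square(errs)))

# Placeholder point sets; in the real experiment these come from the grid samples.
xy_sparse = rng.uniform(0.0, 100.0, size=(25, 2))
z_sparse = rng.gamma(2.0, 5.0, size=25)
xy_dense = rng.uniform(0.0, 100.0, size=(400, 2))
z_dense = rng.gamma(2.0, 5.0, size=400)
print(loocv_rmse(xy_sparse, z_sparse), loocv_rmse(xy_dense, z_dense))
```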

Can you identify anything potentially wrong with this design? Theoretically, is there anything about spatial dependency (e.g., autocorrelation between the sampled points and the source grid) that could bias the result? More generally, what might be flawed in the design, and how would you explain this outcome?

Thanks for your thoughts.

Sorry for the long text.
