Hi all, I'm trying to implement the SM2RAIN algorithm in Python to estimate precipitation in the USA from soil moisture observations. I started by gathering data at the county level.

I used " NASA_USDA/HSL/soil_moisture" for the SM observations, basically the SURFACE SOIL MOISTURE (ranges from 0 to 25mm), the temporal resolution is 3 days (the lowest available one)

For the precipitation, I used "IDAHO_EPSCOR/GRIDMET" daily observations and resampled them to 3-day accumulated values.
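
The resampling itself is done with pandas, roughly as below (the random series is only there to make the snippet self-contained; in my case `daily_pr` is the county-mean GRIDMET precipitation in mm/day):

```python
import numpy as np
import pandas as pd

# daily_pr: county-mean GRIDMET precipitation (mm/day) with a DatetimeIndex;
# random values here only so the snippet runs on its own
dates = pd.date_range('2016-01-01', periods=9, freq='D')
daily_pr = pd.Series(np.random.gamma(0.5, 5.0, size=len(dates)), index=dates)

# accumulate to 3-day totals so it matches the 3-day soil moisture series
pr_3day = daily_pr.resample('3D').sum()
print(pr_3day)
```

One thing I still need to double-check is that the 3-day accumulation windows start on the same dates as the soil moisture composites.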

I implemented the algorithm using the code freely available on GitHub: https://github.com/IRPIhydrology/sm2rain/blob/master/sm2rain/algorithm.py
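
For anyone not familiar with it, the core of SM2RAIN is the relation p(t) ≈ Z·ds/dt + a·s(t)^b, with s the relative saturation (0-1) and (a, b, Z) calibrated by minimising an error measure against observed rainfall. Below is a minimal sketch of that relation with an RMSE-based calibration; this is my own simplification, not the repo's exact code, and the starting values in the commented call are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

def sm2rain_ts(params, s, dt=1.0):
    """Estimate rainfall from a relative-saturation series s (values in 0-1)."""
    a, b, z = params
    ds = np.diff(s) / dt                          # soil moisture change per step
    loss = a * (0.5 * (s[:-1] + s[1:])) ** b      # drainage/losses between steps
    p_est = z * ds + loss
    return np.clip(p_est, 0.0, None)              # rainfall cannot be negative

def rmse_cost(params, s, p_obs):
    # p_obs[1:] aligns the observed totals with the n-1 estimated intervals
    return np.sqrt(np.mean((sm2rain_ts(params, s) - p_obs[1:]) ** 2))

# usage sketch, with s = normalized SSM and p_obs = 3-day accumulated rainfall:
# res = minimize(rmse_cost, x0=[10.0, 1.5, 50.0], args=(s, p_obs),
#                method='Nelder-Mead')
# a, b, z = res.x
```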

I normalized the surface soil moisture values and started with one county (320 observations).
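
By "normalized" I mean min-max scaling of the SSM band to relative saturation, roughly as below (one could also use the fixed 0-25 mm bounds from the dataset description instead of the sample min/max):

```python
import numpy as np

def to_relative_saturation(ssm):
    """Min-max scale raw surface soil moisture (mm) to 0-1 relative saturation."""
    ssm = np.asarray(ssm, dtype=float)
    return (ssm - np.nanmin(ssm)) / (np.nanmax(ssm) - np.nanmin(ssm))

# example: s = to_relative_saturation(ssm_series)
```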

I used Pearson's correlation coefficient (R) and RMSE as metrics for evaluating the model output.
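
The two metrics are computed along these lines:

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate(p_obs, p_sim):
    """Pearson correlation R and RMSE between observed and simulated rainfall."""
    p_obs = np.asarray(p_obs, dtype=float)
    p_sim = np.asarray(p_sim, dtype=float)
    r, _ = pearsonr(p_obs, p_sim)
    rmse = np.sqrt(np.mean((p_obs - p_sim) ** 2))
    return r, rmse
```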

The calibration was performed on the whole dataset.

PROBLEM:

The issue is that I'm getting a low R value (below 0.5), and the simulated precipitation always follows the same pattern as the soil moisture.

The simulated precipitation cannot capture the high values of the observed precipitation, as you can see in the attached figure (orange: observed precipitation, blue: simulated precipitation; only 30 points are shown in the graph).

Questions:

Did I choose the correct soil moisture variable?

Is it normal that the simulated precipitation has the same pattern as the soil moisture?

Any tips or suggestions are appreciated.

Thanks in advance.
