All the reference models I've been looking at are trained on a density range of 0.6-1.6 g/cm3, but we're looking to identify values around 19.3 g/cm3. We're currently using synthetic prism training while we georeference and enhance our real-world dataset. We also have a sampling-interval mismatch: the reference source generates its synthetic data at a 50 m interval, while our surveys have a 3-5 m spatial resolution.

For apparent density mapping of an undulant layer, would the wavenumber-domain iterative approach or the CNN approach be better in terms of scaling and apparent density adjustment? We're currently using the CNN methodology and it looks more efficient at delivering results, but we want the best possible approach.

Finally, since the goal is to discriminate and isolate these higher density values in the subsurface, can anyone validate whether an identification system based on classification filtering would be effective for that?
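For context, our synthetic prism generation is built on the standard closed-form expression for the vertical gravity of a rectangular prism. The sketch below is illustrative, not our production code: the geometry, density values, and function name are assumptions made up for this example.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical gravity (m/s^2) at the origin from a rectangular prism.

    Standard closed-form prism solution: z positive down, observation
    point at the origin, prism entirely below it (all z bounds > 0).
    rho is the density contrast against the host, in kg/m^3.
    """
    gz = 0.0
    for i, x in enumerate((x1, x2), start=1):
        for j, y in enumerate((y1, y2), start=1):
            for k, z in enumerate((z1, z2), start=1):
                mu = (-1.0) ** (i + j + k)  # alternating corner signs
                r = np.sqrt(x * x + y * y + z * z)
                gz += mu * (z * np.arctan(x * y / (z * r))
                            - x * np.log(r + y)
                            - y * np.log(r + x))
    return G * rho * gz

# Hypothetical target: a 10 m cube of ~19.3 g/cm3 material in a
# ~1.6 g/cm3 host, 1 km below the station (contrast ~17,700 kg/m^3).
gz = prism_gz(-5, 5, -5, 5, 995, 1005, 19300.0 - 1600.0)
print(gz)  # ~1.2e-9 m/s^2, i.e. roughly 0.12 microGal
```

One thing this makes concrete: the forward response scales with the density contrast against the host (roughly 19.3 - 1.6 = 17.7 g/cm3 here), not with the absolute density, so a small target produces a very small anomaly even at this extreme contrast.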
Wavenumber-domain iterative approach: https://www.researchgate.net/publication/328079462_A_Wavenumber-domain_Iterative_Approa[…]_Undulant_Layer_and_Its_Application_in_Central_South_China
CNN Approach:
https://environmental-fns8166.slack.com/files/U0581MH6GSX/F05KAV909HQ/model_paper.pdf
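To make the sampling-interval mismatch concrete: at the moment we upsample the 50 m reference grids to survey resolution roughly as sketched below (grid size, zoom factor, and interpolation order are illustrative assumptions, not our actual pipeline). Since interpolation can't add wavenumber content beyond the 50 m Nyquist, part of the question is whether the synthetics should instead be regenerated directly at 3-5 m.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical synthetic anomaly grid at the reference 50 m interval:
# 40 x 40 cells covering 2 km x 2 km.
rng = np.random.default_rng(0)
coarse = rng.normal(size=(40, 40))

# Resample 50 m -> 5 m (factor of 10) to match the survey resolution.
# order=1 is bilinear; cubic (order=3) can ring near sharp anomalies.
fine = zoom(coarse, 50 / 5, order=1)

print(coarse.shape, fine.shape)  # (40, 40) (400, 400)
```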