I am using the Laplace approximation to approximate a distribution. To do this, I find the minimizer of a function (the negative log density), which becomes the mean of the target distribution, and the inverse of the Hessian matrix at that point becomes its covariance matrix. Because the function I use is very complex, it is difficult to derive the Hessian analytically, so I approximate it numerically (essentially by taking finite differences of the gradient), with a step size of 1e-5.
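For concreteness, here is a minimal sketch of the setup (the quadratic `neg_log_post` is just a stand-in with a known Hessian, not my actual function, and `numerical_hessian` is a generic central-difference scheme, which may differ in detail from my implementation):

```python
import numpy as np

def numerical_hessian(f, x, h=1e-5):
    """Central-difference estimate of H[i, j] = d^2 f / (dx_i dx_j)."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return 0.5 * (H + H.T)  # symmetrise away round-off asymmetry

# Stand-in for the real negative log density: a quadratic whose Hessian is A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])

def neg_log_post(x):
    return 0.5 * x @ A @ x

mode = np.zeros(2)                      # minimizer = mean of the Laplace approximation
H = numerical_hessian(neg_log_post, mode)
cov = np.linalg.inv(H)                  # covariance of the Laplace approximation
print(H)                                # close to A
print(cov)                              # close to inv(A)
```

For a quadratic like this, the central-difference formula is exact up to round-off, so the estimate barely depends on `h`; the trouble described below only appears with my real function.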
The result I get is a little odd: the Hessian matrix has entries of very large magnitude, which produces a distribution with almost no tails. I then tried to approximate the Hessian with larger step sizes (ranging from 2e-5 to 1e-2, all of which are still fairly small for my problem). The resulting Hessian varies very significantly with the step size: the largest estimate is six to seven orders of magnitude bigger than the smallest.
I think the reason could be that the function itself is too sensitive to small changes in the parameters (one particular parameter especially). If I perturb that parameter by a very small increment, the function value fluctuates with very high magnitude; but if I perturb it by a relatively larger increment, the function looks much smoother (which is the trend I would expect).
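The behaviour I am describing can be reproduced with a toy 1-D example (hypothetical, standing in for my real function): a smooth quadratic with curvature 1, plus a tiny high-frequency wiggle playing the role of the roughness. The second-difference estimate of the curvature is accurate for large steps but blows up once the step size approaches the wiggle's scale:

```python
import numpy as np

# Smooth part has curvature exactly 1; the wiggle has amplitude 1e-8 but
# period ~1e-6, so its own second derivative reaches ~1e-8 / (1e-6)^2 = 1e4.
amp, period = 1e-8, 1e-6

def f(x):
    return 0.5 * x**2 + amp * np.sin(x / period)

def second_diff(x, h):
    """Central second difference: (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

x0 = 0.3
for h in [1e-2, 1e-3, 1e-4, 1e-5]:
    print(f"h = {h:.0e}  ->  f'' estimate = {second_diff(x0, h):.4g}")
```

The wiggle injects noise of magnitude up to 4*amp/h² into the estimate: about 4e-4 at h = 1e-2 (negligible, so the estimate is essentially the smooth curvature 1), but up to a few hundred at h = 1e-5. A large step effectively averages over the roughness, which is why the large-increment plot looks smooth.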
In this case, what should I do to obtain a Hessian matrix that reflects the smooth, large-increment behaviour (the second figure) and avoids the noisy, small-increment one (the first)?