If I had to answer in a single word, the short answer would be NO. But there are plenty of details.
In a reconstructed phase map the phase is wrapped modulo 2*pi. Two flat surfaces with sharp borders, one at a depth giving 2*pi of phase and the other giving 4*pi, would be indistinguishable, no matter which unwrapping algorithm (on the object) you use. The information is limited: you cannot know how many turns a wheel has made over 100 m just from its final position.
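A minimal numeric illustration of this ambiguity (my own sketch, not from the answer): wrap two step profiles of depth 2*pi and 4*pi and see that they become identical.

```python
import numpy as np

# Two flat surfaces with a sharp step: one of phase depth 2*pi, one of 4*pi.
x = np.linspace(0.0, 1.0, 100)
step_2pi = np.where(x > 0.5, 2 * np.pi, 0.0)
step_4pi = np.where(x > 0.5, 4 * np.pi, 0.0)

def wrap(phase):
    """Wrap a phase map into (-pi, pi], as a reconstruction would give it."""
    return np.angle(np.exp(1j * phase))

# After wrapping, the two surfaces are indistinguishable.
print(np.allclose(wrap(step_2pi), wrap(step_4pi)))  # True
```

No unwrapping algorithm can separate the two cases from this data alone, which is exactly the point made above.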
The only way is to break the ambiguity with more information. Say that you use two wheels with different radii. Then, for a given combination of wheel positions, the number of possibilities is greatly reduced. This is the equivalent of using two wavelengths. The unambiguous range depends on the relative difference of the lambdas. Using many lambdas you can enormously increase the range. This is equivalent to the 3D digitizers that project fringes with different periods and achieve meters of range with sub-mm accuracy.
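The two-wavelength idea can be made concrete with the synthetic (beat) wavelength, Lambda = lambda1*lambda2/|lambda1 - lambda2|, which sets the new unambiguous range. The wavelength values below are illustrative, not from the answer:

```python
# Two example laser wavelengths (assumed values for illustration).
l1 = 532e-9  # m
l2 = 633e-9  # m

# Synthetic wavelength: the closer the two lambdas, the larger the range.
synthetic = l1 * l2 / abs(l1 - l2)
print(synthetic)  # ~3.33e-6 m, vs ~0.5e-6 m for a single wavelength
```

With many wavelengths (or projected fringe periods), this range can be extended further still, which is the principle behind the multi-period fringe digitizers mentioned above.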
All of the above refers to a phase map (or a few of them at different lambdas). You could use other information. In his answers, Davood Khodadad refers to a method using defocusing. This implicitly uses the fact that holograms exhibit parallax: you have several views of the object (assuming the number of pixels in the hologram is significantly larger than in the imaged part of the object). This extra information could be used by itself for better unwrapping.
Nevertheless, I'm unaware of a method using just this information, without needing multiple lambdas. This could be a good subject for research :-)
By the way, if you have a continuous phase (no regions separated by steps from the rest), try any of the well-known unwrapping algorithms, like Flynn's (google "flynn unwrap" to find details and even code).
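For a quick feel of why the continuous case is easy, here is a 1D sketch using numpy's built-in Itoh-style unwrapper (Flynn's method is a more robust 2D algorithm; this is just the simplest stand-in):

```python
import numpy as np

# A smooth phase ramp with no steps, as in the "continuous phase" case.
true_phase = np.linspace(0.0, 6 * np.pi, 200)

# The reconstruction only gives us the wrapped version in (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# For smooth phase (neighbor differences < pi), simple unwrapping recovers
# the original up to a constant 2*pi offset.
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped - unwrapped[0], true_phase - true_phase[0]))  # True
```

The same guarantee fails as soon as the surface has steps larger than half a wavelength between neighboring pixels, which is where the multi-wavelength methods above come in.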
We did something similar with the help of speckle displacements. I think you can guide your unwrapping algorithm with the slopes and gradients that are extracted from the speckle displacements and then integrated. A summary of the method for a single hologram is as follows:
1. Keep the original single hologram as a reference, H1.
2. Refocus the hologram to an arbitrary distance (this distance should be within the longitudinal speckle size), giving H2.
3. Correlate H1 and H2 and obtain the speckle displacements.
4. Use Equation 7 in the attached paper. Here you have only one wavelength, so instead of k/(delta k), use 1.
Now tan(theta) gives you the surface slopes and gradients. You can guide your unwrapping algorithm with tan(theta), or, instead of using the phase, you can use tan(theta) directly, as they provide the same information.
To see how we have guided the algorithm with tan(theta), you can take a look at another paper that we published at :
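Step 3 above (correlating H1 with the refocused H2) is essentially a speckle-displacement measurement. A minimal sketch of that measurement, using my own synthetic data and FFT cross-correlation rather than the authors' code:

```python
import numpy as np

# Synthetic stand-ins: h1 plays the role of a speckle subimage of H1,
# h2 the corresponding subimage of the refocused hologram H2, here
# simulated as a known circular shift of h1.
rng = np.random.default_rng(0)
h1 = rng.standard_normal((64, 64))
known_shift = (3, 5)
h2 = np.roll(h1, known_shift, axis=(0, 1))

# Circular cross-correlation via FFT; the peak location gives the
# speckle displacement between the two images.
corr = np.fft.ifft2(np.fft.fft2(h1).conj() * np.fft.fft2(h2)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)  # 3 5
```

In practice this is done per subimage over the field, so that the displacement map (and hence tan(theta)) varies across the object, giving the slope field that guides the unwrapping.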
And what if only a single wavelength is available? I was about to ask a similar question, so I will be happy to hear of a solution: an efficient algorithm for monochromatic light.
We are going to publish a new paper that I think will clearly solve the problem. I somewhat answered the question before: if you propagate the recorded image and correlate it with the original one, the relation is given as A = 2*M*L*tan(theta), where:
- A is the speckle displacement
- M is the lateral magnification
- L is the propagation distance
Therefore tan(theta) is the surface gradient.
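The relation is easy to invert for the slope: tan(theta) = A / (2*M*L). The numbers below are assumed values for illustration only, not from the forthcoming paper:

```python
# Invert A = 2*M*L*tan(theta) for the local surface gradient.
A = 12e-6   # measured speckle displacement, m (assumed value)
M = 1.0     # lateral magnification (assumed)
L = 3e-3    # propagation / refocus distance, m (assumed)

tan_theta = A / (2 * M * L)
print(tan_theta)  # 0.002, i.e. a slope of 2 mrad-scale gradient
```

Repeating this at every point of the speckle-displacement map yields a gradient field that can then be integrated into a height map, or used directly to guide the unwrapping as described above.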
I will come back to you with more details of the calculation when we publish the paper.