A planar phase front and a spherical phase front may produce different focal-point positions for an ideal micro-lens (one with minimal aberration). This effect can become drastically stronger for an incident distorted wavefront.
Strictly speaking, the focal point of a lens is defined, in the geometrical-optics approximation, as the point where a plane wave is collected into a point. So the focal point of a lens is a constant. If the lens is used to transform a wavefront other than a plane wave, then in general that wavefront is collected not into a point but into a region of nonzero size. The position of the minimum region occupied by the light may not coincide with the focal plane.
I think different symmetrical wavefronts can have the same focal-point position when they are incident on a convex lens parallel to its main axis. The difference, however, can be observed in the size of the focal area: a planar wavefront can have a bigger focal area compared to a converging wavefront in a system with fixed lens and screen positions.
Imagine a test situation. First, we expose a convex lens to a converging wavefront and move the screen along the main lens axis to find the focal point. If we then reduce the radius of curvature of the wavefront, the focal area stays at its previous position but has a bigger radius.
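The screen-position experiment above can be put into numbers with the thin-lens vergence relation. A minimal sketch, assuming a thin ideal lens and using vergences (inverse distances in dioptres) that simply add at the lens; all numbers are illustrative:

```python
# Thin-lens vergence sketch: where a wavefront of given curvature comes to
# focus behind a lens of power P = 1/f. Vergences (in dioptres) simply add
# at a thin lens: V_out = V_in + P. All values below are illustrative.

def focus_distance(f_lens, wavefront_radius=None, converging=False):
    """Distance behind the lens at which the beam comes to focus.

    f_lens           : lens focal length in metres
    wavefront_radius : radius of curvature of the incident wavefront in
                       metres (None means an incident plane wave)
    converging       : True if the incident wavefront is converging
    """
    if wavefront_radius is None:
        v_in = 0.0                       # plane wave: zero vergence
    else:
        v_in = (1.0 if converging else -1.0) / wavefront_radius
    return 1.0 / (v_in + 1.0 / f_lens)

print(focus_distance(0.1))                                          # plane wave: focus at f
print(focus_distance(0.1, wavefront_radius=0.5, converging=True))   # closer than f
print(focus_distance(0.1, wavefront_radius=0.5, converging=False))  # farther than f
```

This matches the observation in the thread: with the lens and screen fixed, changing the curvature of the incident wavefront moves the plane of best focus.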
The position of focal area depends on lens shape and wavefront shape!
A Shack–Hartmann (or Hartmann–Shack) wavefront sensor (SHWFS) is an optical instrument used to characterize an imaging system. It is a wavefront sensor commonly used in adaptive optics systems. It consists of an array of lenses (called lenslets) of the same focal length. Each is focused onto a photon sensor (typically a CCD array or quad-cell). The local tilt of the wavefront across each lens can then be calculated from the position of the focal spot on the sensor. Any phase aberration can be approximated by a set of discrete tilts. By sampling an array of lenslets, all of these tilts can be measured and the whole wavefront approximated. (From http://en.wikipedia.org/wiki/Shack%E2%80%93Hartmann_wavefront_sensor)
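The spot-position measurement described in that quote reduces, for small angles, to slope ≈ spot displacement / lenslet focal length. A minimal sketch, assuming small tilts and an illustrative 5 mm lenslet focal length (not taken from any particular sensor):

```python
import numpy as np

# Minimal sketch (small-angle model) of how an SHWFS converts measured spot
# displacements back into local wavefront slopes. All names and values are
# illustrative, not from any particular SHWFS software.

LENSLET_FOCAL_LENGTH = 5e-3   # assumed 5 mm lenslets

def local_slopes(spot_positions, reference_positions, f=LENSLET_FOCAL_LENGTH):
    """Wavefront slope (radians) over each lenslet from spot shifts.

    A local tilt theta displaces the focal spot by dx = f * tan(theta),
    so for small angles theta ~ dx / f.
    """
    deltas = np.asarray(spot_positions) - np.asarray(reference_positions)
    return deltas / f

# Example: a spot shifted by 10 micrometres on the detector
print(local_slopes([10e-6], [0.0]))   # about a 2 milliradian local tilt
```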
Focal distance and focal plane are ideal paraxial concepts that hold only if the object distance is large enough and the incidence angle on the lens is typically less than 12°. Otherwise you will have optical aberrations. The most typical is spherical aberration; it depends on the lens aperture and on the object position.
In a Hartmann–Shack sensor you have a set of lenslets. Since the detector area is limited and area-crossing should not happen, we are in almost paraxial conditions. However, if the wavefront is tilted too much, you will leave the paraxial zone and suffer from aberrations. Again, the first one will be spherical, and if the tilt is strong you will also suffer from astigmatism and field curvature.
Therefore you may reformulate your problem as a search for the best-image point.
The question is simple, but the answer is quite complicated. In my view, it depends on the properties of the beam being focused and on the setup geometry. All the cases described above concern, I think, a planar wavefront close to the optical axis, from theoretical considerations. But what kind of wavefront should we actually expect from a real beam? Is it really an ideal plane wave?
The shape of the phase front will change after passing through the lens because of the lens's non-uniform thickness. This follows from Fermat's principle, which states that "the light always takes the path that makes the transit time a minimum". For example, a planar wavefront is curved toward the focal point, and an approximate expression for the corresponding field just to the right of the lens is shown in the image below.
Actually, I think it is not so simple. The transformation of the wavefront is more complicated. What you obtain, you obtain from the pattern; think more about how these patterns arise from interference.
The focal point of an ideal lens varies with the phase front of the incident beam.
When we speak of the focal length of a lens, we mean that when a parallel wavefront is incident on the lens, the rays come to a focus at a distance of one focal length from the lens. In the paraxial approximation we have the relation 1/f = 1/u + 1/v, where u and v are the object and image distances. From this equation it is clear that when the location of the object plane varies, so does the location of the image plane. The magnification of the system is M = v/u; this controls the size of the spot at the image plane.
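The relations quoted here (1/f = 1/u + 1/v and M = v/u) can be checked numerically. A minimal sketch, with all distances positive and in metres:

```python
# Sketch of the paraxial relations above: 1/f = 1/u + 1/v and M = v/u.
# All distances are positive and in metres; the values are illustrative.

def image_distance(f, u):
    """Image distance v satisfying 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / f - 1.0 / u)

def magnification(u, v):
    """Lateral magnification M = v/u, which scales the spot size."""
    return v / u

f = 0.1                               # a 100 mm lens
for u in (1.0, 0.5, 0.3):             # moving the object plane closer...
    v = image_distance(f, u)
    print(u, v, magnification(u, v))  # ...moves the image plane beyond f
```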
One additional point: the above formula holds only in the paraxial approximation, where u and v are large. In diffractive optics we often work with smaller f-numbers. In such cases, even in the ideal case, one cannot use a lens (e.g. a Fresnel zone lens) designed to focus a parallel wavefront for focusing a diverging wavefront, as it introduces aberrations.
The aberration will definitely increase or decrease depending on the quality of the incident wavefront.
I guess that Parviz is thinking of a Hartmann–Shack wavefront sensor and the influence of the local wavefront impinging on each microlens on the corresponding spot at the detection plane. If I am right about this, I have to say that, yes, the spots at the detection plane change with the impinging wavefront. This deformation depends strongly on the numerical aperture of the microlens.
Yes, Justo, you are right. Can you quantify this problem, or give us proper references on how to correlate the incident phase front with the final location of the focal point?
I don't know exactly how to answer your question! There are a lot of variables involved. If the microlens aperture is small, you can suppose that the wavefront is close to plane and no spot deformation will be observed. Otherwise, if the local wavefront cannot be considered "plane", the axial and transverse movement of the spot will depend on the local tip/tilt and vergence of the wavefront.
However, as I don't know exactly why you are concerned about this displacement, I will say that the axial displacement is irrelevant for estimating the local wavefront slope.
The most general treatment is to consider a Fresnel transformation, say, when you are not exactly at the geometrical focal point. We assume that unextended ideal point sources are not available among classical sources; therefore, an exact plane wavefront is not applicable in the general case outside the geometrical-optics approximation. Defocusing can be treated as a monochromatic aberration.
" The movement of the spot axially and transversally will depend on the local tip/tilt and vergence of the wavefront."
Compared with a perfect plane wavefront, how much displacement do you estimate for a distorted wavefront? A few micrometres or more? How do you estimate it theoretically?
I am sorry to insist, but you have to drop the geometrical-optics description. Go to the lensmaker's formula, include an arbitrary increment in the focal distance, and then perform the Fresnel transformation as in the usual case. Obviously, the influence of this increment will turn into a new phase term. Check it; it is not that tedious, and it is very illustrative.
I found a simple, free program on the Internet (you can find and download it easily) called "Pintar Interactive Physics Virtualab Optics". It uses geometrical optics to show the output of a lens for different kinds of rays and, accordingly, different wavefronts.
I tested one glass lens in 3 different situations:
1) In front of a plane wavefront
2) In front of a tilted plane wavefront
3) In front of a spherical wavefront
I found the concentration areas of these rays on the horizontal axis at about 6.8, 6.5 and 10.2 respectively, as you can see in the picture.
Therefore it can be seen that different wavefronts have different concentration areas.
About the SH wavefront sensor, I think it is true that the concentration areas of the rays of different wavefronts lie at different positions (sometimes not even on the CCD), but the point is that there is always an intense area on the CCD as a clue for finding the local tilt or slope of the wavefront.
The analysis of these data is the responsibility of the software of SHWFS!
Fermat's principle and the Lagrange invariant are at play in an optical system. Optical systems are not deterministic for rotations about the optical axis.
Focusing is an interference effect. An ideal lens in this picture is a phase mask that changes the phases of an incoming plane wave such that constructive interference occurs at the focal point. It is therefore clear that wavefront distortions, which are equivalent to phase shifts, will change the interference pattern and therefore the focus, or more precisely the 3D field distribution in the focal area. If you have phase-front distortions before the lens, you can correct them using a spatial light modulator. Many experiments have been performed to demonstrate this.
Hi Bert, I agree that many experiments have shown corrections of the distortions in the near-field pattern. But do you measure the pattern's change at different places? The Lagrange invariant still takes effect whatever you use. Even though you see the on-axis spot, you have just moved the chirp introduced by the phase shift from the spatial domain to the temporal domain. But from one pattern alone we have no chance to see the change in the temporal domain, because of temporal statistical effects. This was first considered by Georg Friedrich Bernhard Riemann, then inherited by Henri Poincaré and David Hilbert, and finally introduced to all physicists by Albert Einstein. Their idea was not only that the world is not flat but curved, but also that motion is not translational but twisted. The more twisted it is, the further it lies from what Newtonian mathematics can predict. But what I mention above does not mean that I fully agree with what Einstein insisted on. His ideas are fully appreciated, but the theories look not as magnificent as the respect they have won.
Take a look at the Bessel functions of integer order, of the first and the second kind. They address the positive- and negative-frequency integrals. The definite integral of J1 converges in approximately three terms; the difference J1 − J2 thereafter remains constant.
Hi David, a good answer for an engineer, but maybe not for a physicist. In the beginning of this research, people used HG or LG functions to understand the mode changes. But with the advance of high-power lasers, people recognized that Bessel functions, or even higher-order functions, may describe them better. For me, the situation is quite similar to the period when Max Planck merged the two black-body radiation equations with his constant. What is the secret behind the Bessel function? Where does it come from?
The Bessel function appears in linear-systems theory as applied to an optical system, written as (2 J1(x)/x)^2 for the intensity distribution at one effective focal length. The linear intensity is (π/(4 λ · focal ratio))^2. There are an infinite number of "imaginary" (in fact real) poles along the optical axis between the geometrical source point and the image point. What is often misunderstood is that there is zero rotation information about each coordinate axis x and y, where z is the singular vector known as the optical axis.
An ideal lens focuses a beam to a Gaussian with a (half) width of m times the wavelength of the beam, where m is the number of transverse modes of the beam. A collimated beam (up to the diffraction limit) is considered a single-transverse-mode beam that can be focused to a spot of one wavelength. The number of transverse modes is related to the spatial coherence. I recommend my book, "The Physics of Moiré Metrology".
You are right. However, phase-front distortions are reversible and can be corrected.
You can calculate the effect of the phase-front distortion at the focal area from the Fourier transform of the phase distortion multiplied by the Gaussian.
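That Fourier-transform recipe can be sketched in one dimension: take a Gaussian pupil amplitude, multiply by exp(i·phase), and look at the intensity of its Fourier transform. A minimal illustrative sketch (the grid size, beam width and tilt strength are arbitrary assumptions) showing that a pure tilt only translates the focal spot:

```python
import numpy as np

# 1-D sketch of the recipe above: the focal-plane field of an ideal lens is
# (up to scaling) the Fourier transform of the pupil field, i.e. the Gaussian
# amplitude multiplied by exp(i * phase). Grid size, beam width and tilt
# strength are arbitrary assumptions.

N = 256
x = np.linspace(-1.0, 1.0, N)            # pupil coordinate (arbitrary units)
gaussian = np.exp(-(x / 0.3) ** 2)       # Gaussian beam amplitude

def focal_intensity(phase):
    pupil = gaussian * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
    return np.abs(field) ** 2

flat = focal_intensity(np.zeros(N))      # undistorted flat phase
tilted = focal_intensity(20.0 * x)       # a pure linear tilt across the pupil

# A tilt does not reshape the spot, it only translates it; total power is
# conserved (Parseval's theorem):
print(np.argmax(flat), np.argmax(tilted))
```

Replacing the tilt by a quadratic phase (defocus) would broaden and axially shift the spot instead of merely translating it, which is the curvature effect discussed earlier in the thread.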
In practice, the Shack-Hartmann is the typical instrument for wavefront sensing (WFS) of an incident beam. The wavefront can then be corrected using adaptive mirrors (sets of movable mirror segments) driven by feedback to the actuators. Can you suggest any other spatial modulator?
You can find information concerning wavefront correction using SLMs at the site http://holoeye.com, for example.
I used the LC-R1080 and Pluto with good results, but I think the "LETO Phase Only Spatial Light Modulator (Reflective)" will be better than the Pluto for your goals.
You can use just one SLM to build a Hartmann-Shack wavefront sensor and correct the aberrated wavefront. You can see some of my works as examples of such a realization.
The question you asked concerns the basis of the Hartmann and Shack-Hartmann wavefront sensors.
The Hartmann wavefront sensor consists of an array of apertures mounted at a distance from a charge-coupled device (CCD).
Figure 1 shows a schematic of the operation of the wavefront sensor. Light travels to the sensor along the z-axis. Each of the apertures acts like an optical lever, displacing the diffracted spot in proportion to the average phase tilt over the aperture. The wavefront sensor measures the tilt over each aperture by comparing the measured positions of the diffracted spots with the positions of the diffracted spots for a reference input beam. The tilt measurements are then converted into a replica of the wavefront by performing a form of integration called wavefront reconstruction.
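The "wavefront reconstruction" integration step can be illustrated in one dimension by cumulatively integrating per-aperture slopes back into heights. Real sensors use 2-D least-squares (zonal or modal) reconstructors; this toy sketch, with assumed pitch and slope values, only shows the idea:

```python
import numpy as np

# Toy 1-D sketch of the "wavefront reconstruction" step described above.
# Real sensors use 2-D least-squares (zonal or modal) reconstructors; the
# aperture pitch and slope values here are illustrative assumptions.

def reconstruct_1d(slopes, pitch):
    """Wavefront heights from per-aperture slopes (aperture spacing = pitch)."""
    heights = np.concatenate(([0.0], np.cumsum(np.asarray(slopes) * pitch)))
    return heights - heights.mean()   # piston (constant offset) is unobservable

# A uniform tilt of 1 mrad measured over five apertures of 150 um pitch
# reconstructs, as expected, to a straight ramp of heights:
print(reconstruct_1d([1e-3] * 5, 150e-6))
```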
The Shack-Hartmann wavefront sensor (SHWS) was designed to measure both the intensity distribution and the phase distortion of optical fields in real time and with high accuracy. It can be widely used not only in measurement and diagnostics but also in adaptive optical systems to compensate for phase distortions. Various parameters, such as peak-to-valley, root-mean-square, Zernike coefficients and beam quality (M²), can be calculated with the help of such a sensor.
Hartmann vs Shack-Hartmann Wavefront Sensor:
SHWFS :
o Easier to calibrate
o Much higher precision
o Higher sensitivity
HWFS :
o Lower in price
o Easier to fabricate
o No chromatic aberrations (wavelength independent)
Any adaptive optic system is made up of three main parts:
• Deformable Mirror
• Wavefront Sensor
• Closed-Loop Control System
I brought up some points about wavefront sensors. Deformable mirrors are provided in different types by different companies for different applications; choosing the right deformable mirror depends strongly on the application. Here are some points about SLMs (spatial light modulators) as one kind of deformable mirror.
What is an SLM?
A spatial light modulator (SLM) is a transmissive or reflective device that’s used to spatially modulate the amplitude and phase of an optical wavefront in two dimensions.
The Addressing Mode
The addressing mode refers to the type of input signal that controls the optical properties of the SLM. It contains information regarding how the incident light beam should be modified.
1. Optically-addressed – one light beam (the optical control beam) is used to change a variable associated with another light beam (the incident beam); the optical control beam is often called the “write beam” and the incident beam is the “read beam”
2. Electrically-addressed – an electric signal is used to change a variable associated with the incident light beam; this will often be computer generated.
Optically-Addressed PAN LCOS
One example of a reflective SLM is the optically addressed PAN (parallel aligned nematic) LCOS (liquid crystal on silicon). The liquid crystal molecules are aligned parallel between two alignment layers (typically polyvinyl alcohol). When an electric field is applied, the crystals tilt in the transmitting direction of the light beam to align themselves with the field. Due to the liquid crystal’s birefringence, the light beam undergoes a phase modulation based on the relative orientation between the LC molecules and electric field.
The optical write-beam will form a certain intensity pattern on the silicon layer, based on the type of optical wavefront that is desired. In the areas where the light is more intense, the silicon’s resistivity decreases, which allows more voltage to pass through to the LC cell. This will increase the strength of the electric field, causing a greater tilt in the LC molecules, which in turn means a larger phase modulation of the read-beam.
An example of a transmissive SLM is the electrically addressed TN (twisted nematic) LCD (liquid crystal display). In these LC cells, the molecules are aligned between two alignment layers in a helical twist, with the first and last molecules perpendicular to each other. Each cell (i.e. pixel) can experience an electric field across it due to a video signal from the computer. This again causes the LC molecules to align themselves in the direction of the field. The relative orientation of the molecules and their birefringence cause a phase modulation; however, this time we can also perform amplitude modulation by including two polarizers in the setup.
When there is no electric field, the light incident on the cell will initially have a polarization axis parallel to the cell's entrance face. The helical alignment of the LC molecules causes the light's polarization to rotate as it propagates through; the polarization axis of the output beam is therefore perpendicular to the original direction. A polarizer at the exit face oriented parallel to the original direction blocks light from being transmitted. When there is an electric field, the incident light will not undergo a polarization rotation, so light is able to propagate through the polarizer at the exit face.
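The phase modulation described above can be put on a back-of-envelope footing: a birefringent layer of thickness d and effective index difference Δn delays the beam by 2π·Δn·d/λ. A sketch with illustrative numbers (these are assumptions, not the specifications of any particular SLM):

```python
import math

# Back-of-envelope sketch of the phase modulation described above: a nematic
# LC layer of thickness d and effective birefringence dn delays the read beam
# by 2*pi*dn*d/lambda radians. The numbers below are illustrative assumptions,
# not the specifications of any particular SLM.

def phase_modulation(dn, thickness, wavelength):
    """Phase delay (radians) imposed by a birefringent LC layer."""
    return 2.0 * math.pi * dn * thickness / wavelength

# e.g. dn = 0.25 and d = 3 um at 633 nm give just over one full wave (2*pi)
# of modulation depth, which is what a phase-only SLM needs:
print(phase_modulation(0.25, 3e-6, 633e-9) / (2 * math.pi))
```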
MEMS Deformable Mirrors vs. Liquid Crystal-Based Devices:
Before deformable mirrors became popular in the adaptive optics industry, consumers would generally turn to liquid crystal-based (LCOS) spatial light modulators to confront their challenges. Here at BMC, we regularly receive questions on how deformable mirrors in general, and our MicroElectroMechanical (MEMS) deformable mirrors in particular, compare to LCOS devices. Below I have touched upon some of the top differences between the two devices that I believe should be an important factor in one's decision to purchase a wavefront-shaping device.
1) LCOS devices are only available in a segmented architecture, whereas MEMS DMs offer both continuous and segmented layouts in various styles and options. Although both layouts have their own advantages, most researchers favor the continuous model: because a continuous facesheet has no discontinuities between the actuators, it prevents sharp edges within the image, making it well suited for imaging applications. Claire Max at UC Santa Cruz has explained and presented calculations on how you can achieve a higher level of correction capability with a continuous mirror. Check out slide 47, which goes over her calculations.
2) With MEMS DMs, we are able to offer strokes up to 5.5 µm (1.5 µm, 3.5 µm and 5.5 µm available), while LCOS SLMs are generally limited to a stroke of only 2π in the visible region. This can be a major inconvenience for applications with higher-amplitude aberrations.
3) The response times of our devices have always been much faster than those of any liquid crystal device on the market, and recent updates to our product line achieve even faster rates than before. Our devices can operate at up to 60 kHz with our new high-speed Kilo-S Driver or our Low-Latency Driver, whereas LCOS devices are limited to a few hundred hertz at best.
4) For the most part, LCOS devices are transmission based, causing light to be absorbed by the medium and resulting in lost light. There have been reflective devices introduced recently, however, they tend to scatter large amounts of light due to the small segment sizes. With a MEMS device, our segmented mirrors are over 98% reflective and our continuous mirrors are greater than 99%. Of course, this is the case only with the appropriate coating for the wavelength at which you are operating.
The Hartmann mask is a plate with a 2D array of orifices, while the Shack-Hartmann uses an array of microlenses. What is the difference between the Hartmann and the Shack-Hartmann in determining distortion of the phase front?
The Hartmann and Shack-Hartmann wavefront sensors operate in the same way and give almost the same pattern, which is analyzed to reconstruct the wavefront shape; of course, appropriate software and algorithms are needed. Besides the wavefront shape, the output pattern of these sensors can be used to extract Zernike coefficients for extra analysis.
To briefly summarize the differences between the Hartmann WFS and the Shack-Hartmann WFS, these points are key:
Shack-Hartmann WFS Advantages:
o Easier to calibrate
o Much higher precision
o Higher sensitivity
Hartmann WFS Advantages:
o Lower in price
o Easier to fabricate
o No chromatic aberrations (wavelength independent)
Any distortion of a wavefront can be represented by a set of Zernike coefficients; therefore, if we can reconstruct the wavefront with any tool and analyze it to calculate the Zernike coefficients, any distortion in the wavefront can be exactly determined.
Best Regards
About Zernike Coefficient:
These functions form a basis defined over a circular support area, typically the pupil plane in classical optical imaging at visible and infrared wavelengths through systems of lenses and mirrors of finite diameter. Their advantages are the simple analytical properties inherited from the simplicity of the radial functions and the factorization into radial and azimuthal functions; this leads, for example, to closed-form expressions for the two-dimensional Fourier transform in terms of Bessel functions.
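The radial functions mentioned here have a standard closed form; a minimal sketch evaluating R_n^m(ρ):

```python
from math import factorial

# Sketch of the radial part of the Zernike functions mentioned above, using
# the standard closed-form sum for R_n^m(rho).

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) for 0 <= rho <= 1.

    R_n^m vanishes identically when n - |m| is odd.
    """
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Defocus term R_2^0(rho) = 2*rho^2 - 1:
print(zernike_radial(2, 0, 1.0))   # 1.0 at the pupil edge
```

All R_n^m evaluate to 1 at ρ = 1, which is a quick sanity check for any implementation.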