Why can't the infrared beam be made to fall on the sample directly, with the detector then measuring molecular absorption and transmission, in IR spectroscopy? Why use an interferometer?
This is actually how it all started. However, compared to the UV-Vis-NIR spectral region, light sources were comparably weak and detectors less sensitive. Measuring times were therefore long and the signal-to-noise ratio (SNR) comparably poor. FT-IR does not need dispersive elements, so more light reaches the detector and the SNR is greatly improved. Furthermore, you measure the whole wavenumber range at the same time, which again improves the SNR. In addition, using a HeNe laser as a reference gives much higher precision on the wavenumber axis.
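For a rough feel of the multiplex (Fellgett) gain mentioned above, here is a sketch, assuming the measurement is detector-noise-limited and the spectrum contains N resolution elements recorded in the same total time:

```latex
% Fellgett (multiplex) advantage, detector-noise-limited case:
% every one of the N resolution elements is observed for the whole
% measurement, instead of 1/N of the time each as in a scanning
% dispersive instrument.
\[
  \frac{\mathrm{SNR}_{\mathrm{FT}}}{\mathrm{SNR}_{\mathrm{dispersive}}} \approx \sqrt{N}
\]
% Example with assumed numbers: 4000--400 cm^{-1} at 1 cm^{-1}
% resolution gives N = 3600, i.e. a gain of roughly sqrt(3600) = 60
% for the same total measuring time.
```

The throughput (Jacquinot) advantage from removing the slits of a dispersive instrument comes on top of this.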
Please tell me if I understood it right: since the light sources for generating IR beams were comparably weak and the detectors less sensitive, an interferometer was installed so that the initially weak incoming IR beam could be manipulated to get better results from the detectors then in use?
The interferometer replaces the dispersive element. As you know, a dispersive element lets you select only a narrow range of frequencies/wavelengths at a time, thereby reducing the intensity. With a Michelson interferometer this is not necessary: you shine all frequencies onto your sample at the same time.
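To make the "all frequencies at the same time" point concrete, here is a minimal Python sketch (a toy illustration with made-up numbers, not instrument code): it builds a hypothetical two-line spectrum, simulates the cosine interferogram a Michelson interferometer would record as a function of optical path difference, and recovers the spectrum with an FFT.

```python
import numpy as np

# Optical path difference (OPD) axis in cm, sampled symmetrically around zero.
n_points = 4096
delta_max = 0.5                                   # maximum OPD in cm
delta = np.linspace(-delta_max, delta_max, n_points)

# Toy "spectrum": two lines at hypothetical wavenumbers 1000 and 1600 cm^-1.
wavenumbers = np.array([1000.0, 1600.0])          # cm^-1
intensities = np.array([1.0, 0.5])

# Interferogram (AC part): each spectral element contributes one cosine,
# I(delta) = sum_k B_k * cos(2*pi*nu_k*delta).
interferogram = sum(b * np.cos(2 * np.pi * nu * delta)
                    for nu, b in zip(wavenumbers, intensities))

# Fourier transform back to the wavenumber domain.
spectrum = np.abs(np.fft.rfft(interferogram))
nu_axis = np.fft.rfftfreq(n_points, d=delta[1] - delta[0])   # in cm^-1

# The two peaks reappear near 1000 and 1600 cm^-1.
for nu_line in wavenumbers:
    idx = np.argmin(np.abs(nu_axis - nu_line))
    print(f"peak near {nu_axis[idx]:.0f} cm^-1, amplitude {spectrum[idx]:.1f}")
```

Every wavenumber contributes its own cosine to the single detector signal, which is why no dispersive element and no slit are needed.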
The frequency of IR radiation is very high (though not as high as UV). In practice, there is no transducer that can directly pick up a time-domain IR signal at such a high frequency. Imagine that you need, say, six points to define a sinusoid, but the response of the detector is too slow to collect enough points per cycle. The purpose of the interferometer is to modulate the IR signal into a measurable signal, the interferogram. The latter carries the frequency information, which can be recovered after the Fourier transform.
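To put numbers on that modulation step (the values below are assumed, just for illustration): a Michelson with mirror velocity v translates each optical wavenumber into an audio-range Fourier frequency,

```latex
% Modulation (Fourier) frequency for wavenumber \tilde{\nu} and mirror
% velocity v (the optical path difference changes at a rate of 2v):
\[
  f = 2\, v\, \tilde{\nu}
\]
% Example with assumed values: v = 0.3 cm/s, \tilde{\nu} = 2000 cm^{-1}
% => f = 2 * 0.3 * 2000 = 1200 Hz,
% whereas the optical frequency itself is c * \tilde{\nu} ~ 6e13 Hz (60 THz).
```

so the detector only has to follow a kHz-range signal instead of a ~60 THz optical oscillation.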