Chirped pulse amplification is a clever idea for increasing output power while avoiding significant nonlinear distortion. But why are two-stage amplifiers always needed? Why not just a single stage?
My understanding is as follows, but I don't know whether it holds up.
Assume the pre-amplifier has a gain of 20 dB: a 1 mW input signal comes out at 100 mW. A main amplifier with 10 dB of gain then takes that 100 mW to 1 W. Going from 1 mW to 100 mW is easy to achieve with a low-power pump; a high-power pump in the main amplifier then boosts the power to 1 W. With only one stage, a 30 dB amplifier would be needed, and building a single 30 dB amplifier is not as cost-efficient as building two lower-gain stages; it may also generate more severe heat.
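To sanity-check the dB arithmetic above, here is a small Python sketch. The stage gains and the 1 mW seed are just the example numbers from this question, not values from any particular system:

```python
def amplify(power_in_mw, gain_db):
    """Output power in mW for a given input power (mW) and gain (dB)."""
    return power_in_mw * 10 ** (gain_db / 10)

# Two-stage chain: 20 dB pre-amplifier followed by a 10 dB main amplifier.
seed_mw = 1.0
after_preamp = amplify(seed_mw, 20)      # 1 mW -> 100 mW
after_main = amplify(after_preamp, 10)   # 100 mW -> 1000 mW = 1 W

# Single-stage alternative: all 30 dB of gain in one amplifier.
single_stage = amplify(seed_mw, 30)      # 1 mW -> 1000 mW = 1 W

print(after_preamp, after_main, single_stage)  # roughly 100, 1000, 1000 (mW)
```

Both routes end at the same 1 W; the question is only about how the 30 dB of total gain is split between stages.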
It is usual to generate the chirp at low power, because components to do this at high power would be difficult to make. You may want to generate the chirp at 10 or 100 mW, where losses don't make things very hot, and then amplify. The chirp generation may be quite lossy, so it may need amplifiers before and after.
In any case, high-power amplifiers often have low gain and so need a pre-amplifier to provide most of the gain, with the high-power amplifier just adding the required power: 10 dB up to 10 W or 100 W, for example.
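A rough sketch of that split, assuming a 10 dB final stage as in the sentence above (the target powers are just the illustrative 10 W and 100 W figures):

```python
def drive_power_w(target_w, final_gain_db):
    """Input power (W) the final stage needs to reach the target output."""
    return target_w / 10 ** (final_gain_db / 10)

for target_w in (10, 100):
    print(f"{target_w} W out of a 10 dB final stage needs "
          f"{drive_power_w(target_w, 10):.0f} W of drive from the pre-amplifier")
# -> 1 W and 10 W of drive, i.e. the pre-amplifier chain supplies most of the dB.
```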
I don't know why. I expect it is because the components are optimised for power handling and efficiency rather than for gain, since power handling and efficiency are usually the most important things for a power amplifier: it must not get too hot and it shouldn't waste power. Efficiency is very important. The difference between 90% and 80% efficiency, for instance, is twice as much power wasted, and that wasted power turns into heat that has to be controlled and removed, which adds weight and volume (fans and heat sinks, perhaps) and usually more power draw (so lower overall efficiency) to the total amplifier box.
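A quick sketch of that waste-heat comparison, assuming efficiency means output power divided by electrical input and comparing at the same electrical input power (the 1 kW figure is picked purely for illustration):

```python
def waste_heat_w(electrical_in_w, efficiency):
    """Power (W) dissipated as heat for a given electrical input and efficiency."""
    return electrical_in_w * (1.0 - efficiency)

for eff in (0.90, 0.80):
    print(f"{eff:.0%} efficient at 1000 W input: "
          f"{waste_heat_w(1000, eff):.0f} W of heat to remove")
# -> 100 W at 90% versus 200 W at 80%: twice as much heat for the same input power.
```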