Dear Mukunda, multiple testing correction, such as Bonferroni, is normally not applied in multiple linear regression, because you tend to vary and adjust existing models rather than test a battery of separate hypotheses.
In principle, there is no need to correct for multiple testing, for the reason Peter Samuels gave. However, when you use variable or model selection methods, the p-values are no longer meaningful: they tend to be too optimistic. In that case, post-selection inference becomes important.
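To see why selection makes p-values too optimistic: if you screen m independent null predictors at level alpha and keep the "best" one, the chance that at least one looks significant is 1 - (1 - alpha)^m, well above alpha. A quick illustrative sketch in plain Python (the numbers are just an example, not from the question):

```python
# Probability that at least one of m independent null tests
# comes out "significant" at level alpha: 1 - (1 - alpha)^m.
def family_wise_error(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** m

# Screening 10 candidate predictors at alpha = 0.05: the chance of
# at least one spuriously "significant" p-value is about 40%.
print(round(family_wise_error(0.05, 10), 3))  # 0.401
```

So the smallest p-value among the screened candidates is not a valid test of the selected model, which is exactly why post-selection inference is needed.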
This depends on what you're testing. If you run a model where there are five different parameters that could each support your hypothesis (i.e., if any one of them is significant you would say your theory was supported), then yes, you need to correct. For example, suppose you predict an effect of IV1 but are also willing to consider that it might interact with IV2 (e.g., you predict that males will have lower-pitched voices than females, but maybe only once they are adults; sex is IV1 and age is IV2). Then you could say your prediction was "right" if either the IV1 main effect or the IV1*IV2 interaction is significant, so you have two comparisons and need to correct for that. See https://daniellakens.blogspot.com/2016/02/why-you-dont-need-to-adjust-you-alpha.html for more explanation.
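For the two-comparison case above (the IV1 main effect and the IV1*IV2 interaction), a Bonferroni correction just divides alpha by the number of comparisons, or equivalently multiplies each p-value by it. A minimal sketch in plain Python; the p-values are hypothetical, chosen only to illustrate the mechanics:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: return (p-values multiplied by the
    number of tests and capped at 1, reject decisions at alpha)."""
    m = len(p_values)
    corrected = [min(p * m, 1.0) for p in p_values]
    reject = [p < alpha / m for p in p_values]
    return corrected, reject

# Hypothetical p-values: IV1 main effect and IV1*IV2 interaction.
p_main, p_interaction = 0.03, 0.20
corrected, reject = bonferroni([p_main, p_interaction])
print(corrected)  # [0.06, 0.4]
print(reject)     # [False, False]
```

Note that the main effect (p = 0.03) would be "significant" at alpha = 0.05 on its own, but not after correcting for the two planned comparisons (threshold 0.025), which is the point of the correction.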