There is an example in Wooldridge (2nd edition, p. 445, Chapter 14) in which the F test for a joint hypothesis is insignificant while several of the individual variables are significant.
The previous answers are well-taken. Another reason a single predictor could be significant while the full model is not is the reduction in the error degrees of freedom. Those degrees of freedom sit in the divisor when computing the error mean square, so losing them inflates the error estimate. In fact, every term added to the model reduces the error degrees of freedom by one. That is one reason any regression model should be based on a solid theoretical foundation rather than on running exploratory models.
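To illustrate, here is a minimal simulation sketch (not from Wooldridge's example; the variable names, sample sizes, and the choice of statsmodels are my own assumptions). One real predictor is fit alongside a batch of pure-noise predictors: the noise terms consume error degrees of freedom and dilute the overall F test, while the t test on the real predictor can stay significant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30
x_real = rng.normal(size=n)
noise_x = rng.normal(size=(n, 10))        # 10 irrelevant predictors
y = 0.5 * x_real + rng.normal(size=n)     # only x_real actually matters

# Full model: intercept + real predictor + 10 noise predictors,
# leaving only n - 12 = 18 error degrees of freedom.
X = sm.add_constant(np.column_stack([x_real, noise_x]))
fit = sm.OLS(y, X).fit()

print(f"t-test p-value for x_real: {fit.pvalues[1]:.4f}")  # often < 0.05
print(f"overall F-test p-value:    {fit.f_pvalue:.4f}")    # can be > 0.05
```

With many irrelevant terms, the numerator of the F statistic is averaged over all the slopes (most of them near zero), which is exactly the dilution described above.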
When predictors are highly collinear, the model cannot tell which of the predictors should be credited with the explained variance. So although the model can estimate a coefficient for each predictor, many other combinations of coefficients would work almost equally well. Hence the estimates are not "precise"; there is a lot of uncertainty (why this particular combination, and not another?).
For an extreme example, think of predicting a person's weight from her shoe size given in inches and from her shoe size given in centimeters. Since "size in cm" is just 2.54 x "size in inches", the two predictors are perfectly correlated. The model might attribute half of the relation between shoe size and weight to inches and half to centimeters, but any other split would work exactly as well, so there is complete uncertainty about the individual coefficients.
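A hedged sketch of that shoe-size example (hypothetical data; a tiny measurement jitter is added so the regression is actually estimable, since perfectly collinear columns make OLS undefined): each slope gets an enormous standard error and an individually insignificant t statistic, even though the model as a whole fits well.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
size_in = rng.normal(10, 1, size=n)                       # shoe size, inches
size_cm = 2.54 * size_in + rng.normal(0, 0.01, size=n)    # near-exact duplicate
weight = 3.0 * size_in + rng.normal(0, 2, size=n)

X = sm.add_constant(np.column_stack([size_in, size_cm]))
fit = sm.OLS(weight, X).fit()

# Expect huge standard errors on both slopes (individual t tests insignificant)
# while the joint F test is strongly significant.
print(fit.bse[1:])
print(f"t p-values (slopes): {fit.pvalues[1]:.3f}, {fit.pvalues[2]:.3f}")
print(f"joint F p-value:     {fit.f_pvalue:.2e}")
```

Note this demo shows the usual multicollinearity pattern, individually insignificant but jointly significant, which is the point the comment below makes.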
No, if the problem were multicollinearity, we would have a set of individually non-significant regressors that are jointly significant... This is the reverse case.