I conducted a regression analysis; the F-test was insignificant because the standard deviation of the independent variable is far lower than that of the dependent variable. However, the p-value of the analysis is significant.
I assume we are talking about a simple regression, i.e. there is one single predictor in the model, and the "significant p-value" is for the coefficient of this predictor (otherwise, interpreting a p-value taken out of a more complex multiple regression model would not be very sensible).
In such a simple regression, the squared t-value for the coefficient is identical to the F-value of the overall model. If there are n data points observed, the t-value is compared to a t-distribution with n-2 d.f. (t_{n-2}), and the F-value is compared to the F-distribution with 1 and n-2 d.f. (F_{1, n-2}).
Formally: if t ~ t_{ν}, then t² ~ F_{1, ν}.
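A minimal sketch (not part of the original exchange) that checks this numerically: it fits a simple regression to simulated data and confirms that t², referred to F_{1, n-2}, gives the same p-value as the two-sided t-test for the slope. The simulation setup and variable names are my own illustration.

```python
# Simple regression on simulated data: t^2 equals the overall F,
# and the t- and F-based p-values coincide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 0.3 * x + rng.normal(size=n)   # true slope 0.3, plus noise

res = stats.linregress(x, y)
t = res.slope / res.stderr               # t-statistic for the slope, n - 2 d.f.
F = t**2                                 # overall model F with 1 and n - 2 d.f.

p_t = 2 * stats.t.sf(abs(t), df=n - 2)   # two-sided t-test p-value
p_F = stats.f.sf(F, dfn=1, dfd=n - 2)    # p-value from the F-distribution

print(np.isclose(p_t, res.pvalue))       # True: matches linregress's own p-value
print(np.isclose(p_t, p_F))              # True: P(t) = P(F) in the simple regression
```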
If the regression has k predictors, the model F is compared to F_{k, ν} (with ν = n - k - 1 residual d.f. when an intercept is included). For k > 1, this is not identical to t².
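For contrast, a sketch (again my own illustration, using an assumed two-predictor simulation) showing that with k > 1 the overall F is no longer the square of any single coefficient's t-value, and the model p-value differs from the per-coefficient p-values:

```python
# Two-predictor regression: the overall F tests both slopes jointly,
# so it does not equal the square of either coefficient's t-value.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
X = rng.normal(size=(n, 2))
y = 1.0 + 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.fvalue)                      # overall F, compared to F with 2 and n - 3 d.f.
print(fit.tvalues[1:] ** 2)            # squared slope t-values: neither equals F
print(fit.f_pvalue, fit.pvalues[1:])   # model p-value vs. per-coefficient p-values
```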
Again, the question of Mumin Olatunji Oladipo makes sense only if the mentioned p refers to the sole predictor in the model. If there are other predictors, the meaning of this p may not be clear at all (it depends on the precise structure of the model and the theory), and simply comparing a model p-value to a predictor p-value is nonsense (comparing apples and peaches, so to speak).
It is a simple regression, and the single predictor is continuous (as I recall, the numerator d.f. is 1 both for two groups and for a single continuous variable). Therefore P(F) = P(t) here.