A p-value of 0.08, being larger than the conventional threshold of 0.05, means the test is not statistically significant at that level. This means that the null hypothesis cannot be rejected.
In general, the p-value is the conditional probability of obtaining data at least as extreme as those observed, given that the null hypothesis is the true state of the world, i.e., given that there is no effect: P(data at least as extreme | null hypothesis).
That is, if in your case the p-value is p = 0.08 and you set your α level a priori to α = 0.05, you would retain the null hypothesis and reject the alternative one: if the null hypothesis were true, data at least as extreme as yours would still occur 8% of the time. You, however, are only willing to accept a 5% probability of claiming an effect where there actually is none (the α-error, or Type I error).
Accordingly, if your p-value is smaller than your chosen α, you can reject the null hypothesis and accept the alternative hypothesis.
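The decision rule above can be sketched as follows. This is a minimal illustration with simulated data (the group sizes, means, and the use of a two-sample t-test are assumptions for the example, not part of the question):

```python
import numpy as np
from scipy import stats

# Simulated data: a control group and a treatment group with an assumed shift.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.4, scale=1.0, size=30)

alpha = 0.05  # Type I error rate, fixed a priori
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Binary decision: reject H0 only if p < alpha.
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"t = {t_stat:.3f}, p = {p_value:.3f} -> {decision}")
```

Note that the decision depends only on whether p falls below α, not on how far below it falls.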
Note, however, that strictly speaking the p-value is used here only for a binary decision: is it smaller or larger than my α? If you want a measure of how relevant a significant effect is, you should additionally calculate an effect size.
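One common effect-size measure for a two-group comparison is Cohen's d (the mean difference standardized by the pooled standard deviation). A minimal sketch, with hypothetical data:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: standardized mean difference using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical data: same spread, means shifted by 0.5 standard deviations.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.5, 1.0, 200)
print(f"Cohen's d = {cohens_d(x, y):.2f}")
```

By a common convention, |d| around 0.2 is "small", 0.5 "medium", and 0.8 "large"; unlike the p-value, d does not shrink just because the sample is large.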
The replies so far appear to see the world in black and white rather than in shades of grey. The p-value lies on a continuous scale. Statistical significance can be declared at different levels, not just below or above 5%: if you say a result is significant, you must state the significance level. The value p = 0.08 is not significant at the 5% level (and therefore not at any lower level either). But when it lies between 5% and 10%, I suggest saying there is "an indication" of an effect. You can also speculate about likely reasons for the weak "indication": too small a sample size? Too small an effect, or none at all? Outliers?
If you have run an F-test, it does not by itself produce meaningful effect sizes. But your software output is likely to report both the estimated effects and confidence intervals for the true effects (or at least standard errors).
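For a two-group comparison, such an interval can also be computed by hand. A sketch of a t-based confidence interval for the difference in means, under an equal-variance assumption (the data and confidence level are illustrative):

```python
import numpy as np
from scipy import stats

def mean_diff_ci(x, y, level=0.95):
    """t-based CI for the difference in means, pooled (equal) variances assumed."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(pooled_var * (1 / nx + 1 / ny))
    t_crit = stats.t.ppf((1 + level) / 2, df=nx + ny - 2)
    return diff - t_crit * se, diff + t_crit * se

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.5, 1.0, 40)
lo, hi = mean_diff_ci(x, y)
print(f"95% CI for the mean difference: [{lo:.2f}, {hi:.2f}]")
```

An interval that excludes zero corresponds to a significant result at the matching α, while its width conveys how precisely the effect is estimated, which the p-value alone does not.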