Honestly, each of the F, t, and z (and chi-square) tests is used in a variety of situations. It's a bit like asking, "When do we use a knife?" Each is a tool that can be applied in many different situations.
Let's omit the F-test (for now) and use a one-sample z or t test for a mean as the context.
z or t = (Xbar - mu_0) / SE(Xbar), where mu_0 is the mean hypothesized under H0.

SE(Xbar) = SD / sqrt(n). If the population SD is known, then you have a "population SE", and you get a z-test. But if the population SD is not known, you use the sample SD as an estimate, and you get a t-test (with df = n - 1). In both cases, the necessary normality assumption is that the sampling distribution of Xbar is approximately normal.
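A minimal sketch of both calculations in Python (the sample, the true SD of 2, and the hypothesized mean are all made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=2, size=25)  # hypothetical sample
mu0 = 9.0                                 # hypothesized mean under H0

# t-test: population SD unknown, so estimate it with the sample SD
se_t = x.std(ddof=1) / np.sqrt(len(x))
t = (x.mean() - mu0) / se_t
p_t = 2 * stats.t.sf(abs(t), df=len(x) - 1)  # two-sided, df = n - 1

# z-test: population SD assumed known (here, the true value 2)
se_z = 2 / np.sqrt(len(x))
z = (x.mean() - mu0) / se_z
p_z = 2 * stats.norm.sf(abs(z))  # two-sided, standard normal reference
```

The hand-rolled t-statistic matches what `scipy.stats.ttest_1samp(x, mu0)` returns; the z version differs only in which SD goes into the standard error and which reference distribution supplies the p-value.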
Finally, bringing back F: for any t-test, t^2 = F with df1 = 1 and df2 equal to whatever the df were for the t-test. And for any z-test, z^2 = Chi^2 with df = 1. (Sometimes that Chi^2 test is called a Wald test.)
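Both identities are easy to check numerically; here's a quick sketch with arbitrary statistic and df values:

```python
import numpy as np
from scipy import stats

# t^2 = F(1, df2): the two-sided t p-value equals the upper-tail F p-value
df2 = 24
t = 2.3
p_t = 2 * stats.t.sf(t, df=df2)     # two-sided p-value from t
p_f = stats.f.sf(t**2, 1, df2)      # upper-tail p-value from F(1, 24)

# z^2 = Chi^2(1): same correspondence for z and chi-square
z = 1.7
p_z = 2 * stats.norm.sf(z)          # two-sided p-value from z
p_chi2 = stats.chi2.sf(z**2, df=1)  # upper-tail p-value from Chi^2(1)
```

`p_t` and `p_f` agree, as do `p_z` and `p_chi2`, which is why squaring the statistic and switching reference distributions gives the same test.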
Do you mean the relationship between F and t (i.e., when the numerator degrees of freedom is 1), and between t and z (when df is infinite)? Or do you just mean when to use each? In that case, as Sal says, they can be used in different situations (and sometimes in the same situation); see intro stats books.