You will want a test that controls the experimentwise error rate. Which one you use depends a bit on the data and on what you are willing to assume. A common choice is Tukey's test: Tukey's HSD, or the Tukey-Kramer version when group sizes are unequal. There are a couple dozen alternatives. If you prefer a stepwise (multiple range) procedure, something like REGWQ might be an option (the Ryan-Einot-Gabriel-Welsch multiple range test is called REGWQ in SAS). Some people suggest the Benjamini-Hochberg procedure, though keep in mind that it controls the false discovery rate rather than the experimentwise error rate.
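If you are working in Python, here is a minimal sketch of Tukey's HSD using statsmodels; the three groups and their response values are invented purely for illustration.

```python
# Minimal sketch of Tukey's HSD in Python (statsmodels).
# The data below are invented for illustration only.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
response = np.concatenate([
    rng.normal(10, 2, 20),   # group A
    rng.normal(12, 2, 20),   # group B
    rng.normal(11, 2, 20),   # group C
])
groups = ["A"] * 20 + ["B"] * 20 + ["C"] * 20

# pairwise_tukeyhsd adjusts all pairwise comparisons so that the
# familywise (experimentwise) error rate is held at alpha.
result = pairwise_tukeyhsd(endog=response, groups=groups, alpha=0.05)
print(result.summary())
```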
Sometimes it is not necessary to look at all possible pairwise comparisons. If you can focus your question on only one or two pre-planned pairwise comparisons, then a t-test might work fine. In general, though, running many unadjusted t-tests is not a good idea because it fails to control the experimentwise error rate.
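As a sketch of that idea, the example below runs two pre-planned comparisons against a control and then adjusts the p-values with the Holm procedure, which is one of several adjustments that keep the experimentwise error rate at alpha; all of the data are made up.

```python
# Sketch: a small number of pre-planned t-tests with a p-value
# adjustment. The arrays control, a, and b are placeholders.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
control = rng.normal(10, 2, 20)
a = rng.normal(11, 2, 20)
b = rng.normal(13, 2, 20)

# Two pre-planned comparisons against the control group.
pvals = [ttest_ind(a, control).pvalue,
         ttest_ind(b, control).pvalue]

# The Holm adjustment controls the experimentwise error rate
# even though more than one test is run.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(p_adj, reject)
```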
This question has been asked before on RG, though I am not quite sure how to find the other answers; a simple search of "multiple comparisons" turned up nothing. There is a book, "Multiple Comparison Procedures" by Larry Toothaker (1993), SAGE University Paper #89 in the series Quantitative Applications in the Social Sciences. There is also a more recent book on multiple comparison procedures, but the name escapes me.
For a simple pairwise comparison (one treatment versus one control), do not use Tukey; a plain t-test is appropriate. Use a multiple comparison procedure ONLY if you actually have multiple comparisons (for example, a control and two or more treatments). The type of analysis that you do needs to match the kind of data that you have.
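For that single treatment-versus-control case, a two-sample t-test with no multiplicity adjustment is enough; here is a short sketch with invented data.

```python
# Sketch of the single treatment-versus-control case, where no
# multiplicity adjustment is needed. Data are invented placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
control = rng.normal(10.0, 2.0, 25)
treatment = rng.normal(11.5, 2.0, 25)

# Welch's t-test (equal_var=False) does not assume equal variances.
stat, pval = ttest_ind(treatment, control, equal_var=False)
print(f"t = {stat:.3f}, p = {pval:.4f}")
```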