It really depends on how you are planning to measure validity. Traditionally, construct validity makes the strongest argument, and it can be evaluated in a confirmatory factor analysis (CFA) framework: a good model fit provides evidence that your participants are thinking about the items in the same way as originally developed. 60 participants, however, is definitely a stretch for a CFA, especially if your scales have many items. Your absolute maximum would be 10 items across 2 latent constructs, or 11 items if the scale is uni-dimensional, and you would need evidence that the items are normally distributed. Please be advised that many quantitative researchers would still point out that you are underpowered, so you must not take this limitation lightly. If the scales are 6 items or fewer, though, your power should be adequate.
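To get a rough sense of why N = 60 is tight, it helps to count the free parameters a CFA would estimate and compare that to the sample size; one common heuristic is Bentler and Chou's 5:1 N-to-parameters ratio. Below is a minimal sketch (the helper names and the simple-structure assumptions, i.e. each item loads on one factor and factor variances are fixed to 1, are my illustration, not part of the original answer); by this stricter heuristic even the 10-item, 2-factor case looks thin, which is why the warning about power stands.

```python
# Rough CFA feasibility check: count free parameters under a
# simple-structure model (each item loads on exactly one factor,
# factor variances fixed to 1 for identification), then compare
# the sample size against the common 5:1 heuristic.

def cfa_free_parameters(n_items: int, n_factors: int) -> int:
    loadings = n_items                              # one free loading per item
    residuals = n_items                             # one residual variance per item
    factor_covs = n_factors * (n_factors - 1) // 2  # factor correlations
    return loadings + residuals + factor_covs

def n_to_parameter_ratio(n: int, n_items: int, n_factors: int) -> float:
    return n / cfa_free_parameters(n_items, n_factors)

if __name__ == "__main__":
    # 10 items on 2 factors: 10 + 10 + 1 = 21 free parameters
    print(cfa_free_parameters(10, 2))
    # N = 60 gives a ratio of roughly 2.9, well under the 5:1 guideline
    print(round(n_to_parameter_ratio(60, 10, 2), 2))
```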
Alternatively, I recommend computing Cronbach's alpha as a measure of reliability and then testing correlations with other relevant variables to see whether the measures correlate appropriately with your outcome variables. This would provide evidence of internal consistency and criterion validity, and you should have enough power for both.
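Cronbach's alpha is simple enough to compute by hand from the item variances and the variance of the total score. A minimal sketch in plain Python, assuming scores are stored as one list per item with respondents in the same order (the function name and data layout are my own):

```python
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of total scores). `items` holds one list of scores per
    item; each inner list has one entry per respondent, same order.
    """
    k = len(items)
    n = len(items[0])
    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

if __name__ == "__main__":
    # Two perfectly parallel items -> alpha = 1.0
    print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

As a rule of thumb, alpha of about .70 or higher is usually read as acceptable internal consistency, though the cutoff is a convention, not a law.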
You're very much underpowered on your first 2 scales, so CFA is probably not your best bet. Since you administered the scales at 2 different time points, however, you do have the option of examining test-retest reliability by correlating each measure at time 1 with the same measure at time 2, as well as examining correlations with outcome measures (e.g., Scale 1 at time 1 and time 2 with the outcome at time 2). You can also compute Cronbach's alpha separately at time 1 and time 2 to show that internal consistency holds at both administrations. For test-retest reliability to be meaningful, however, there must be a good amount of time between the 2 administrations of your scales; I'd say 4 weeks or more would be ideal. It won't help much with validity, but it will help you make a case for adequate psychometric properties nonetheless.
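The test-retest coefficient itself is just the Pearson correlation between the two administrations, paired by respondent. A minimal sketch in plain Python (the helper and the example scores are illustrative, not from your data):

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between paired score lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

if __name__ == "__main__":
    # Hypothetical total scale scores for 5 respondents at each time point
    time1 = [12, 15, 9, 20, 14]
    time2 = [13, 14, 10, 19, 15]
    # A high r suggests the scale is stable across the retest interval
    print(round(pearson_r(time1, time2), 3))
```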