You need a test that controls the Type I error rate, such as SNK (Student-Newman-Keuls) or Scheffé, or a planned comparison such as Dunnett's test against a control... First estimate the experimental error; you can do all of this in R...
You could try collapsing some groups into broader categories, so that you have fewer than 50 groups. With 50 groups you are likely to make a Type I error, because with so many groups one (or more) will differ from expectation just by chance. You might also consider a statistical method other than ANOVA.
I've had little chance to enjoy RG since the turn of the year, so it is fortunate that I came upon this thread! Indeed, as usual you are correct: any number of groups can be compared using a clever new variation of a maximum-accuracy multiple range test. I am in the process of relocating my lab, and hope to be back in the saddle in about a month.
This depends on how you are analyzing your data. Your IVs may be continuous or discrete, or they may be continuous but coded as discrete. The latter could be something like age, where people mark a box for 0-10, 11-40, or 41-90 years old (or whatever distribution of categories seems right). If you have 50 univariate tests, then Jerry's response applies. Maybe you have a single DV (say plant biomass) and one IV (say nitrogen fertilization rate) with 50 categories of the IV. In that case I would suggest abandoning the categories and treating the IV as a continuous variable. Maybe instead you have 50 different treatments that are special blends of macro- and micro-nutrients. In that case Tukey's HSD would be a commonly used approach, though there are many other options; Tukey's test is familiar and controls the experimentwise error rate. However, this will not really help if you have 50 independent tests. So say you have 85 tests, and for each test 6 categories: Tukey's HSD will control the experimentwise error rate within each test, but does nothing for the manuscriptwise error rate (across all 85 variables).
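To see how the manuscriptwise problem scales, here is a minimal Python sketch (the 85-test figure is taken from the example above) of the chance of at least one false positive across m independent tests, each run at level alpha:

```python
# Familywise ("manuscriptwise") error rate for m independent tests,
# each run at level alpha: P(at least one false positive when all nulls are true).
def familywise_error(alpha, m):
    return 1 - (1 - alpha) ** m

print(familywise_error(0.05, 1))   # a single test: 0.05
print(familywise_error(0.05, 85))  # 85 tests: about 0.987
```

With 85 tests you are almost guaranteed at least one spurious "significant" result, which is exactly why a per-test correction alone is not enough.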
You can estimate the number of significant outcomes you should expect by chance alone if the null hypothesis is assumed true, and then adjust your claims of significance accordingly. You can also try reducing alpha to get the manuscriptwise error rate under control. The problem with this approach is that it will be a bit too conservative, and will therefore inflate the number of real effects that are missed (Type II errors).
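As a quick sanity check on the "expected by chance" idea, here is a minimal Python sketch, again assuming 85 tests at alpha = 0.05 (figures taken from the example above):

```python
alpha = 0.05
m = 85  # number of tests, from the example above

# Expected number of "significant" results if every null hypothesis is true:
expected_by_chance = alpha * m   # 4.25 spurious hits expected

# Bonferroni-style reduction of alpha: per-test level that keeps the
# manuscriptwise error rate at 0.05 (conservative, as noted above)
adjusted_alpha = alpha / m

print(expected_by_chance, adjusted_alpha)
```

So with 85 tests you should expect roughly four significant results even from pure noise, and the corrected per-test alpha becomes very strict, which is the Type II error trade-off mentioned above.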
Check out the Optimal Data Analysis website (ODAJournal.com). There is a new book that just came out on ODA that should help if you decide to try this route.
I have a somewhat different view on this question than the previous answers. But let me go step by step.
(1) The conceptual question: is it meaningful to perform an ANOVA with so many groups? You test the null hypothesis that all groups are equal, against the alternative that not all groups are equal. This can be done with two groups, with five, or with fifty. There is no principled difference between different numbers of groups, and certainly no magic limit of the kind "if N_groups > x, then ANOVA is wrong". No, it is fine to perform an ANOVA here.
(2) If the ANOVA gives a significant result at, e.g., the 5% level, then you have exactly what you tested: in 5% of the cases where all 50 groups have identically distributed values, you will reject the null hypothesis nevertheless. That is the logic of null-hypothesis testing. So when the ANOVA is significant, it is significant, and you can say the 50 groups are not all the same. However, the value of that statement is limited, and here the large number of groups comes in after all: you will want to know which groups differ from the rest. This calls for post-hoc tests, and here you need multiple-comparison correction. Bonferroni assumes independent tests, which is not the case here, and is therefore overly conservative; Tukey is the way to go. You might argue that LSD, i.e. no correction, is appropriate because the ANOVA is significant. There is something to this argument, but in fully exploratory settings I would be sceptical: the difference between Tukey and LSD increases with the number of groups, and here the discussion above kicks in.
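To put numbers on why the correction matters with 50 groups, a short Python sketch counting the pairwise post-hoc comparisons and the resulting Bonferroni per-comparison level:

```python
import math

k = 50                             # number of groups
n_pairs = math.comb(k, 2)          # all pairwise comparisons: 50*49/2 = 1225
bonferroni_alpha = 0.05 / n_pairs  # about 4.1e-05 per comparison

print(n_pairs, bonferroni_alpha)
```

With 1225 non-independent comparisons, the Bonferroni threshold becomes extremely strict, which illustrates why a method like Tukey's, built for correlated pairwise contrasts, is preferable here.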
(3) Whether pooling groups makes sense depends on the type of variable. If it is arbitrarily binned, e.g. age, you should consider a different test altogether: use linear regression, spline fitting, or a full-blown model. If it consists of genuinely different categories, e.g. brands of car makers, think about your scientific question; it might allow meaningful grouping, or it might not.
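For the arbitrarily binned case, a minimal sketch of the regression alternative: an ordinary least-squares fit of the response on the underlying continuous variable instead of its bins (the variable names and data here are hypothetical, purely for illustration):

```python
def ols_fit(x, y):
    """Simple least-squares fit: y = intercept + slope * x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical example: age in years (unbinned) vs. some measured response
age = [18, 25, 33, 41, 52, 60, 75]
response = [2.1, 2.4, 2.9, 3.3, 3.8, 4.2, 5.0]
slope, intercept = ols_fit(age, response)
```

One parameter instead of dozens of group means, and no arbitrary bin boundaries to defend.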
(4) Finally, which software to use? This is completely detached from the above. My version of SPSS can manage 99 groups. If that is not sufficient, calculate all the sums of squares in your favorite programming language and do the ANOVA yourself. As long as you do not need fancy corrections and the like, it is not that difficult or tedious. Have a look at Jarad Niemi's wonderful tutorials on YouTube.
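The do-it-yourself route really is short. A minimal Python sketch of the one-way ANOVA F statistic from the sums of squares (for the p-value you would still consult an F table or a statistics library):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA; groups is a list of samples."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n

    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)

    ms_between = ss_between / (k - 1)  # between-group mean square
    ms_within = ss_within / (n - k)    # within-group (error) mean square
    return ms_between / ms_within

# Works for any number of groups, 50 included
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```

This is the whole computation; the number of groups only changes the degrees of freedom, not the difficulty.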