Is there any effort to make this happen? If not, why not? Couldn't you just automate the protocol, eliminate months or years of tedious, non-intellectual effort in a matter of days, and get on to more meaningful challenges in protein discovery?
There are already several places doing this. I have used the facilities available through the Hauptman-Woodward Institute (http://www.hwi.buffalo.edu/faculty_research/crystallization.html). They run 1536 different crystallization conditions against your sample and provide images at regular intervals. I have no idea how many samples they run in parallel.
They provide two different screens:
Standard Screen. Luft, J. R., Collins, R. J., Fehrman, N. A., Lauricella, A. M., Veatch, C. K. & DeTitta, G. T. (2003). A deliberate approach to screening for initial crystallization conditions of biological macromolecules. J. Struct. Biol. 142, 170-179. [PubMed ID: 12718929]
Membrane Screen. Koszelak-Rosenblum, M., Krol, A., Mozumdar, N., Wunsch, K., Ferin, A., Cook, E., Veatch, C. K., Nagel, R., Luft, J. R., DeTitta, G. T. & Malkowski, M. G. (2009). Determination and application of empirically derived detergent phase boundaries to effectively crystallize membrane proteins. Protein Science 18, 1828-1839. [PubMed ID: 19554626]
Crystallization is only the second bottleneck on the way to a structure. Development and application of massively parallel methods might be of more use for the cloning and expression steps.
As for automating protein crystallization, including massively parallel approaches: try "structural genomics" in your favorite search engine and follow the links, and/or have a look at http://www.psb-grenoble.eu/spip.php?rubrique16. Progress seems to be significant, but slower than hoped for.
There are two problems. One is image processing: discriminating hits from non-hits seems to resist automation. The other is combinatorial explosion: there are too many variables with a significant impact on the phase diagram, so it is impossible to screen the whole of "chemical space" (a rough count is sketched below).
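To put rough numbers on the combinatorial explosion, here is a back-of-the-envelope sketch in Python. The variables and level counts are illustrative assumptions, not a real screen design:

```python
# Hypothetical screening variables and number of levels for each.
# These counts are made up for illustration only.
screen_variables = {
    "precipitant identity": 30,
    "precipitant concentration": 10,
    "buffer/pH": 12,
    "salt identity": 15,
    "salt concentration": 8,
    "additive": 20,
    "temperature": 3,
    "protein concentration": 5,
}

# Full-factorial design: multiply the level counts together.
total = 1
for levels in screen_variables.values():
    total *= levels

print(f"Full-factorial conditions: {total:,}")          # ~130 million
print(f"At 1536 conditions per plate: {total // 1536:,} plates")
```

Even this modest, made-up grid yields on the order of 10^8 conditions, which is why every real screen is a sparse sample of the space rather than an exhaustive search.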
Experience and luck seem to help with both problems, and seeding helps with the second; none of these lends itself well to automation. Also, remember that nucleation is stochastic (more or less), so small drops aren't necessarily the way to go (see the sketch below). In the end, someone has to produce lots of protein, which is the first bottleneck I was talking about.
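On the stochastic-nucleation point: a common first-order picture treats nucleation as a Poisson process, so the chance of seeing at least one nucleus scales with drop volume. A minimal sketch, with an entirely hypothetical nucleation rate J, shows why shrinking drops also shrinks the hit probability per drop:

```python
import math

def p_at_least_one_nucleus(rate_j, volume_nl, time_h):
    """Poisson model: P(>=1 nucleus) = 1 - exp(-J * V * t).
    rate_j is a hypothetical nucleation rate in nuclei per nL per hour."""
    return 1.0 - math.exp(-rate_j * volume_nl * time_h)

J = 1e-4  # hypothetical rate (nuclei/nL/h), made up for illustration
for v in (1, 10, 100, 1000):  # drop volumes in nL
    print(f"{v:>5} nL drop: P(hit in 1 week) = "
          f"{p_at_least_one_nucleus(J, v, 168):.3f}")
```

Under this (assumed) model, the per-drop hit probability falls roughly linearly with volume for small drops, which is the trade-off against the reagent savings of miniaturization.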
While scanning thousands of conditions instead of the tens or hundreds usually done CAN help, in my experience the quality of the protein preparation is more often what determines whether you get crystals or not. "Better", i.e. more crystallisable, protein may be obtained by improved purification, higher concentration, a critical stability-enhancing additive, a small variation in the protein sequence (small deletions or point mutations), or screening the same protein from several organisms. Deglycosylation may be important, as may many other variables that are not strictly crystallisation variables.
So a massively parallel screen does not always solve the problem of not getting crystals.
Unfortunately, I sense a misplaced expectation. The veterans, Jeff and Mark, explained some of the "on the table" aspects of this, but they were gentle enough not to address the naive part of your question. To use massively parallel methods, you need some definable probability of success. Many more sequences are non-crystallizable than crystallizable, so a blind campaign would be a recipe for massive failure. Structural genomics centers, which use these methods, spent significant effort designing their experiments to improve the probability of success (a toy calculation below shows why this matters). Moreover, at least in the beginning, they had significant problems with the repeatability of their experiments. Technology is rarely the solution to scientific questions.
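To make the "definable probability of success" point concrete, here is a toy binomial calculation. Both success rates below are hypothetical, chosen only to illustrate how target triage changes the expected outcome of a parallel campaign:

```python
# Hypothetical numbers for illustration only; real per-target rates vary widely.
n_targets = 1000

p_blind = 0.05    # assumed success rate when picking targets blindly
p_triaged = 0.20  # assumed rate after bioinformatic target selection

for label, p in (("blind", p_blind), ("triaged", p_triaged)):
    print(f"{label:>7}: expect {p * n_targets:.0f} structures, "
          f"{(1 - p) * n_targets:.0f} failures out of {n_targets}")
```

Even with generous assumptions, most targets fail; careful experimental design shifts the expected yield far more than raw parallelism does.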
Mark summed up his response as "a massively parallel screen does not always solve the problem of not getting crystals"; I would be much more radical and say that it rarely does. Besides, every step of the scientific process can pose a significant intellectual challenge, including crystallization.