First, you can use a tool to extract the text from your PDF documents (for example http://itextpdf.com/themes/keyword.php?id=482); there are many such tools in many languages. Then you have separate text files and can do whatever you want with them.
A more robust but more difficult method is to use a PDF library such as iText or PDFBox and program exactly what you want to do. But it depends heavily on your PDF documents.
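For example, here is a minimal sketch of the library approach, written in Python rather than Java (iText and PDFBox are Java libraries; the pypdf package plays a similar role in Python, and "paper.pdf" is just a placeholder filename):

```python
from pypdf import PdfReader  # pip install pypdf

def pdf_to_text(path):
    """Extract the text layer of a PDF, page by page."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# "paper.pdf" is a placeholder; print a preview of the extracted text
print(pdf_to_text("paper.pdf")[:500])
```

Once you have plain text per document, the counting itself is trivial; the hard part is, as said, the quality of the PDFs themselves.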
I personally would do this on a Linux machine, using pdftotext (part of the poppler utilities) to convert the PDFs to text and then something like Perl or Python to count words (and do other steps, like stemming, stopword elimination, etc.).
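A rough sketch of that pipeline in Python, assuming pdftotext is on your PATH (the stopword list here is a toy one and "paper.pdf" is a placeholder; for real use you would add stemming, e.g. via NLTK):

```python
import re
import subprocess
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}  # extend as needed

def count_words(pdf_path):
    # "pdftotext file.pdf -" writes the extracted text to stdout
    text = subprocess.run(
        ["pdftotext", pdf_path, "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Print the 20 most frequent words with their counts
for word, n in count_words("paper.pdf").most_common(20):
    print(f"{n:6d}  {word}")
```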
When I do something similar with research articles, the results are quite mixed, depending on how the PDF was created. You will have the best results (regardless of methodology) if the PDFs were created directly by a user (e.g., "Save As... PDF" in Word). Otherwise, you will have some degree of problems. If the PDF is a (non-searchable) scan, the document is stored as images inside the PDF and you will get no text; instead, you will have to use OCR to convert it into something you can analyze. If the PDF is "searchable", the PDF software has already done that OCR step for you.
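One cheap way to tell the two cases apart before you start counting is to check whether the text layer is (nearly) empty. A sketch, again assuming pdftotext is installed; the 50-character threshold is an arbitrary guess, not a standard value:

```python
import subprocess

def looks_like_scan(pdf_path, min_chars=50):
    """Heuristic: if the text layer is (nearly) empty, the PDF is
    probably an image-only scan and needs OCR before word counting."""
    text = subprocess.run(
        ["pdftotext", pdf_path, "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(text.strip()) < min_chars
```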
Whenever OCR is involved, there will be a lot of garbage in the file. For example, the software that came with my scanner tends to recognize letters very well but can mangle spacing dramatically, either removingspacesfromwords or a d d i n g s p a c e s. (Which is annoying.) It also tends to over-interpret, so stray marks end up being turned into stray letters.
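If you have to live with such output, a crude heuristic can at least glue the spaced-out letters back together. This is only a rough cleanup pass, not a fix: it cannot restore spaces the OCR dropped, and it will wrongly merge genuine one-letter words like "a" or "I":

```python
import re

def collapse_spaced_letters(text):
    # Join any run of 3+ single letters separated by single spaces,
    # e.g. "s p a c e s" -> "spaces". Runs broken by double spaces
    # or punctuation are left alone.
    return re.sub(
        r"\b(?:[A-Za-z] ){2,}[A-Za-z]\b",
        lambda m: m.group(0).replace(" ", ""),
        text,
    )

print(collapse_spaced_letters("It kept a d d i n g  s p a c e s"))
# -> "It kept adding  spaces"
```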
WordStat (content analysis and text mining software) can help you count words in PDF documents. For more information on WordStat, see http://provalisresearch.com/products/content-analysis-software/
You can download a trial version here: http://provalisresearch.com/downloads/trial-versions/