The basic reason is that an exact test is generally performed by complete enumeration of all possible tables given the row and column margins. Obviously, if you have a large N there will be many, many possible tables, so the algorithm has to run for a very long time: it must calculate the probability of every possible table and then add up the probabilities of all tables with a Pearson X² at least as large as that of your observed table.
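To make that concrete, here is a minimal Python sketch of the complete-enumeration idea for a small made-up 2×3 table (the counts are purely illustrative, not from anyone's data): enumerate every table with the same margins, weight each by its conditional (multivariate hypergeometric) probability, and sum the probabilities of the tables whose Pearson X² is at least as large as the observed one. For larger tables and larger N the number of such tables, and hence the running time, explodes.

```python
from math import factorial, prod

# A small made-up 2x3 table, purely to illustrate the enumeration;
# your own counts and margins would go here.
observed = [[3, 1, 4],
            [2, 5, 0]]

row_sums = [sum(r) for r in observed]        # [8, 7]
col_sums = [sum(c) for c in zip(*observed)]  # [5, 6, 4]
N = sum(row_sums)                            # 15

def table_prob(t):
    # Probability of a table under independence, conditional on both
    # margins (multivariate hypergeometric distribution).
    num = prod(factorial(r) for r in row_sums) * prod(factorial(c) for c in col_sums)
    den = factorial(N) * prod(factorial(x) for row in t for x in row)
    return num / den

def pearson_x2(t):
    # Pearson chi-square against the expected counts r_i * c_j / N.
    return sum((t[i][j] - row_sums[i] * col_sums[j] / N) ** 2
               / (row_sums[i] * col_sums[j] / N)
               for i in range(2) for j in range(3))

x2_obs = pearson_x2(observed)

# Complete enumeration: for a 2x3 table the first two cells of row 1
# determine everything else, so loop over them and keep feasible tables.
p_exact, n_tables = 0.0, 0
for n11 in range(min(row_sums[0], col_sums[0]) + 1):
    for n12 in range(min(row_sums[0] - n11, col_sums[1]) + 1):
        n13 = row_sums[0] - n11 - n12
        if n13 > col_sums[2]:
            continue
        top = [n11, n12, n13]
        bottom = [col_sums[j] - top[j] for j in range(3)]
        if min(bottom) < 0:
            continue
        t = [top, bottom]
        n_tables += 1
        if pearson_x2(t) >= x2_obs - 1e-12:  # "as extreme or more" in X^2
            p_exact += table_prob(t)

print(f"{n_tables} tables enumerated, exact p-value = {p_exact:.4f}")
```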
There is an alternative: instead of complete enumeration, do a Monte Carlo analysis, i.e. estimate the p-value from a large random sample of tables with the same margins.
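If it helps, here is a rough, NumPy-only sketch of one common way to do that, a permutation version of the Monte Carlo test: expand the table into individual observations, shuffle the column labels (which keeps both margins fixed), and estimate the p-value as the proportion of simulated tables with a Pearson X² at least as large as the observed one. The table in the code is made up; plug in your own counts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed r x c table; substitute your own counts.
observed = np.array([[12,  5,  9],
                     [ 3, 14,  8],
                     [ 7,  6, 11]])

def pearson_x2(t):
    # Pearson chi-square against the expected counts under independence.
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / t.sum()
    return ((t - expected) ** 2 / expected).sum()

x2_obs = pearson_x2(observed)

# Expand the table into one (row label, column label) pair per observation.
rows, cols = [], []
for i in range(observed.shape[0]):
    for j in range(observed.shape[1]):
        rows += [i] * observed[i, j]
        cols += [j] * observed[i, j]
rows, cols = np.array(rows), np.array(cols)

n_sim, hits = 10_000, 0
for _ in range(n_sim):
    # Randomly permuting the column labels keeps both margins fixed, so
    # each simulated table is a draw from the null distribution
    # conditional on the margins.
    sim = np.zeros_like(observed)
    np.add.at(sim, (rows, rng.permutation(cols)), 1)
    if pearson_x2(sim) >= x2_obs:
        hits += 1

# (hits + 1) / (n_sim + 1) is the usual, slightly conservative estimate.
print(f"Monte Carlo p-value ~ {(hits + 1) / (n_sim + 1):.4f}")
```

Even for fairly large tables and a large N, sampling a few tens of thousands of tables like this takes seconds rather than the hours (or worse) that complete enumeration can take.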
Some details about your problem can be found in: Verbeek, A., & Kroonenberg, P. M. (1985). A survey of algorithms for exact distributions of test statistics in r × c contingency tables with fixed margins. Computational Statistics & Data Analysis, 3, 159-185.
If you are willing to send me your table I could give you more precise information and even give you the exact p-value and how many tables there are for your problem.
I have encountered the same issue while using SAS for an 8×8 table. It is not recommended to go for the exact test if you have a table larger than 2×2. The alternative, as Pieter mentioned, is to do a Monte Carlo analysis, where you specify the number of iterations. I would say start with 2,000 to see the computational time and get an idea of the results, but for research applications definitely go higher (10,000 or more).
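One way to decide how many iterations are enough is to look at the precision of the Monte Carlo estimate itself: it is just a binomial proportion, so its standard error is sqrt(p(1-p)/B) for B simulated tables. A tiny Python sketch (the p-value of 0.05 and the sample sizes are only examples):

```python
import math

def mc_half_width(p_hat, n_sim, z=2.576):
    # Approximate 99% confidence half-width of a Monte Carlo p-value
    # estimate based on n_sim simulated tables (binomial standard error).
    return z * math.sqrt(p_hat * (1 - p_hat) / n_sim)

for n_sim in (2_000, 10_000, 100_000):
    print(n_sim, round(mc_half_width(0.05, n_sim), 4))
# For a p-value near 0.05 the 99% half-width shrinks from roughly 0.013
# with 2,000 samples to roughly 0.002 with 100,000, which is why more
# iterations pay off when the p-value is close to your cutoff.
```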
I am sorry to disagree. The exact test can in principle be carried out on any contingency table, provided the margins are fixed. Unfortunately, the computing time goes up very fast with larger tables and larger N. See the article by Verbeek and me mentioned earlier.
Fisher just did not consider larger tables; it would have been a terrible job in his day and age. I am not sure who first worked on exact tests for larger tables, but one could easily find out, I guess.