I'd recommend using either of the two as a means to extract features via linguistic pre-processing, writing the features out, and using WEKA. The reason is that both frameworks are NLP pipelines rather than document classification algorithms, so there is a lot of (computational and engineering) overhead in swapping out classifiers, which can make experimentation cumbersome.
Once you have found which classifier you need, you can still explore how to integrate it with the frameworks. Your choice of framework depends on the availability of preprocessing components for your particular document type (language, domain, etc.).
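The "write it out and use WEKA" step boils down to dumping your extracted features in WEKA's ARFF format. A minimal sketch, assuming two hypothetical numeric features (token count, average word length) and a spam/ham class label; the class and attribute names are illustrative, not part of any framework:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class ArffExport {

    // Serialize per-document feature vectors and labels into ARFF text.
    // The attribute names here are placeholders for whatever your
    // NLP pipeline actually produces.
    public static String toArff(double[][] features, String[] labels) {
        StringBuilder sb = new StringBuilder();
        sb.append("@relation documents\n");
        sb.append("@attribute tokenCount numeric\n");
        sb.append("@attribute avgWordLength numeric\n");
        sb.append("@attribute class {spam,ham}\n");
        sb.append("@data\n");
        for (int i = 0; i < features.length; i++) {
            sb.append(features[i][0]).append(',')
              .append(features[i][1]).append(',')
              .append(labels[i]).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Toy feature vectors standing in for real pipeline output.
        double[][] feats = {{120, 4.2}, {45, 5.1}};
        String[] labels = {"spam", "ham"};
        try (PrintWriter out = new PrintWriter(new FileWriter("documents.arff"))) {
            out.print(toArff(feats, labels));
        }
    }
}
```

The resulting `documents.arff` can be loaded directly in the WEKA Explorer or Experimenter, so you can try different classifiers without re-running the pipeline.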
I can only second Chris Biemann: WEKA is a great, open-source, hassle-free tool for this.
Use WEKA's Experimenter: set up your experiment, start it, lean back, and drink a coffee while waiting for the results. It also provides classifier analyses out of the box.