If you know Java, you can write your own code using the Weka classes to run the experiments and evaluate them. From the Weka Experimenter, under the Analyse panel, you can carry out a t-test to compare the results. You may also evaluate the ARFF results with a scripting language such as Perl or Python to find the optimal ML algorithm, parameters, and datasets.
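As a starting point, here is a minimal sketch of that programmatic approach using the Weka API; the dataset path and the choice of J48 with its confidence factor are placeholders you would swap for your own classifier and parameters:

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidateJ48 {
    public static void main(String[] args) throws Exception {
        // Load an ARFF file (hypothetical path) and set the class attribute
        Instances data = DataSource.read("iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Try a few values of J48's confidence factor and report 10-fold CV accuracy
        for (float c : new float[] {0.1f, 0.25f, 0.5f}) {
            J48 tree = new J48();
            tree.setConfidenceFactor(c);
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.printf("C=%.2f accuracy=%.2f%%%n", c, eval.pctCorrect());
        }
    }
}
```

The per-fold results could then be written out and post-processed in Perl or Python, or fed into the Experimenter's significance tests.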
You may be able to come up with better solutions by using your own judgment and by designing and running experiments manually (in batches). Or, beyond the suggestions above, you could write a genetic algorithm to search the parameter space.
WEKA in general provides good default parameters. Also be careful not to put too much effort into tuning parameters, since you can end up overfitting your data. At some point it may be more worthwhile to compare multiple algorithms using multiple metrics. In most problems, gaining a bit more accuracy matters less than reducing false positives or computation time.
You can use one of the meta-classifiers for parameter optimization already built into Weka, for example CVParameterSelection or even GridSearch. Example (a code sketch follows the link below):
1. set classifier to CVParameterSelection
2. set a base classifier within the settings of CVParameterSelection
3. set the CVParameters (examples also in the link below) in the form: parameter_name lower_bound upper_bound number_of_steps
More about this topic: http://weka.wikispaces.com/Optimizing+parameters
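The same steps can also be done programmatically. Below is a minimal sketch, assuming J48 as the base classifier and a search over its confidence factor -C from 0.1 to 0.5 in 5 steps; the dataset path is a placeholder:

```java
import weka.classifiers.meta.CVParameterSelection;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.Utils;
import weka.core.converters.ConverterUtils.DataSource;

public class CVParamDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");   // hypothetical path
        data.setClassIndex(data.numAttributes() - 1);

        // Step 1 & 2: CVParameterSelection wrapping a base classifier
        CVParameterSelection ps = new CVParameterSelection();
        ps.setClassifier(new J48());

        // Step 3: parameter_name lower_bound upper_bound number_of_steps
        ps.addCVParameter("C 0.1 0.5 5");

        // Runs the cross-validated search and builds the final model
        ps.buildClassifier(data);
        System.out.println("Best options: "
                + Utils.joinOptions(ps.getBestClassifierOptions()));
    }
}
```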
More generally, the Nelder-Mead optimization algorithm may be useful as well. Treat the function you're optimizing as the cross-validation performance of your algorithm trained with different parameter values.
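A rough sketch of that idea, assuming Apache Commons Math 3 for the Nelder-Mead implementation; the dataset path, the SMO classifier, and the log-scale search over its complexity parameter C are placeholders:

```java
import java.util.Random;
import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.NelderMeadSimplex;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.SimplexOptimizer;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class NelderMeadTuning {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");   // hypothetical path
        data.setClassIndex(data.numAttributes() - 1);

        // Objective: 10-fold CV accuracy as a function of SMO's complexity C
        MultivariateFunction cvAccuracy = point -> {
            try {
                SMO smo = new SMO();
                smo.setC(Math.pow(10, point[0]));        // search C on a log scale
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(smo, data, 10, new Random(1));
                return eval.pctCorrect();
            } catch (Exception e) {
                return Double.NEGATIVE_INFINITY;         // penalize settings that fail
            }
        };

        SimplexOptimizer optimizer = new SimplexOptimizer(1e-4, 1e-4);
        PointValuePair best = optimizer.optimize(
                new MaxEval(50),
                new ObjectiveFunction(cvAccuracy),
                GoalType.MAXIMIZE,
                new InitialGuess(new double[] {0.0}),    // log10(C) = 0, i.e. C = 1
                new NelderMeadSimplex(1));

        System.out.printf("Best C = %.4f, CV accuracy = %.2f%%%n",
                Math.pow(10, best.getPoint()[0]), best.getValue());
    }
}
```

Keep in mind that cross-validation accuracy is a noisy objective, so fix the random seed (as above) and don't expect Nelder-Mead to do much better than a coarse grid unless you have several continuous parameters.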