First, estimate the classification rate of each classifier; then there are several ways to combine them. A simple method is the weighted mean: sum(w_i * s_i) / sum(w_i), where s_i is the output of classifier i and w_i its weight (for example, its classification rate). A threshold on the combined score then gives the final decision. This combines the decisions of the individual classifiers into one optimal decision.
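A minimal sketch of this weighted-mean combination (the scores, weights, and 0.5 threshold below are illustrative, not from the answer):

```python
# Weighted-mean combination of classifier scores:
# combined = sum(w_i * s_i) / sum(w_i)

def weighted_mean_combine(scores, weights):
    """Combine per-classifier scores s_i with weights w_i."""
    assert len(scores) == len(weights)
    return sum(w * s for s, w in zip(scores, weights)) / sum(weights)

def decide(scores, weights, threshold=0.5):
    """Map the combined score to a binary decision via a threshold."""
    return 1 if weighted_mean_combine(scores, weights) >= threshold else 0

# Three classifiers output a probability for the positive class;
# the weights are hypothetical classification rates on a validation set.
scores = [0.9, 0.6, 0.4]
weights = [0.85, 0.70, 0.60]
combined = weighted_mean_combine(scores, weights)  # ≈ 0.663
decision = decide(scores, weights)                 # 1
```

A natural choice for the weights is each classifier's accuracy on a held-out validation set, so that better classifiers contribute more to the combined score.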
This is a complicated question to answer. I suggest you look into the field of ensemble learning, which is concerned with combining individual classifiers to form better decisions.
As previously mentioned, a weighted mean is a simple solution (you can also learn the weights with another classifier). A simple agreement (voting) method would also be possible.
I would also suggest having a look at semi-supervised learning. Co-training might be an interesting way to optimize the training of both of your classifiers' models.
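To make the co-training idea concrete, here is a minimal sketch under simplifying assumptions: each sample has two "views" (two scalar features), one nearest-centroid classifier is trained per view, and in each round each classifier labels the unlabeled point it is most confident about and adds it to the shared labeled pool. The data and classifiers are hypothetical, purely for illustration:

```python
# Minimal co-training sketch: two views, one 1-D nearest-centroid
# classifier per view, confidence = negative distance to the centroid.

def centroid_classifier(labeled, view):
    """Train per-class means of the chosen view (0 or 1)."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x[view]
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}

    def predict(x):
        label = min(centroids, key=lambda y: abs(x[view] - centroids[y]))
        return label, -abs(x[view] - centroids[label])  # (label, confidence)
    return predict

def co_train(labeled, unlabeled, rounds=2):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        for view in (0, 1):
            if not unlabeled:
                break
            clf = centroid_classifier(labeled, view)
            # Label the unlabeled point this view is most confident about.
            best = max(unlabeled, key=lambda x: clf(x)[1])
            label, _ = clf(best)
            labeled.append((best, label))
            unlabeled.remove(best)
    return labeled

# Two labeled samples, two unlabeled ones; both end up labeled.
result = co_train([((0.0, 0.1), 0), ((1.0, 0.9), 1)],
                  [(0.1, 0.0), (0.9, 1.0)])
```

The key assumption behind co-training is that the two views are individually sufficient and reasonably independent; each classifier then teaches the other on the examples it can label confidently.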
Finally, you can also have a look at late fusion, which offers several solutions for merging classifier outputs (this topic is often addressed in multimodal contexts).
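A toy sketch of one common late-fusion scheme, a weighted sum of per-modality class scores followed by an arg-max (the modalities, scores, and weights are made up for illustration):

```python
# Late fusion: each modality's classifier outputs a score per class;
# the fused score per class is a weighted sum over modalities, and the
# decision is the arg-max class.

def late_fuse(modality_scores, modality_weights):
    """modality_scores: one {class: score} dict per modality."""
    fused = {}
    for scores, w in zip(modality_scores, modality_weights):
        for cls, s in scores.items():
            fused[cls] = fused.get(cls, 0.0) + w * s
    return max(fused, key=fused.get), fused

# The visual modality favors "cat", the audio modality favors "dog";
# the visual classifier is trusted more, so "cat" wins.
visual = {"cat": 0.8, "dog": 0.2}
audio = {"cat": 0.4, "dog": 0.6}
label, fused = late_fuse([visual, audio], [0.7, 0.3])  # label == "cat"
```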
You may want to have a look at this article for a general overview of fusion:
Niaz, U.; Merialdo, B., "Fusion methods for multi-modal indexing of web data," 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), pp. 1-4, 3-5 July 2013.
doi: 10.1109/WIAMIS.2013.6616129
And this one for more details on sample-specific weights in late fusion:
Lai, K.; Liu, D.; Chen, M.; Chang, S., "Learning Sample Specific Weights for Late Fusion," IEEE Transactions on Image Processing, vol. PP, no. 99.
There is also a field concerned with combining ranked lists; I think it can be applied to classifier combination. For example:
Allan, J.; Leuski, A.; Swan, R.; Byrd, D., "Evaluating combinations of ranked lists and visualizations of inter-document similarity," Information Processing and Management, 2001.
Ensemble learning, and in particular a simple voting ensemble: you can use as many classifiers as you like on top of different voting schemes.
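A quick majority-vote sketch in pure Python (the three threshold "classifiers" are toy stand-ins for real models):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one predicted label per classifier for a single sample."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, x):
    """Run every classifier on x and return the majority label."""
    return majority_vote([clf(x) for clf in classifiers])

# Three toy "classifiers": threshold rules on a scalar input.
clfs = [lambda x: int(x > 0.3),
        lambda x: int(x > 0.5),
        lambda x: int(x > 0.8)]
ensemble_predict(clfs, 0.6)  # → 1 (two of the three vote 1)
```

Other voting schemes weight each vote (e.g. by validation accuracy) or average predicted probabilities instead of counting hard labels.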
I think you should start with sample datasets from the UCI repository and run some experiments to see the results quantitatively. Then read some articles to absorb the underlying theory.
WEKA also supports many other ensemble algorithms, with a user-friendly GUI.