Yes. If you look at the research on Ensemble Methods and Meta-Learning, you'll see that the reason these methods were employed was exactly to increase accuracy.
The wiki page has a suitable introduction:
http://en.wikipedia.org/wiki/Ensemble_learning
And the two 1993 papers from Chan and Stolfo, "Experiments on Multistrategy Learning by Meta-Learning" and "Toward Parallel and Distributed Learning by Meta-Learning", give a good introduction to the general techniques.
The application of ensemble-based systems is beneficial mostly because the weaknesses of one method can be compensated for by the others, so the result is expected to be more reliable. As Lionel mentioned, the methods should not have the same decision boundaries (i.e. they should be asynchronous). One suggestion for meeting this condition is to divide the training dataset into smaller subsets and use each of them to train a different classifier (a small code sketch of this idea follows the reference below). There is comprehensive research behind this topic. I would refer you to one great summary article and one book devoted strictly to ensemble systems (for a better understanding of the underlying techniques).
Polikar, R. (2006). Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, 6(3), 21-45.
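A minimal sketch of that subset idea, assuming scikit-learn and a synthetic binary-classification dataset; the three base classifiers and all parameters are arbitrary choices for illustration, not a recommendation:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Three different base learners, each trained on its own disjoint
    # subset of the training data.
    models = [DecisionTreeClassifier(random_state=0),
              KNeighborsClassifier(),
              LogisticRegression(max_iter=1000)]
    subsets = np.array_split(np.random.RandomState(0).permutation(len(X_train)),
                             len(models))
    for model, idx in zip(models, subsets):
        model.fit(X_train[idx], y_train[idx])

    # Combine the individual outputs by majority vote (labels are 0/1 here).
    votes = np.stack([m.predict(X_test) for m in models])
    ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
    print("ensemble accuracy:", np.mean(ensemble_pred == y_test))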
It depends on the training and testing data-set. If the nature of the data-set is such that it is difficult for one method (technique) to model the entire data-set, then it is better to divide the data-set into multiple sub-parts and apply a different method (technique) to evolve a model for each sub-part. The final output (result) is then decided based on the outputs produced by the different models, e.g. by a majority vote as sketched below. The overall goal is to improve the accuracy of the model (classifier). Lionel correctly mentioned that the methods should be asynchronous.
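For the "combine the outputs" step, a common and simple rule is majority voting. A rough sketch of that combination step, assuming scikit-learn (its VotingClassifier performs the vote); the base estimators and the synthetic dataset are arbitrary illustrations:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

    # Three different techniques; the final class is the majority vote.
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier(random_state=1)),
                    ("nb", GaussianNB())],
        voting="hard")

    print("ensemble:", cross_val_score(ensemble, X, y, cv=5).mean())
    print("single tree:", cross_val_score(DecisionTreeClassifier(random_state=1),
                                          X, y, cv=5).mean())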
The reason that ensemble learning, as mentioned by others, often works well is that it helps solve the bias-variance dilemma (https://en.wikipedia.org/wiki/Bias%E2%80%93variance_dilemma).
Most classifiers can be categorized as low variance with high bias, or low bias with high variance. For example, decision trees, neural nets or KNN are low-bias/high-variance classifiers, whereas a simple linear classifier is high-bias/low-variance.
Depending on the situation, you will prefer a low-bias or a low-variance classifier. However, in a perfect world, a classifier would be available that is both low-bias and low-variance. This is exactly what ensemble classifiers try to achieve.
Let's say we use a decision tree classifier, which is known to be low-bias/high-variance. We could decrease the variance by training multiple decision trees on resampled versions of the training data and then using the average prediction as the final classification. However, this in turn increases the bias (because all decision trees are trained on roughly the same data). To reduce the bias again, we want to make sure that the decision trees are as uncorrelated as possible. This can be done by randomly choosing the features that are used by each decision tree.
In this example, the result is a low-bias, low-variance classifier. This type of ensemble classifier is known as a 'random forest' and is based on 'bagging' (bootstrap aggregating). Another well-known and related type of ensemble classifier can be created by 'boosting'.
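To make the bagging/random-forest story above concrete, here is a rough sketch, assuming scikit-learn; the dataset and parameters are arbitrary illustrations rather than a benchmark. It compares a single tree, bagged trees (bootstrap resampling only) and a random forest (bootstrap resampling plus random feature subsets):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               random_state=2)

    candidates = [
        ("single tree", DecisionTreeClassifier(random_state=2)),
        # Bagging: many trees, each fit on a bootstrap resample of the data.
        ("bagged trees", BaggingClassifier(DecisionTreeClassifier(),
                                           n_estimators=100, random_state=2)),
        # Random forest: bagging plus a random subset of features per split.
        ("random forest", RandomForestClassifier(n_estimators=100,
                                                 random_state=2)),
    ]
    for name, clf in candidates:
        print(name, cross_val_score(clf, X, y, cv=5).mean())

Typically the two ensembles score higher than the single tree, and the forest tends to do at least as well as plain bagging because its trees are less correlated.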
So, as an answer to your question: combining different techniques can indeed increase performance. However, it is important to combine these techniques wisely (e.g. combining two high-variance or two high-bias classifiers is unlikely to outperform a single classifier).