Actually I am applying a dependent ensemble that doesn't have multiple classifiers. It only ensembles a meta-classifier (e.g. RealAdaBoost) with one other classifier such as J48.
How do the two work combined as an ensemble to reduce classification errors?
Generally speaking, an ensemble method can be defined as a system constructed from a set of single models operating together, whose outputs are combined with a combination strategy to generate a single response for a given problem/task. Commonly, the single models can be classifiers, predictors or filters, depending on the type of task. Ensemble approaches are designed to solve classification, prediction, regression or clustering problems. The rationale behind using an ensemble is the fact that no individual model distinctly outperforms all the others, and that combining them may lead to better results.
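As an illustration, a combination strategy can be as simple as a majority vote over the single models' outputs. The sketch below is plain Python with made-up model outputs, just to show the idea:

```python
# Minimal sketch of an ensemble combination strategy: majority vote.
# The "model outputs" below are hypothetical stand-ins for the predictions
# of already-trained single classifiers on one instance.
from collections import Counter

def majority_vote(predictions):
    """Combine the single models' outputs into one response."""
    return Counter(predictions).most_common(1)[0][0]

# Each model predicts a class label for the same instance.
model_outputs = ["spam", "ham", "spam"]
print(majority_vote(model_outputs))  # -> spam
```

Other common strategies are weighted voting (for classifiers) or averaging (for predictors/regressors), but the principle is the same: many outputs in, one response out.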
There are two types of ensemble approaches:
(a) Homogeneous Ensembles: this type has two sub-types:
(1) an ensemble that combines one base model under at least two different configurations (e.g. SVR with an RBF kernel and SVR with a polynomial kernel),
or (2) a combination of one ensemble learning method (meta model), such as Bagging, Random Subspace, or Boosting, with one base model.
(b) Heterogeneous Ensembles: this term refers to an ensemble that combines at least two different base methods.
However, in order for an ensemble method to achieve a high level of performance, its constituents must satisfy two conditions: accuracy and diversity.
An accurate model is one that produces better estimates than random guessing.
Two models are diverse if they make different errors on the same data instances.
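A quick way to see this: compare the error sets of two models on the same instances. The predictions below are made up purely for illustration:

```python
# Illustrative diversity check: two models are diverse if they err on
# different instances. All values here are hypothetical.
truth   = [1, 0, 1, 1, 0]
model_a = [1, 0, 0, 1, 0]   # errs on instance 2
model_b = [1, 1, 1, 1, 0]   # errs on instance 1

errors_a = [i for i, (p, t) in enumerate(zip(model_a, truth)) if p != t]
errors_b = [i for i, (p, t) in enumerate(zip(model_b, truth)) if p != t]

# Both models make one mistake, but on different instances, so the pair
# is diverse: a majority vote with a third model could fix both errors.
print(errors_a, errors_b)  # -> [2] [1]
```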
Hence, the success of an ensemble resides in reaching a good tradeoff between accuracy and diversity.
In fact, the main issue resides in diversity, because there is no formal definition of it. However, there are several ways to generate diversity when creating an ensemble, such as training the base models on different samples of the data, feature subset selection, varying the learning parameters of the base model, or using different types of models.
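For example, training on different samples of the data is commonly done by bootstrap resampling, as in Bagging. A minimal sketch (illustrative names, plain Python):

```python
# Sketch: generating diversity by training each base model on a different
# sample of the data (bootstrap resampling, as used by Bagging).
import random

def bootstrap_sample(data, rng):
    """Draw a sample of the same size as the data, with replacement."""
    return [rng.choice(data) for _ in data]

rng = random.Random(0)          # fixed seed so the sketch is reproducible
data = list(range(10))          # stand-in for the training instances
sample_a = bootstrap_sample(data, rng)
sample_b = bootstrap_sample(data, rng)

# The two samples are (almost certainly) different, so a base model trained
# on each will tend to make different errors -> diversity.
print(len(sample_a), len(sample_b))  # -> 10 10
```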
In your case, diversity is generated by using different samples of the data: "Each base classifier is trained on data that is weighted based on the performance of the previous classifier".
Note that your ensemble does have more than one classifier, because a new classifier is created at each boosting iteration.
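To make that mechanism concrete, here is a minimal boosting-style sketch on 1-D data, using threshold "stumps" as stand-ins for J48 trees. It illustrates the reweight-and-add-a-classifier loop in the discrete AdaBoost style; it is not RealAdaBoost or Weka's implementation:

```python
# Boosting sketch: each iteration trains a new base classifier on weighted
# data, then increases the weights of misclassified instances so the next
# classifier focuses on them. Toy 1-D dataset; labels are in {-1, +1}.
import math

X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [-1, -1, -1, 1, 1, 1]
w = [1.0 / len(X)] * len(X)          # start with uniform instance weights

def train_stump(X, y, w):
    """Pick the threshold/sign stump that minimizes the weighted error."""
    best = None
    for t in [x + 0.5 for x in X]:
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if sign * (1 if xi > t else -1) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

stumps = []
for _ in range(3):                    # each iteration creates one new classifier
    err, t, sign = train_stump(X, y, w)
    err = max(err, 1e-10)             # avoid log(1/0) on a perfect stump
    alpha = 0.5 * math.log((1 - err) / err)   # classifier's vote weight
    stumps.append((alpha, t, sign))
    # Re-weight: misclassified instances gain weight, correct ones lose it.
    w = [wi * math.exp(-alpha * yi * sign * (1 if xi > t else -1))
         for xi, yi, wi in zip(X, y, w)]
    total = sum(w)
    w = [wi / total for wi in w]

def predict(x):
    """Final ensemble: weighted vote of all stumps created so far."""
    s = sum(a * sg * (1 if x > t else -1) for a, t, sg in stumps)
    return 1 if s > 0 else -1

print([predict(x) for x in X])  # -> [-1, -1, -1, 1, 1, 1]
```

So even though you configured "one" meta classifier with "one" base classifier, the meta algorithm internally builds a whole sequence of base classifiers (here, `stumps`) and combines their weighted votes, which is exactly why it is still an ensemble.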