As I read in papers, there are a bunch of methods like BMA (Bayesian Model Averaging) or something like equal and unequal weighting. Which is best? Is there any new way?
There is not a unique answer or a single best practice. You can find a lot of publications concerning the statistical post-processing of ensemble forecasts. Playing with words about newly proposed methodologies, we could say that the latest method may be the best, but the best may not be the latest. More seriously, you know that dynamical model predictions exhibit systematic patterns of forecast error. Ensemble prediction systems have the same deficiencies and, as a result, frequently produce biased central tendencies and under-dispersive ensemble spread. Statistical post-processing therefore exploits correlations between forecast variables and contemporaneous observations to mitigate these defects and improve the reliability of deterministic and probabilistic forecast estimates. It seeks unbiased single-valued forecast estimates (for example, the influential regression method of model output statistics) and calibrated predictive distributions that have unbiased means and optimized sharpness, subject to statistical consistency with the observations.
A direct Bayesian extension of the method of ensemble model output statistics (Richter 2012) can be applied to short-range forecasting. A hierarchical Bayesian probability model can be used to invert the canonical probability statement and stochastically parameterize observable forecast variables with unobservable model parameters within a multivariate multiple linear regression framework. In this way, a priori forecast beliefs are conditioned on a time series of previous model forecasts (predictors) and their corresponding observations (predictands) to train a hierarchical multivariate Bayesian predictive model.
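To illustrate the basic mechanics of conditioning forecast beliefs on past forecast/observation pairs, here is a minimal sketch using a conjugate Bayesian linear regression (Normal-Inverse-Gamma prior with a zero prior mean). It is much simpler than the full hierarchical multivariate model described above, but it shows how the posterior predictive distribution of the observation, given new predictors, comes out as a Student-t. All names (X_train, y_train, x_new, ...) are illustrative assumptions, not part of any specific published method.

```python
# Minimal sketch: conjugate Bayesian linear regression for forecast calibration.
# Predictors are past (bias-corrected) member forecasts, predictand is the
# verifying observation; the posterior predictive is a Student-t distribution.
import numpy as np
from scipy.stats import t as student_t

def bayes_linreg_predict(X_train, y_train, x_new, v0=10.0, a0=1.0, b0=1.0):
    """X_train: (n, p) past forecasts as predictors (including a column of ones),
    y_train: (n,) observations, x_new: (p,) predictors for the new forecast."""
    n, p = X_train.shape
    V0_inv = np.eye(p) / v0                         # vague prior on coefficients
    Vn = np.linalg.inv(V0_inv + X_train.T @ X_train)
    mun = Vn @ (X_train.T @ y_train)                # posterior mean (prior mean = 0)
    an = a0 + 0.5 * n
    bn = b0 + 0.5 * (y_train @ y_train - mun @ np.linalg.inv(Vn) @ mun)
    loc = x_new @ mun
    scale = np.sqrt((bn / an) * (1.0 + x_new @ Vn @ x_new))
    return student_t(df=2 * an, loc=loc, scale=scale)   # predictive distribution
```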
More generally, in an ensemble prediction system you obtain multiple weather forecasts by iterating forward random perturbations of a best estimate of the initial conditions (see Toth et al. 2001). The forecasts obtained from an ensemble can be synthesised into a single value, which may come from an average (the ensemble mean) or from a selection procedure. A number of ways of treating ensemble forecasts have been suggested. Bayesian Model Averaging (BMA) uses forecasts from deterministic models as inputs to a statistical model and can be extended to dynamical models. Among the forecasts there is a "best" one, provided the set of models has the property that each model is different and identifiable; BMA quantifies the uncertainty about which model is the "best". Each forecast can be corrected for bias, for example by using observed values and computing a predictive distribution of the observation conditional on the "best" forecast in the ensemble. The probability that each forecast is the best one is estimated from the performance of each model over a training set of forecasts and observations, and these probabilities are the weights assigned to each forecast.
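As a rough illustration of those mechanics, here is a minimal sketch of Gaussian-kernel BMA in the spirit of the standard formulation: the predictive density is a weighted mixture of normals centred on the (already bias-corrected) members, and the weights and common kernel spread are estimated by EM on a training set. Names such as train_fcst and train_obs are illustrative assumptions.

```python
# Minimal BMA sketch: mixture of Gaussian kernels with a common variance,
# weights estimated by EM from a training set of forecasts and observations.
import numpy as np
from scipy.stats import norm

def fit_bma(train_fcst, train_obs, n_iter=200, tol=1e-6):
    """train_fcst: (n_times, n_members) bias-corrected member forecasts,
    train_obs: (n_times,) verifying observations.
    Returns the BMA weights and the common kernel standard deviation."""
    n_times, n_members = train_fcst.shape
    w = np.full(n_members, 1.0 / n_members)            # start from equal weights
    sigma = np.std(train_obs - train_fcst.mean(axis=1))
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation
        dens = norm.pdf(train_obs[:, None], loc=train_fcst, scale=sigma)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update the weights and the kernel spread
        w = z.mean(axis=0)
        sigma = np.sqrt(np.sum(z * (train_obs[:, None] - train_fcst) ** 2) / n_times)
        ll = np.sum(np.log((w * dens).sum(axis=1)))
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, sigma

def bma_predictive_cdf(y, fcst, w, sigma):
    """Predictive CDF at value(s) y for one ensemble forecast fcst (n_members,)."""
    return np.sum(w * norm.cdf(np.asarray(y)[..., None], loc=fcst, scale=sigma), axis=-1)
```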
Another approach is the best-member dressing method. Given an ensemble forecast, it is unlikely that any of the ensemble members will exactly equal the observation. This lack of correspondence can be taken into account by assigning an error distribution to each ensemble member, which requires knowing the appropriate degree of uncertainty. One option is to associate that uncertainty with the ensemble's best member, defined as the member nearest to the observed data in the state space of the weather system.
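A minimal sketch of that idea, assuming a simple Gaussian dressing kernel estimated from the historical errors of the member closest to the observation (names such as hist_fcst and hist_obs are illustrative):

```python
# Minimal best-member dressing sketch: estimate the best-member error spread
# from a training archive, then dress each member with noise from that kernel.
import numpy as np

def best_member_errors(hist_fcst, hist_obs):
    """hist_fcst: (n_times, n_members), hist_obs: (n_times,).
    Return the error of the member nearest to the observation at each time."""
    idx = np.argmin(np.abs(hist_fcst - hist_obs[:, None]), axis=1)
    best = hist_fcst[np.arange(len(hist_obs)), idx]
    return hist_obs - best

def dress_ensemble(fcst, best_errors, n_draws=50, rng=None):
    """Dress one forecast (n_members,) with Gaussian noise whose spread matches
    the historical best-member errors; returns a larger pseudo-ensemble."""
    rng = np.random.default_rng(rng)
    sigma = best_errors.std()
    noise = rng.normal(0.0, sigma, size=(fcst.size, n_draws))
    return (fcst[:, None] + noise).ravel()
```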
As Raffaele states in the previous answer, there is a large number of approaches to post-processing. I would recommend the following:
First, bias-correct each model forecast according to its behaviour in retrospective hindcasts, which should be available for the same start time and the same lead time. Model bias grows with lead time, so the simplest way to do this does not involve any observations: it is simply a matter of subtracting the average of all hindcasts with the same start date and the same lead time from each prediction. Once this is done you have a forecast anomaly.
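A minimal sketch of that step, assuming the forecast and hindcasts are organised by lead time (array names are illustrative):

```python
# Minimal sketch: lead-time-dependent bias correction against hindcasts.
import numpy as np

def forecast_anomaly(forecast, hindcasts):
    """forecast: (n_leads,) one forecast as a function of lead time.
    hindcasts: (n_years, n_leads) retrospective forecasts with the same start
    date and lead times. Subtracting the lead-dependent hindcast mean (the
    model climatology) removes the growing model bias at every lead."""
    model_clim = hindcasts.mean(axis=0)   # mean over hindcast years, per lead
    return forecast - model_clim
```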
You may then compare the predicted anomalies with the observations and recalibrate the forecasts. In some cases, such as in the tropics, the ensemble members are too close together and the forecasts are said to be overconfident; here you can easily recalibrate the ensemble by inflating its spread. In other cases the forecasts may actually be underconfident, and then the ensemble mean can be inflated while preserving the total variance to give the best predictions. See this paper for an example of this for the North Atlantic Oscillation: Article Do seasonal to decadal climate predictions underestimate the...
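One common variance-preserving way to do this rescales the ensemble-mean signal and the spread about the mean so that the predictable fraction matches the forecast-observation correlation while the total variance matches the observed variance. The sketch below is only an illustration of that idea; the function and variable names are assumptions, not taken from the cited paper.

```python
# Minimal sketch of variance-preserving recalibration of forecast anomalies.
import numpy as np

def recalibrate(anom, train_anom, train_obs):
    """anom: (n_members,) one forecast of anomalies.
    train_anom: (n_times, n_members) training forecasts; train_obs: (n_times,)."""
    ens_mean = train_anom.mean(axis=1)
    r = np.corrcoef(ens_mean, train_obs)[0, 1]        # skill of the ensemble mean
    sig_o = train_obs.std()
    sig_m = ens_mean.std()
    sig_e = (train_anom - ens_mean[:, None]).std()    # spread about the mean
    alpha = r * sig_o / sig_m                         # inflate/deflate the signal
    beta = np.sqrt(1.0 - r ** 2) * sig_o / sig_e      # rescale the noise
    mean_now = anom.mean()
    return alpha * mean_now + beta * (anom - mean_now)
```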
You might also try a simple regression of the bias-corrected forecasts against observations to work out the best relationship, as was done here: Article Seasonal Forecasts of the Summer 2016 Yangtze River Basin Rainfall
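For completeness, a minimal sketch of that regression step, fitting observations against the bias-corrected ensemble-mean forecast over a training period and applying the fitted relationship to a new forecast (names are illustrative):

```python
# Minimal sketch: ordinary least-squares calibration of the ensemble mean.
import numpy as np

def regress_calibrate(new_fcst_mean, train_fcst_mean, train_obs):
    """Regress observations on bias-corrected ensemble-mean forecasts and
    apply the fitted line to a new ensemble-mean forecast."""
    slope, intercept = np.polyfit(train_fcst_mean, train_obs, deg=1)
    return intercept + slope * new_fcst_mean
```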