Opposition-based Learning (OBL) is a machine-learning concept inspired by the opposite relationship among entities. It is commonly used to improve a randomly generated initial population.
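For a solution x with bounds [lower, upper], the opposite point is usually defined as x^ = lower + upper - x. A minimal sketch (the bounds and population shape here are just illustrative assumptions):

```python
import numpy as np

def opposite(population, lower, upper):
    """Elementwise opposite of each solution: x_hat = lower + upper - x."""
    return lower + upper - population

# Hypothetical example: 3 solutions in 2 dimensions, bounds [0, 10]
rng = np.random.default_rng(0)
S = rng.uniform(0.0, 10.0, size=(3, 2))
S_hat = opposite(S, 0.0, 10.0)
```

Each pair (x, x^) is symmetric about the midpoint of the bounds, so every coordinate satisfies x + x^ = lower + upper.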
I think it really depends on how you initialize the population.
Assume you randomly generate a population S with n solutions, then construct the opposite population S^ from S. There are two common ways to proceed:
+ 1st: Compare each solution in S with its counterpart in S^ at the same index, and keep the one with the better fitness.
+ 2nd: Concatenate S and S^, then keep the n solutions with the best fitness.
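Both selection strategies can be sketched as follows (the sphere function is just an assumed toy fitness for minimization, and the sizes are arbitrary):

```python
import numpy as np

def sphere(x):
    """Assumed toy fitness (minimization): sum of squares per solution."""
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
lower, upper, n, dim = -5.0, 5.0, 6, 3
S = rng.uniform(lower, upper, size=(n, dim))
S_hat = lower + upper - S          # opposite population

f_S, f_hat = sphere(S), sphere(S_hat)

# 1st way: pairwise comparison at each index, keep the fitter of the two
pairwise = np.where((f_S <= f_hat)[:, None], S, S_hat)

# 2nd way: pool S and S_hat, keep the n globally best solutions
pool = np.vstack([S, S_hat])
best_n = pool[np.argsort(sphere(pool))[:n]]
```

Note that the 2nd way can select both members of one pair and neither member of another, which is exactly how the whole population can drift toward one side of the space.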
Both ways share a weakness: all selected solutions may end up on one side of the search space. If that side contains a local optimum, your algorithm will converge quickly, but to that local optimum.
I think the best way to use the opposition-based technique for the initial population is:
Randomly generate a population S with n/2 solutions, then create the remaining n/2 solutions with the opposition-based technique. This guarantees that the population does not fall entirely on one side of the search space.
The class of metaheuristics has a serious disadvantage: there is no termination criterion based on optimality or stationarity. Hence you may (though probably will not) pass by an optimal solution, and you will not know it. Methods based on optimality theory are better equipped in this respect.