The easy answer is: it depends. If you want an accurate representation of the running time, you probably want to run your tests over many different sets of weights, in particular ones that cause your implementation to take the longest. It really depends on the goals of your research.
A common practice is to pick sets of weights that are known to cause problems for this type of algorithm, i.e., ones that trigger the worst behaviour in terms of running time or storage. Another handy approach is a randomized test, to get an "average" idea of how well the implementation does. If it is a novel approach, it is best to look at other papers that tackle the same problem and see how they tested their implementations. Keep in mind that picking inputs that make the easiest "work" for the program is not what you want, unless you only want to contrast how quickly it could perform (analogous to a best-case analysis, but not the same) with how badly it could perform (analogous to a worst-case analysis).
The design of your experiments will depend heavily on the problem you are considering. For example, if it is a scheduling problem, you may need to find good sets of weights (in the sense described above) for the tests, since the number of jobs typically varies with a parameter, and likewise m. That is what I would do, at least. For instance, if many implementations do poorly when the weights are given in reverse sorted order, make sure your test cases include that; a sketch of such a benchmark follows.
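A minimal sketch of such a benchmark, assuming a hypothetical `solve(weights)` callable that runs your implementation on one weight set; the adversarial, random, and easy weight sets are only illustrative:

```python
import random
import time

def benchmark(solve, weight_sets, repeats=3):
    """Time a solver on several weight sets; report the best of `repeats` runs."""
    timings = {}
    for name, weights in weight_sets.items():
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            solve(weights)
            best = min(best, time.perf_counter() - start)
        timings[name] = best
    return timings

n = 50
weight_sets = {
    "reverse_sorted": list(range(n, 0, -1)),                # adversarial ordering
    "random": [random.uniform(0, 100) for _ in range(n)],   # "average" behaviour
    "uniform": [1.0] * n,                                    # easy case, best-case contrast only
}
# timings = benchmark(my_solver, weight_sets)  # my_solver is your own implementation
```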
I hope this helps! I know it isn't much, but it should at least point you in the right direction.
If you solve a multi-objective problem then different weight sets usually provide different solutions.
The point is that a solution of a multi-objective problem is normally defined as a Pareto optimal point: a feasible point x that is not dominated by any other feasible point y, i.e., there is no feasible y whose objective function values are all at least as good as those of x and strictly better in at least one objective.
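For readers who want to check this numerically, here is a minimal sketch of a dominance test (assuming every objective is minimized; the function names are mine, not a standard API):

```python
def dominates(y, x):
    """True if y Pareto-dominates x: y is no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(yi <= xi for yi, xi in zip(y, x)) and any(yi < xi for yi, xi in zip(y, x))

def pareto_points(points):
    """Filter a list of objective vectors down to the non-dominated ones."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]
```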
Usually, if you choose different weights, different Pareto optimal points will be the result. In the convex case you can even prove that you can (almost) find the whole Pareto frontier (i.e. all Pareto optimal points) by using all possible combinations of weights. If the problem is non-convex, the situation is more complicated; e.g., you can miss parts of the Pareto frontier.
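A toy sketch of such a weight sweep for a convex bi-objective problem (the two quadratic objectives and the grid of weights are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: float(x[0] ** 2)            # first toy objective
f2 = lambda x: float((x[0] - 2.0) ** 2)    # second toy objective

front = []
for w in np.linspace(0.0, 1.0, 21):
    # minimize the weighted sum for this particular weight w
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=[0.0])
    front.append((f1(res.x), f2(res.x)))
# each weight gives one Pareto optimal point; together they trace the frontier
```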
So the answer to your question is in general "yes".
However, there is one exception, which comes from the applications. If you have a reason to fix specific weights for your multi-objective problem (e.g. you fix a unit such as $ for costs and can scale every objective properly, converting the corresponding units to $ as well, i.e., assigning a cost to each objective), then you can do that. But in that case you do not need a loop: you solve the multi-objective problem once with the fixed set of weights.
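A tiny illustration of this idea, with made-up conversion rates: once every objective is expressed in $, the "weights" are just the conversion rates, and a single run suffices.

```python
# Assumed conversion rates (purely illustrative numbers):
DOLLARS_PER_HOUR_OF_DELAY = 40.0
DOLLARS_PER_KG_CO2 = 0.05

def total_cost_dollars(delay_hours, co2_kg, material_cost_dollars):
    """Single scalar objective obtained by converting every goal to $."""
    return (DOLLARS_PER_HOUR_OF_DELAY * delay_hours
            + DOLLARS_PER_KG_CO2 * co2_kg
            + material_cost_dollars)
```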
As Hermann pointed out, it is in the nature of a multi-objective optimization problem that the problem itself does not define a unique optimal value. The results (in terms of objective values) are vectors, and these are only partially ordered. So every Pareto optimal point is a solution, and you will normally get different ones with different weights in the weighted sum approach. Choosing a preferred compromise among them means applying criteria that were not modeled in the original problem.
There are papers about eliciting information on the decision maker's utility function by having them make choices between different alternatives. If one knew the utility function, then the multi-objective problem could be transformed into an ordinary one using the utility function as the (only) objective. In this sense there might be 'best' weights.
But be aware that, e.g., when solving linear multi-objective optimization problems with the weighted sum objective by the simplex method, the results will always be vertices of the constraint polyhedron, and a small change in the weights might result in a 'jump' in the variable values (see the sketch below). Although every Pareto optimal point is optimal for at least one set of positive weights, the relative interior points of faces of the polyhedron could only be found by identifying the whole optimal set for each set of weights, not just one point from it.
Other scalarization methods can also yield interior points as solutions.
If your problem is nonlinear and strongly convex, you will not have this problem. But if it is non-convex, the weighted sum approach normally only yields a subset of the Pareto set (the supported points), as Hermann pointed out.
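A small sketch of the vertex-jumping behaviour described above, using SciPy's LP solver on a toy bi-objective LP (the problem data are illustrative only):

```python
from scipy.optimize import linprog

# Toy bi-objective LP: maximize x1 and x2 subject to x1 + x2 <= 1, x >= 0.
# Every point on the edge between (1, 0) and (0, 1) is Pareto optimal.
for w in (0.49, 0.50, 0.51):
    c = [-w, -(1.0 - w)]                 # weighted sum, written as a minimization
    res = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[1.0],
                  bounds=[(0, None), (0, None)], method="highs")
    print(f"w = {w:.2f}  ->  x = {res.x}")
# The solver reports a vertex, so a tiny change in w makes the solution jump
# between (0, 1) and (1, 0); relative interior points of the edge never appear.
```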
As pointed out in the previous answers, in almost any real-world multi-objective optimization there is a trade-off between partial goals. Any decision between Pareto-optimal solutions is purely SUBJECTIVE once all objective partial goals have already been accounted for in the numerical approach.
A sensible further analysis of the resulting Pareto set may address questions of regional stability (both in parameter and solution space), as Tinkle already briefly pointed out. But this objective is beyond the original scope of "determining an optimal solution" by itself.
Yes, you need to change the weight vector in each run to obtain multiple trade-off solutions. A set of such solutions is also known as the Pareto front. However, it is not guaranteed that you get equally diverse solutions in the Pareto front for equally diverse weight vectors, since the search space in multi-objective optimization is usually only partially ordered. Moreover, you cannot obtain solutions lying in non-convex regions of the Pareto front for any weight vector. These are well-known problems of the weighted sum approach to multi-objective optimization.
Drop weights in MCDM in any case. Weights in MCDM have a double meaning, namely normalization and significance. Rather, drop units and switch to dimensionless measurements. See my bibliography on MULTIMOORA.
Yes, when the weights are not certain, you need to change them. For every set of weights a solution is obtained, and when the difference between two consecutive solutions is small, the loop stops and the solution at that point is taken as the final one.
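A minimal sketch of this stopping rule, assuming a hypothetical `solve(weights)` that returns the solution vector for one weight set:

```python
import numpy as np

def solve_until_stable(solve, weight_sets, tol=1e-4):
    """Solve for each weight set in turn and stop as soon as two consecutive
    solutions differ by less than `tol` (the stopping rule described above)."""
    prev = None
    for weights in weight_sets:
        cur = np.asarray(solve(weights), dtype=float)
        if prev is not None and np.linalg.norm(cur - prev) < tol:
            return cur
        prev = cur
    return prev  # no two consecutive solutions were close enough
```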
This is not clear from your question, but if your problem is not convex, you might want to explore an alternative formulation for navigating your trade-off curve. Aside from the need to try different sets of weights, a weighted multi-objective approach can get trapped in local optima on the Pareto-optimal surface (if it is not convex). A possible way to address the latter problem is to explore the trade-off curve using a goal programming formulation, especially if you have some knowledge of the minimum target values for the respective objectives. That is, given a set of objectives f1, ..., fm, navigate the trade-off surface by solving the following problem:
min_x  max_i { f_i(x) - f_i_target }
You still have to scale the objectives to be dimensionless, as noted earlier, but the above formulation is less prone to getting trapped in a local optimum as you change the target values.
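One possible way to solve this min-max problem is the standard epigraph reformulation: introduce an auxiliary variable z and minimize z subject to f_i(x) - t_i <= z. The toy objectives and target values in the sketch below are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives f_i and assumed target values t_i; replace with your own.
objectives = [lambda x: (x[0] - 1.0) ** 2,
              lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2]
targets = [0.5, 0.5]

def cost(v):
    return v[-1]                     # v = (x, z); we minimize the auxiliary z

# inequality constraints z - (f_i(x) - t_i) >= 0, i.e. f_i(x) - t_i <= z
constraints = [{"type": "ineq",
                "fun": (lambda v, f=f, t=t: v[-1] - (f(v[:-1]) - t))}
               for f, t in zip(objectives, targets)]

res = minimize(cost, x0=np.zeros(3), constraints=constraints, method="SLSQP")
x_opt, worst_gap = res.x[:-1], res.x[-1]
```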
All the comments are fuzzy, as they all stick to weights in MOO. Weights were introduced by MacCrimmon of the RAND Corporation in 1968, but they cause a lot of difficulties in MOO practice. In the case of different units, switch instead to dimensionless measurements, as in TOPSIS, VIKOR or MULTIMOORA.
The weights of the goals in a multi-objective optimization problem are largely relative; choosing them depends on the user's requirements.
The results of your problem depend on these weights.
I think you have to change these weights until you reach stability in the results (in terms of the original goal, not the fitness function) for a given case ("project").
A similar case is discussed in this paper: F.A. Agrama (2012), "Multi-objective Genetic Optimization of Linear Construction Projects", Housing and Building National Research Center Journal, 8, pp. 144-151 (available online).
While preparing this paper, one question kept coming up: can I find the best (optimal) set of weights for the goals for the problem as a whole, not just for a single run? I could not.
Maybe my line of thinking is not right, but this question is still open.
To plot: you can try plotting the original goal (not the fitness function) against the weights.
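A minimal plotting sketch (the weight grid and goal values are placeholder numbers; in practice they come from your own optimization runs):

```python
import matplotlib.pyplot as plt

# Placeholder data: weight given to the first objective vs. the value of the
# original goal (e.g. project duration) at the corresponding optimized solution.
w1 = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
goal = [130.0, 121.0, 117.0, 116.5, 116.2, 116.0]

plt.plot(w1, goal, marker="o")
plt.xlabel("weight of objective 1")
plt.ylabel("original goal (not the fitness function)")
plt.title("Original goal vs. weights")
plt.show()
```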