Suppose first that you have two sets of EQUALITY constraints. It seems to me you could relax ALL of them, setting their Lagrangian multipliers (LMs) to some arbitrary, but possibly well-chosen, values; you are basically concatenating all the multipliers into one big vector, though there may be numerical ill-conditioning issues. Assuming all is well, the subgradient step will then move the LMs in a direction that reduces the error on each violated constraint.
However, the bigger issue is that you appear to have some inequality constraints. Could these first be converted to equations by introducing slack variables? If not, you will need the KKT conditions.
The subgradient is a vector with one component for each Lagrangian multiplier. The standard way to compute a subgradient is to set the value of a given component to the violation of the corresponding constraint, possibly normalised in some way. Then the current vector of multipliers is updated by adding the subgradient vector multiplied by a small constant. Try looking at the papers by John Beasley on subgradient methods for location problems. They are very clear.
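The update described above (violation of each relaxed constraint as the subgradient component, multipliers moved a small step along it) can be sketched as follows. This is a minimal illustration with made-up data for relaxed equality constraints Ax = b, not anyone's actual implementation:

```python
import numpy as np

# Hypothetical relaxed equality constraints A x = b (made-up data).
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

def subgradient_step(lam, x, step=0.1):
    """One multiplier update: each component of the subgradient is
    the violation of the corresponding relaxed constraint, and the
    multipliers move a small step along that vector."""
    g = A @ x - b          # per-constraint violations = subgradient
    return lam + step * g

lam = np.zeros(2)              # initial multipliers
x = np.array([1.0, 1.0])       # solution of the current relaxation
lam = subgradient_step(lam, x)
```

In practice the step size is usually decreased over the iterations (or normalised by the violation norm, as discussed further down this thread) rather than held constant.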
I take the opportunity to thank you for those good answers to @Morteza Shabanzadeh; I am in the same situation as @Morteza Shabanzadeh. I have implemented the subgradient algorithm with the two sets of dualised constraints, but the problem I am encountering is that the Lagrangian lower bound (LLB) is bigger than the optimal value of the original problem. Can you advise me?
Placide Nduwayo, if you have a minimisation problem, then the lower bound that you get with Lagrangian relaxation should never exceed the cost of the optimal solution of the original problem, regardless of which multipliers you use. If it does, then there must be a bug either in your code or in your relaxation.
Thank you Adam N. Letchford, I think so too. I am testing a lot of instances; for many of them the value I find is good, but for others the LLB is bigger than the optimal value of the original problem.
I am wondering whether you got the answer to your question. If so, I would appreciate it if you could help me figure out how to solve my problem, which is relaxed with two simultaneous Lagrangian multipliers.
I take the opportunity to thank you for those good answers to @Morteza Shabanzadeh. I have a question: the step-size update formula in the subgradient LR method has the denominator ||Ax - b||. If ||Ax - b|| = 0, how can I determine the step size?
Yansong Bai, within subgradient methods a zero subgradient implies that the optimal solution has been obtained and the algorithm terminates. For more information on step sizes, please refer to https://ieeexplore.ieee.org/abstract/document/9112667
Thank you Mikhail A. Bragin very much; I will read this paper later. Actually, I want to find the optimal lambda corresponding to the Lagrangian relaxation to generate the …. The objective function can be expressed as min cx + lambda*(Ax - b), where Ax = b are the equality constraints. If the optimal solution is obtained and Ax - b = 0, then lambda can take different values without influencing the objective function value. Does this mean the optimal solution (variable x) can correspond to multiple values of lambda?
Yansong Bai : The term Ax - b is a vector, and ||Ax - b|| is its norm. (I think they usually use the standard Euclidean norm, but I'm not sure.) So, if the solution to the current relaxation satisfies ||Ax-b|| = 0, then Ax = b, and x is the optimal solution of the original problem.
(Note that this argument applies only when all of the relaxed constraints are *equations*. If you relax inequalities, then things are more complicated, and you have to consider the complementary slackness conditions.)
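To make the inequality case mentioned above concrete, here is a minimal sketch (with made-up data) of the usual modification when relaxing inequalities Ax <= b: the same subgradient step, but with the multipliers projected back onto lam >= 0, together with a check of the complementary slackness quantities lam_i * (A_i x - b_i):

```python
import numpy as np

# Hypothetical relaxed inequality constraint A x <= b (made-up data).
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

def update_inequality_multiplier(lam, x, step=0.1):
    """Subgradient step for relaxed inequalities: same update as for
    equations, but project the multipliers back onto lam >= 0."""
    g = A @ x - b
    return np.maximum(lam + step * g, 0.0)

def complementary_slackness_violation(lam, x):
    """At an optimal primal-dual pair, each product lam_i * (A_i x - b_i)
    should be zero; nonzero values flag a slackness violation."""
    return np.abs(lam * (A @ x - b))

lam = np.array([0.05])
x = np.array([1.0, 1.0])                   # A x = 2 < 3: constraint slack
lam = update_inequality_multiplier(lam, x)  # multiplier is driven to zero
```

With a slack constraint and a small multiplier, the projection sends the multiplier to zero, which is exactly what complementary slackness demands.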
Adam N. Letchford, there is another problem I need your help with: when ||Ax - b|| = 0 or is close to 0, lambda can change greatly; that is, the sum of the changes lambda(k) - lambda(k-1) can be a large number even while ||Ax - b|| is equal to 0 or close to 0.
Thank you Adam N. Letchford for this valuable advice. I have used the multiplier update of the subgradient method: lambda(k) = lambda(k-1) + t(k)*(Ax - b), with t(k) = (ZUB - ZD)/||Ax - b||^2, where ZUB is an upper bound. The data from the process can be:
Note that the objective function value (ZD) at iteration 2 has not changed.
…
I guess there must be something wrong with it, so it did not go on. Is there something wrong with my understanding of the subgradient iteration or with my parameter settings?
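The step-size rule quoted above, t(k) = (ZUB - ZD)/||Ax - b||^2, can be sketched as follows, with a guard for the zero-denominator case raised earlier in the thread: when ||Ax - b|| is (numerically) zero the subgradient is zero, so the relaxed solution is feasible and the method should terminate rather than divide by zero. The values of ZUB, ZD and the violation vector below are made-up for illustration:

```python
import numpy as np

def polyak_step(ZUB, ZD, g, eps=1e-12):
    """Step size t_k = (ZUB - ZD) / ||A x - b||^2 for the update
    lambda_k = lambda_{k-1} + t_k * (A x - b).  If the violation norm
    is (numerically) zero, return None to signal termination."""
    norm_sq = float(g @ g)        # ||A x - b||^2
    if norm_sq < eps:
        return None               # zero subgradient: stop iterating
    return (ZUB - ZD) / norm_sq

# Hypothetical bounds and violation vector g = A x - b:
g = np.array([1.0, -2.0])
t = polyak_step(ZUB=10.0, ZD=7.0, g=g)   # (10 - 7) / 5 = 0.6
lam = np.zeros(2) + t * g
```

Note also what this formula does near feasibility: as ||Ax - b|| shrinks while ZUB - ZD stays bounded away from zero, t(k) blows up, which may explain the large multiplier changes reported above; many implementations scale the numerator by a parameter that is halved when the bound stalls.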
Yansong Bai : sorry, but I can't understand the question that you posted on 25th April 2021. What is the original problem, and which constraints are you relaxing?
Well, Adam N. Letchford, I previously thought that the meaning of a Lagrangian multiplier is the change in the objective function corresponding to a unit change in the right-hand side of the constraint. So I tried changing the right-hand side of the constraint corresponding to the dual multiplier by one unit, to see whether I could recover the corresponding dual multiplier from the resulting change in the objective function.