Hello, I am currently solving the optimization problem shown in the attached figure.

Basically, this problem is equivalent to finding a confidence interval for logistic regression. The objective function is linear (its second derivative is zero), while the constraint is nonlinear. Specifically, I used n = 1, alpha = 0.05, and theta = logit(p) with p in [0, 1] (see the binomial distribution for details). Thus, I have closed-form expressions for the gradient of the objective and the Jacobian of the constraint.
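For concreteness, here is a rough sketch of the setup for the lower confidence limit (the function names are mine, and I substituted non-degenerate data, x = 3 successes out of n = 10, purely for illustration; my actual setting uses n = 1):

```r
x <- 3; n <- 10; alpha <- 0.05            # hypothetical data, for illustration only
loglik    <- function(theta) x * theta - n * log(1 + exp(theta))  # binomial log-likelihood in theta = logit(p)
theta_hat <- qlogis(x / n)                # MLE of theta
crit      <- qchisq(1 - alpha, df = 1)    # likelihood-ratio cutoff

fn <- function(theta) theta               # linear objective (lower limit)
gr <- function(theta) 1                   # closed-form gradient (constant)
## Feasible region: 2 * (loglik(theta_hat) - loglik(theta)) <= crit
hin    <- function(theta) crit - 2 * (loglik(theta_hat) - loglik(theta))  # >= 0 (alabama convention)
hinjac <- function(theta) matrix(2 * (x - n * plogis(theta)), 1, 1)       # closed-form Jacobian
```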

In R, I first tried alabama::auglag, which uses the augmented Lagrangian method with BFGS by default, and nloptr::auglag, which uses the augmented Lagrangian method with SLSQP as the local minimizer. Although both found the (global) minimizer most of the time, they occasionally failed and produced a far-off solution. In the end, I obtained the best (most stable) results with plain SLSQP (nloptr::nloptr with algorithm = "NLOPT_LD_SLSQP").
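For reference, the calls look roughly like this, reusing the sketch above (the starting value and options are illustrative, not the exact ones I used):

```r
library(alabama)
library(nloptr)

## Augmented Lagrangian with BFGS as the default local method (alabama)
sol_al <- alabama::auglag(par = theta_hat - 0.5, fn = fn, gr = gr,
                          hin = hin, hin.jac = hinjac)

## Plain SLSQP via nloptr; its convention is g(theta) <= 0, hence the sign flips
sol_slsqp <- nloptr::nloptr(
  x0              = theta_hat - 0.5,
  eval_f          = fn,
  eval_grad_f     = gr,
  eval_g_ineq     = function(theta) -hin(theta),
  eval_jac_g_ineq = function(theta) -hinjac(theta),
  opts = list(algorithm = "NLOPT_LD_SLSQP", xtol_rel = 1e-10, maxeval = 1000)
)

c(alabama = sol_al$par, slsqp = sol_slsqp$solution)
```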

Now, my question is why SLSQP produced better results in this setting than the first two methods, and why those two (the augmented Lagrangian with BFGS, and with SLSQP as the local optimizer) did not perform well. A related question: given my problem setting, what would be the best method for finding the optimizer?

Any comments and suggestions would be much appreciated. Thanks.
