Improved global convergence results for the augmented Lagrangian method with exponential penalty function

Authors

  • Frank Navarro-Rojas, Facultad de Ciencias, Universidad Nacional de Ingeniería, Lima, Perú.
  • Luis G. Ticona Quispe, Facultad de Ciencias, Universidad Nacional de Ingeniería, Lima, Perú.

DOI:

https://doi.org/10.17268/sel.mat.2024.02.02

Keywords:

Nonlinear programming, augmented Lagrangian methods, exponential penalty function, global convergence, constraint qualifications, sequential optimality conditions

Abstract

The purpose of this article is to improve the global convergence results established so far for the augmented Lagrangian algorithm with exponential penalty function for solving nonlinear programming problems with equality and inequality constraints. We prove global convergence to KKT points under the PAKKT-regular constraint qualification, which follows from the fact that accumulation points generated by the algorithm are PAKKT points. This convergence result is new for the augmented Lagrangian method based on the exponential penalty function. An interesting consequence is that the Lagrange multiplier estimates computed by the method remain bounded under the quasi-normality condition. Finally, we give optimality and feasibility results for the convex case.
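For readers unfamiliar with the exponential-penalty scheme, the basic iteration can be illustrated on a toy one-dimensional problem. The sketch below is not the paper's algorithm (which handles equality and inequality constraints and safeguards the multiplier estimates); it only shows the exponential augmented Lagrangian and the characteristic multiplicative multiplier update, with all function names, the toy problem, and the parameter values chosen here for illustration.

```python
import math

# Toy problem: minimize f(x) = (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# KKT solution: x* = 1 with Lagrange multiplier lambda* = 2.

def f_grad(x):   # gradient of the objective
    return 2.0 * (x - 2.0)

def g(x):        # inequality constraint, g(x) <= 0
    return x - 1.0

def g_grad(x):
    return 1.0

def exp_augmented_lagrangian(rho=2.0, lam=1.0, outer=30, inner=200, step=0.1):
    """Exponential-penalty augmented Lagrangian (illustrative parameters):
    L(x, lam) = f(x) + (lam / rho) * (exp(rho * g(x)) - 1),
    followed by the update lam <- lam * exp(rho * g(x))."""
    x = 0.0
    for _ in range(outer):
        # Inner loop: gradient descent on the augmented Lagrangian in x.
        for _ in range(inner):
            grad = f_grad(x) + lam * math.exp(rho * g(x)) * g_grad(x)
            x -= step * grad
        # Exponential multiplier update (keeps lam > 0 automatically).
        lam *= math.exp(rho * g(x))
    return x, lam

x, lam = exp_augmented_lagrangian()
print(x, lam)  # approaches x* = 1, lambda* = 2
```

Note that, unlike the classical quadratic penalty, the exponential update multiplies the current estimate by a positive factor, so the multiplier iterates stay strictly positive without any projection step.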

Author Biography

Frank Navarro-Rojas, Facultad de Ciencias, Universidad Nacional de Ingeniería, Lima, Perú.

I have extensive experience as a mathematics teacher at all levels of education, and one year of experience as a data scientist. I have experience programming in C, C++, and MATLAB, and I hold master's and doctoral degrees in applied mathematics in the area of optimization and game theory. During my graduate studies I became very interested in machine learning and in programming in R and Python.

I am currently a contracted lecturer at UNASAM, in the School of Mathematics.


Published

2024-12-28

How to Cite

Navarro-Rojas, F., & Ticona Quispe, L. G. (2024). Resultados de convergencia global mejorados para el Método de Lagrangiano aumentado con función de penalidad exponencial. Selecciones Matemáticas, 11(02), 222-235. https://doi.org/10.17268/sel.mat.2024.02.02

Issue

Section

Articles