Partial Pearson-two (PP2) of quasi newton method for unconstrained optimization


Basheer M. Salih
Khalil K. Abbo
Zeyad M. Abdullah

Abstract

In this paper, we develop a new quasi-Newton method for solving unconstrained optimization problems. Quasi-Newton methods are widely used in nonlinear unconstrained optimization [1]. We consider one quasi-Newton update formula, the Pearson-two (P2) update [2], and propose a partial variant of it, namely Partial P2 (PP2). Most quasi-Newton methods do not always generate descent search directions, so a descent or sufficient descent condition is usually assumed in the analysis and in implementations [3]. The descent property of the suggested method is proved. Finally, the numerical results show that the new method is also very efficient for general unconstrained optimization problems [4].
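
To illustrate the general framework referred to in the abstract, the sketch below shows a generic quasi-Newton iteration in Python: an inverse-Hessian approximation H, an Armijo backtracking line search, a steepest-descent fallback when the computed direction is not a descent direction, and a Pearson-type rank-one update of the form H + (s - Hy)s^T / (s^T y). The update formula, the function names, the fallback rule, and the Rosenbrock test problem are illustrative assumptions; this is not the Partial P2 (PP2) formula proposed in the paper, which is given in the full text.

import numpy as np

def pearson_type_update(H, s, y, eps=1e-12):
    """Assumed Pearson-type rank-one update of the inverse-Hessian
    approximation: H+ = H + (s - H y) s^T / (s^T y).
    Illustrative stand-in only; not the paper's PP2 formula."""
    sty = float(s @ y)
    if abs(sty) < eps:                 # skip the update if the denominator is tiny
        return H
    return H + np.outer(s - H @ y, s) / sty

def armijo_backtracking(f, x, fx, g, d, alpha=1.0, rho=0.5, c=1e-4):
    """Simple Armijo backtracking line search along direction d."""
    gtd = float(g @ d)
    while f(x + alpha * d) > fx + c * alpha * gtd:
        alpha *= rho
        if alpha < 1e-12:
            break
    return alpha

def quasi_newton(f, grad, x0, update=pearson_type_update,
                 tol=1e-6, max_iter=500):
    """Generic quasi-Newton iteration x+ = x + alpha*d with d = -H g,
    restarting with steepest descent when d is not a descent direction."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                 # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -H @ g
        if float(g @ d) >= 0.0:        # not a descent direction: restart
            H = np.eye(x.size)
            d = -g
        alpha = armijo_backtracking(f, x, f(x), g, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        H = update(H, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

# Usage example: minimize the Rosenbrock function from a standard starting point.
if __name__ == "__main__":
    f = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                               200.0 * (x[1] - x[0] ** 2)])
    print(quasi_newton(f, grad, [-1.2, 1.0]))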

Article Details

How to Cite
Basheer M. Salih, Khalil K. Abbo, & Zeyad M. Abdullah. (2023). Partial Pearson-two (PP2) of quasi newton method for unconstrained optimization. Tikrit Journal of Pure Science, 21(3), 174–179. https://doi.org/10.25130/tjps.v21i3.1012
Section
Articles

References

[1] Polyak, B.T., Introduction to Optimization, Optimization Software, New York, 1987.

[2] Chong, E.K.P. and Zak, S.H., An Introduction to Optimization, John Wiley & Sons, Inc., New York, 2001.

[3] Davidon, W.C., Variable Metric Method for Minimization, Atomic Energy Commission Research and Development Report ANL-5990, Argonne National Laboratory, Argonne, IL, 1959.

[4] Wei, Z.X., Yu, G.H., Yuan, G.L. and Lian, Z.G., The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Computational Optimization and Applications, 29 (2004), 315-332.

[5] Pearson, J.D., Variable metric methods of minimization, The Computer Journal, 12 (1969), 171-178.

[6] Nocedal, J., Theory of algorithms for unconstrained optimization, Acta Numerica, 1992, 199-242.

[7] Sugiki, K., Narushima, Y. and Yabe, H., Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, Journal of Optimization Theory and Applications, 153 (2012).

[8] Byrd, R.H., Nocedal, J. and Yuan, Y., Global convergence of a class of quasi-Newton methods on convex problems, SIAM Journal on Numerical Analysis, 24(5) (1987).

[9] Zoutendijk, G., Nonlinear programming, computational methods, in: J. Abadie (ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, 1970, 37-86.

[10] Bongartz, I., Conn, A.R., Gould, N.I.M. and Toint, Ph.L., CUTE: Constrained and unconstrained testing environment, ACM Transactions on Mathematical Software, 21 (1995), 123-160. http://portal.acm.org/citation.cfm?doid=200979.201043.

[11] Andrei, N., Scaled conjugate gradient algorithms for unconstrained optimization, Computational Optimization and Applications, 38 (2007), 401-416.

[12] Andrei, N., A hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization, ICI Technical Report, February 6, 2008.

[13] Dolan, E.D. and Moré, J.J., Benchmarking optimization software with performance profiles, Mathematical Programming, 91 (2002), 201-213. DOI: 10.1007/s101070100263.