Partial Pearson-two (PP2) quasi-Newton method for unconstrained optimization
Abstract
In this paper, we develop a new quasi-Newton method for solving unconstrained optimization problems. Quasi-Newton methods are widely used in nonlinear unconstrained optimization [1]. We consider one quasi-Newton update formula, the Pearson-two (P2) update [2], and propose a partial variant of it, namely Partial P2 (PP2). Most quasi-Newton methods do not always generate descent search directions, so a descent or sufficient descent condition is usually assumed in the analysis and implementation [3]. The descent property of the suggested method is proved. Finally, the numerical results show that the new method is also very efficient for general unconstrained optimization problems [4].
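The abstract does not spell out the Partial P2 formula itself, but the classical Pearson-two update it builds on replaces the inverse-Hessian approximation by H_{k+1} = H_k + (s_k - H_k y_k) s_k^T / (s_k^T y_k), where s_k is the step and y_k the gradient change. The following minimal sketch (not the authors' PP2 method; the line-search parameters and the descent safeguard are illustrative assumptions) shows a quasi-Newton iteration with this P2 update and Armijo backtracking:

```python
import numpy as np

def pearson2_quasi_newton(f, grad, x0, tol=1e-8, max_iter=200):
    """Quasi-Newton iteration with the classical Pearson-two (P2)
    inverse-Hessian update: H <- H + (s - H y) s^T / (s^T y)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton search direction
        if g @ d >= 0:                 # safeguard: restart when d is not a descent direction
            H = np.eye(n)
            d = -g
        # Backtracking line search satisfying the Armijo condition
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d                  # step: x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # change in gradient
        sy = s @ y
        if abs(sy) > 1e-12:            # skip the update when curvature info is unreliable
            H = H + np.outer(s - H @ y, s) / sy   # Pearson-two update
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = pearson2_quasi_newton(f, grad, np.zeros(2))
```

Note that, unlike BFGS, the P2 update does not keep H symmetric, which is one reason the descent property assumed or proved in the paper matters.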
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Tikrit Journal of Pure Science is licensed under the Creative Commons Attribution 4.0 International License, which allows users to copy, create extracts, abstracts, and new works from the article, alter and revise the article, and make commercial use of the article (including reuse and/or resale of the article by commercial entities), provided the user gives appropriate credit (with a link to the formal publication through the relevant DOI), provides a link to the license, indicates if changes were made, and the licensor is not represented as endorsing the use made of the work. The authors hold the copyright for their published work on the Tikrit J. Pure Sci. website, while Tikrit J. Pure Sci. is responsible for appropriate citation of their work, which is released under CC-BY-4.0, enabling the unrestricted use, distribution, and reproduction of an article in any medium, provided that the original work is properly cited.
References
[1] B.T. Polyak. Introduction to Optimization. Optimization Software, New York, 1987.
[2] E.K.P. Chong and S.H. Zak, An Introduction to Optimization, John Wiley & Sons, New York, 2001.
[3] W.C. Davidon, Variable metric method for minimization, Atomic Energy Commission Research and Development Report ANL-5990, Argonne National Laboratory, Argonne, IL, 1959.
[4] Z.X. Wei, G.H. Yu, G.L. Yuan and Z.G. Lian, The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Computational Optimization and Applications 29 (2004) 315-332.
[5] J.D. Pearson, Variable metric methods of minimization, Computer Journal 12 (1969) 171-178.
[6] J. Nocedal, Theory of algorithms for unconstrained optimization, Acta Numerica 1 (1992) 199-242.
[7] K. Sugiki, Y. Narushima and H. Yabe, Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, Journal of Optimization Theory and Applications 153 (2012).
[8] R.H. Byrd, J. Nocedal and Y. Yuan, Global convergence of a class of quasi-Newton methods on convex problems, SIAM Journal on Numerical Analysis 24(5) (1987).
[9] G. Zoutendijk, Nonlinear programming, computational methods, in: J. Abadie (Ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, 1970, pp. 37-86.
[10] I. Bongartz, A.R. Conn, N.I.M. Gould and Ph.L. Toint, CUTE: Constrained and unconstrained testing environment, ACM Transactions on Mathematical Software 21 (1995) 123-160. http://portal.acm.org/citation.cfm?doid=200979.201043
[11] N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Computational Optimization and Applications 38 (2007) 401-416.
[12] N. Andrei, A hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization, ICI Technical Report, February 6, 2008.
[13] E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming 91 (2002) 201-213. DOI: 10.1007/s101070100263.