Modified New Conjugate Gradient Method for Unconstrained Optimization
Abstract
The current paper presents a modified conjugate gradient method for solving unconstrained optimization problems. Convergence of the modified method is established under certain hypotheses. The numerical results demonstrate that the modified method is efficient for solving unconstrained nonlinear optimization problems in comparison with the FR (Fletcher-Reeves) and HS (Hestenes-Stiefel) methods.
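The paper's modified beta formula is not reproduced in this abstract; the Python sketch below shows only the generic nonlinear conjugate gradient iteration with the two baseline updates named above, FR and HS, under assumed choices: an Armijo backtracking line search, a steepest-descent restart, and the Rosenbrock function as a stand-in test problem. All function and parameter names are illustrative, not the paper's implementation.

# A minimal sketch of nonlinear conjugate gradient iterations with the
# classical FR (Fletcher-Reeves) and HS (Hestenes-Stiefel) beta formulas
# used as baselines in the paper. The line search, restart rule and test
# function below are illustrative assumptions, not the paper's method.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def armijo_step(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    # Backtracking line search satisfying the Armijo sufficient-decrease condition.
    while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
        alpha *= rho
    return alpha

def beta_fr(g_new, g_old, d_old):
    # Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2
    return g_new.dot(g_new) / g_old.dot(g_old)

def beta_hs(g_new, g_old, d_old):
    # Hestenes-Stiefel: g_{k+1}^T y_k / d_k^T y_k, with y_k = g_{k+1} - g_k
    y = g_new - g_old
    return g_new.dot(y) / d_old.dot(y)

def cg_minimize(f, grad, x0, beta_rule, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = armijo_step(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        beta = beta_rule(g_new, g, d)
        d = -g_new + beta * d                 # new conjugate direction
        if g_new.dot(d) >= 0:                 # restart if d is not a descent direction
            d = -g_new
        g = g_new
    return x, k

if __name__ == "__main__":
    for name, rule in [("FR", beta_fr), ("HS", beta_hs)]:
        x_star, iters = cg_minimize(rosenbrock, rosenbrock_grad, [-1.2, 1.0], rule)
        print(f"{name}: x* = {x_star}, iterations = {iters}")

The restart to the steepest-descent direction whenever the new direction fails to be a descent direction is one common safeguard; the hypotheses under which the paper proves convergence of its modified method may differ.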
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Tikrit Journal of Pure Science is licensed under the Creative Commons Attribution 4.0 International License, which allows users to copy, create extracts, abstracts, and new works from the article, alter and revise the article, and make commercial use of the article (including reuse and/or resale of the article by commercial entities), provided the user gives appropriate credit (with a link to the formal publication through the relevant DOI), provides a link to the license, indicates if changes were made, and does not represent the licensor as endorsing the use made of the work. The authors hold the copyright for their published work on the Tikrit J. Pure Sci. website, while Tikrit J. Pure Sci. is responsible for appropriate citation of their work, which is released under CC-BY-4.0, enabling the unrestricted use, distribution, and reproduction of an article in any medium, provided that the original work is properly cited.
References
[1] Andrei, N. (2008). "An unconstrained optimization test functions collection". Advanced Modeling and Optimization, 10(1), 147-161.
[2] Andrei, N. (2007). "Numerical comparison of conjugate gradient algorithms for unconstrained optimization". Studies in Informatics and Control, 16.
[3] Bonnans, J. F., Gilbert, J. C., Lemaréchal, C. and Sagastizábal, C. (2006). "Numerical Optimization", Second Edition. Springer-Verlag, Berlin.
[4] Dai, Y. and Yuan, Y. (1999). "A nonlinear conjugate gradient method with a strong global convergence property". SIAM Journal on Optimization, 10.
[5] Dai, Y. and Yuan, Y. (2003). "A class of globally convergent conjugate gradient methods". Science in China Series A, 46.
[6] Dai, Y. H. and Liao, L. Z. (2001). "New conjugacy conditions and related nonlinear conjugate gradient methods". Applied Mathematics and Optimization, 43, 87-101.
[7] Dolan, E. D. and Moré, J. J. (2001). "Benchmarking optimization software with performance profiles". Mathematical Programming, 91, 201-213.
[8] Fletcher, R. (1987). "Practical Methods of Optimization". John Wiley & Sons Ltd.
[9] Fletcher, R. and Reeves, C. M. (1964). "Function minimization by conjugate gradients". The Computer Journal, 7.
[10] Hestenes, M. R. and Stiefel, E. (1952). "Methods of conjugate gradients for solving linear systems". Journal of Research of the National Bureau of Standards, Section B, 49.
[11] Kinsella, J. (2009). "Course Notes for MS4327 Optimization". http://jkcray.maths.ul.ie/ms4327/slides.pdf, 2009.
[12] Li, Z. and Weijun, Z. (2008). "Two descent hybrid conjugate gradient methods for optimization". Journal of Computational and Applied Mathematics, 216, 251-264.
[13] Liu, Y. and Storey, C. (1991). "Efficient generalized conjugate gradient algorithms, Part 1: Theory". Journal of Optimization Theory and Applications, 69.
[14] Perry, A. (1978). "A modified conjugate gradient algorithm". Operations Research, 26(6), 1073-1078.
[15] Polak, E. and Ribière, G. (1969). "Note sur la convergence de méthodes de directions conjuguées". Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, 16.
[16] Tomizuka, H. and Yabe, H. (2004). "A Hybrid Conjugate Gradient Method for Unconstrained Optimization". Technical report, February 2004, revised July 2004.