Appendix C. Cost Function Optimization
In this appendix, we review a number of optimization schemes that have been encountered throughout the book.
Let θ be an unknown parameter vector and J(θ) the corresponding cost function to be minimized. The function J(θ) is assumed to be differentiable.
C.1. Gradient Descent Algorithm
The algorithm starts with an initial estimate θ(0) of the minimum point, and the subsequent algorithmic iterations are of the form
\[
\theta^{(\text{new})} = \theta^{(\text{old})} + \Delta\theta, \tag{C.1}
\]
\[
\Delta\theta = -\mu \left.\frac{\partial J(\theta)}{\partial \theta}\right|_{\theta=\theta^{(\text{old})}}, \tag{C.2}
\]
where μ > 0. If a maximum is sought, the method is known as gradient ascent and the minus sign in (C.2) is replaced by a plus sign.
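As a minimal sketch of the iteration in (C.1)–(C.2), the following NumPy code applies gradient descent to a quadratic cost. The names gradient_descent and grad_J, the cost J(θ) = ‖Aθ − b‖², and the step size μ = 0.1 are illustrative assumptions, not taken from the text.

```python
import numpy as np

def gradient_descent(grad, theta0, mu=0.1, tol=1e-8, max_iter=1000):
    """Iterate theta <- theta - mu * dJ/dtheta, as in (C.1)-(C.2)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = mu * grad(theta)          # -Delta_theta of Eq. (C.2)
        theta = theta - step             # update of Eq. (C.1)
        if np.linalg.norm(step) < tol:   # stop once the update is negligible
            break
    return theta

# Example: J(theta) = ||A theta - b||^2, whose gradient is 2 A^T (A theta - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_J = lambda th: 2.0 * A.T @ (A @ th - b)

theta_star = gradient_descent(grad_J, theta0=np.zeros(2), mu=0.1)
print(theta_star)  # approaches the minimizer [0.5, 1.0]
```

Note that for this quadratic cost the iteration converges only if μ is small enough relative to the curvature of J; too large a step size makes the updates diverge, which is why μ is treated as a tunable parameter.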