Chapter 5

Gradient Descent Methods for Type-2 Fuzzy Neural Networks

Abstract

The gradient descent (GD) method is the most common iterative method for optimizing a nonlinear function: starting from an initial point, the algorithm follows the negative of the gradient of the function at the current point to reach a local minimum. The main goal of this chapter is to briefly discuss this multivariate optimization technique, the GD algorithm, as applied to nonlinear unconstrained problems. In this chapter, the optimization problem in question is the cost function of a fuzzy neural network (FNN), either type-1 or type-2. The main features, drawbacks, and stability conditions of these algorithms are discussed.
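The GD iteration described above can be sketched in a few lines. This is a minimal illustration, not the chapter's FNN training procedure: the quadratic cost function and the learning rate are illustrative choices.

```python
def gradient_descent(grad, x0, eta=0.1, steps=100):
    """Follow the negative gradient from x0 for a fixed number of steps.

    grad  : function returning the gradient of the cost at a point
    x0    : initial point
    eta   : learning rate (step size) -- an illustrative value
    steps : number of iterations
    """
    x = x0
    for _ in range(steps):
        # Core GD update: move against the gradient direction.
        x = x - eta * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(x_min, 4))  # converges toward the minimizer x = 3
```

For an FNN cost function the same update applies, with the scalar gradient replaced by the partial derivatives of the cost with respect to each network parameter.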

Keywords

Gradient descent for FNN

Levenberg-Marquardt for FNN

Momentum-term ...
