fastText Quick Start Guide

by Joydeep Bhattacharjee
July 2018
Intermediate to advanced
194 pages
5h 22m
English
Packt Publishing
Content preview from fastText Quick Start Guide

Softmax

The most popular method for learning the parameters of a model is gradient descent. Gradient descent is an optimization algorithm for minimizing a function: at each step, it moves the parameters in the direction in which the negative gradient points. In machine learning, the function that gradient descent acts on is the loss function chosen for the model. The idea is that by moving toward the minimum of the loss function, the model "learns" good parameter values and will, ideally, generalize well to out-of-sample (new) data. In practice this has generally been found to be the case, and stochastic gradient descent, a variant of gradient descent, also has a fast training time.
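As a concrete illustration of this idea, here is a minimal sketch, not code from the book: it fits a small linear model by plain gradient descent on a mean squared error loss. The synthetic data, learning rate, and step count are all illustrative assumptions.

```python
import numpy as np

# Synthetic dataset: 100 examples, 3 features, with known weights
# plus a little noise (purely illustrative, not from the book).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)   # parameters to learn
lr = 0.1          # learning rate (step size), an assumed value

for step in range(200):
    pred = X @ w
    # Gradient of the mean squared error loss L(w) = mean((Xw - y)^2)
    grad = 2.0 * X.T @ (pred - y) / len(y)
    # Move in the direction of the negative gradient to reduce the loss
    w -= lr * grad

print(w)  # should end up close to true_w
```

Stochastic gradient descent differs only in that each step computes the gradient on a single example (or a small batch) rather than the full dataset, which is why it trains quickly on large corpora.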

For the gradient descent ...

Publisher Resources

ISBN: 9781789130997