The Shogun library provides an implementation of the gradient boosting algorithm, but it is restricted to regression tasks. The algorithm is implemented in the CStochasticGBMachine class. The main parameters for configuring objects of this class are the base model for the ensemble and the loss function; other parameters include the number of iterations, the learning rate, and the fraction of training vectors to be randomly sampled at each iteration.
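A minimal configuration sketch follows, assuming the Shogun 6.x API with its `some<>` smart-pointer helper; the CART base learner, the squared loss, the `make_gbm` helper function, and the specific parameter values are illustrative choices, not prescribed by the library:

```cpp
#include <shogun/base/some.h>
#include <shogun/loss/SquaredLoss.h>
#include <shogun/machine/StochasticGBMachine.h>
#include <shogun/multiclass/tree/CARTree.h>

using namespace shogun;

// Hypothetical helper: builds a stochastic GBM for a regression task
// with the given number of continuous input features.
Some<CStochasticGBMachine> make_gbm(int32_t num_features) {
  // Base ensemble model: a shallow CART tree configured for regression.
  SGVector<bool> feature_types(num_features);
  feature_types.set_const(false);  // false -> continuous (non-nominal) feature
  auto tree = some<CCARTree>(feature_types, PT_REGRESSION);
  tree->set_max_depth(3);

  // Loss function: squared loss is the usual choice for regression.
  auto loss = some<CSquaredLoss>();

  // Remaining parameters: number of iterations, learning rate, and the
  // fraction of training vectors sampled at each iteration (values here
  // are illustrative).
  return some<CStochasticGBMachine>(tree, loss,
                                    /*num_iterations=*/100,
                                    /*learning_rate=*/0.1,
                                    /*subset_fraction=*/0.6);
}
```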
We will create an example using gradient boosting for cosine function approximation, assuming that we already have training and testing datasets available (the exact implementation of the data generator can be found in the source ...).
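Since the data generator itself is elided above, the following end-to-end sketch fabricates a noisy cosine dataset inline; the sample size, noise level, and training parameters are assumptions made for illustration, not values taken from the original example:

```cpp
#include <cmath>
#include <random>

#include <shogun/base/init.h>
#include <shogun/base/some.h>
#include <shogun/features/DenseFeatures.h>
#include <shogun/labels/RegressionLabels.h>
#include <shogun/loss/SquaredLoss.h>
#include <shogun/machine/StochasticGBMachine.h>
#include <shogun/multiclass/tree/CARTree.h>

using namespace shogun;

int main() {
  init_shogun_with_defaults();
  {  // scope so Shogun objects are released before exit_shogun()
    const index_t n = 1000;
    std::mt19937 gen(42);
    std::normal_distribution<double> noise(0.0, 0.1);

    // One-dimensional inputs in [0, 2*pi], targets = cos(x) + noise.
    SGMatrix<float64_t> x(1, n);
    SGVector<float64_t> y(n);
    for (index_t i = 0; i < n; ++i) {
      x(0, i) = 2.0 * M_PI * i / n;
      y[i] = std::cos(x(0, i)) + noise(gen);
    }
    auto features = some<CDenseFeatures<float64_t>>(x);
    auto labels = some<CRegressionLabels>(y);

    // Base learner and loss, configured as described above.
    SGVector<bool> feature_types(1);
    feature_types.set_const(false);  // continuous feature
    auto tree = some<CCARTree>(feature_types, PT_REGRESSION);
    tree->set_max_depth(3);
    auto loss = some<CSquaredLoss>();

    auto gbm = some<CStochasticGBMachine>(tree, loss,
                                          /*num_iterations=*/100,
                                          /*learning_rate=*/0.1,
                                          /*subset_fraction=*/0.6);
    gbm->set_labels(labels);
    gbm->train(features);

    // Predict on the training inputs for brevity; the separate test
    // dataset mentioned above would be used here in practice.
    auto predictions = wrap(gbm->apply_regression(features));
  }
  exit_shogun();
  return 0;
}
```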