The main idea of boosting is that the elementary algorithms are not built independently: we build each subsequent algorithm so that it corrects the mistakes of the previous ones and thereby improves the quality of the whole ensemble. The first successful version of boosting was AdaBoost (Adaptive Boosting). It is now rarely used, since gradient boosting has supplanted it.
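A minimal sketch of this sequential error-correcting idea for a toy 1-D regression problem is shown below; boosting with squared loss, where each new elementary algorithm (a shallow decision tree) is fit to the residuals left by the ensemble built so far. The synthetic data, tree depth, and learning rate are illustrative assumptions, not part of the original text.

```python
# A sketch of boosting with squared loss: each new elementary algorithm
# is trained to correct the mistakes (residuals) of the current ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)  # toy 1-D regression data

n_estimators, learning_rate = 50, 0.1
prediction = np.zeros_like(y)   # start from a trivial (zero) ensemble
trees = []

for _ in range(n_estimators):
    residuals = y - prediction                # mistakes of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)                    # new algorithm corrects those mistakes
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("ensemble MSE:", np.mean((y - prediction) ** 2))
```

Each iteration shrinks the training error a little, which is exactly the "correct the previous mistakes" behaviour described above; the learning rate controls how aggressively each new tree's correction is applied.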
Suppose that we have a set of pairs $\left\{ (x_i, y_i) \right\}_{i=1,\ldots,n}$, where each pair consists of attributes $x$ and a target variable $y$. On this set, we restore a dependence of the form $y = f(x)$, and we restore it by the approximation $\hat{f}(x)$. To select the best approximation ...