Now that we have some knowledge of the logistic function, it is easy to map it to the algorithm that stems from it. In logistic regression, the function input *z* becomes the weighted sum of the features. Given a data sample *x* with *n* features, *x_{1}, x_{2}, …, x_{n}* (**x** represents a feature vector and **x** = (x_{1}, x_{2}, …, x_{n})), and **weights** (also called **coefficients**) of the model (**w** represents a vector (w_{1}, w_{2}, …, w_{n})), *z* is expressed as follows:

z = w_{1}x_{1} + w_{2}x_{2} + … + w_{n}x_{n} = **w**^{T}**x**

Also, occasionally, the model comes with an **intercept** (also called **bias**), *w_{0}*. In this instance, the preceding linear relationship becomes:

z = w_{0} + w_{1}x_{1} + w_{2}x_{2} + … + w_{n}x_{n}
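As a minimal sketch of these two steps, the weighted sum and the logistic mapping can be written in a few lines of NumPy. The function names (`sigmoid`, `compute_prediction`) and the sample values below are illustrative, not from the text:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def compute_prediction(x, weights, intercept=0.0):
    """Weighted sum z = w^T x + w0, passed through the logistic function."""
    z = np.dot(x, weights) + intercept
    return sigmoid(z)

# Illustrative sample with n = 3 features and arbitrary weights
x = np.array([0.5, 1.0, -0.2])
w = np.array([0.2, -0.4, 0.1])
print(compute_prediction(x, w, intercept=0.3))
```

Note that setting `intercept=0.0` recovers the first form, z = **w**^{T}**x**, while a nonzero value gives the version with the bias term *w_{0}*.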

As for the output ...