
Preparing Input Data
algorithms impose fixed limits on their weights to prevent getting
stuck out in the far forty. These algorithms will simply not be able to
learn such extreme data. We can make the network's life a lot easier
by giving it data scaled in such a way that all weights can remain in
small, predictable ranges.
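
As a concrete illustration, here is a minimal sketch (in Python with NumPy, not taken from the text) of the sort of linear rescaling described above: each input variable is mapped into a small fixed interval such as [-1, 1] before being presented to the network, and the recorded minima and ranges let the same transform be applied to new samples later. The function name scale_to_range and the sample values are hypothetical.

    import numpy as np

    def scale_to_range(data, lo=-1.0, hi=1.0):
        """Linearly rescale each column of `data` to the interval [lo, hi].

        Returns the scaled data along with the per-column minima and ranges,
        which are needed to apply the same transform to new samples and to
        undo the scaling afterward.
        """
        data = np.asarray(data, dtype=float)
        col_min = data.min(axis=0)
        col_range = data.max(axis=0) - col_min
        col_range[col_range == 0.0] = 1.0      # guard against constant columns
        scaled = lo + (hi - lo) * (data - col_min) / col_range
        return scaled, col_min, col_range

    # Example: raw inputs whose columns differ by several orders of magnitude.
    raw = np.array([[0.002, 15000.0],
                    [0.010, 42000.0],
                    [0.006, 98000.0]])
    scaled, mins, ranges = scale_to_range(raw)
    print(scaled)   # every column now lies within [-1, 1]

With all variables confined to a range like this, no single input forces the network's weights toward extreme values, which is exactly the situation the training algorithm handles best.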
This becomes even more important when the data is being
learned by output neurons. Most training algorithms minimize the
total error across all outputs. If the output variables are unequally scaled,
those with larger variability will be favored, as they will dominate
the error sum. This can have profound consequences an ...