CHAPTER 2

Neural Lazy Local Learning

J. M. VALLS, I. M. GALVÁN, and P. ISASI

Universidad Carlos III de Madrid, Spain

2.1 INTRODUCTION

Lazy learning methods [13] are conceptually straightforward approaches to approximating real- or discrete-valued target functions. These learning algorithms defer the decision of how to generalize beyond the training data until a new query is encountered. When the query instance is received, a set of similar related patterns is retrieved from the set of available training patterns and is used to approximate the new instance. Similar patterns are chosen by means of a distance metric in such a way that nearby points have higher relevance.

Lazy methods generally work by selecting the k input patterns nearest to the query point. Usually, the metric used is the Euclidean distance. Afterward, a local approximation is carried out using the selected samples in order to generalize to the new instance. The most basic form is the k-nearest neighbor method [4]. In this case, the approximation of the new sample is just the most common output value among the k examples selected. A refinement of this method, called weighted k-nearest neighbor [4], can also be used, which consists of weighting the contribution of each of the k neighbors according to its distance to the new query, giving greater weight to closer neighbors. Another strategy to determine the ...
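The two variants described above can be sketched in a few lines of Python. The code below is an illustrative sketch, not the authors' implementation: it selects the k training patterns nearest to the query under the Euclidean distance and predicts either the majority label (plain k-NN) or the label with the largest inverse-distance score (weighted k-NN). The function and variable names are our own.

```python
import math
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3, weighted=False):
    """Classify `query` from (features, label) pairs in `train`.

    Plain k-NN returns the most common label among the k nearest
    patterns; the weighted variant scores each label by 1/d over its
    neighbors, so closer patterns contribute more.
    """
    # Retrieve the k training patterns closest to the query point.
    neighbors = sorted(train, key=lambda p: euclidean(p[0], query))[:k]
    if not weighted:
        return Counter(label for _, label in neighbors).most_common(1)[0][0]
    scores = {}
    for features, label in neighbors:
        d = euclidean(features, query)
        # Small constant avoids division by zero for exact matches.
        scores[label] = scores.get(label, 0.0) + 1.0 / (d + 1e-12)
    return max(scores, key=scores.get)

# Example: two clusters of labeled points.
train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1), k=3))                  # majority vote
print(knn_predict(train, (5, 4), k=3, weighted=True))   # inverse-distance vote
```

Note that all computation happens at query time: no model is fit in advance, which is precisely the "lazy" character of these methods.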
