11.1 Introduction
For many years, machine learning approaches predominantly dealt with features, that is, attribute-value pairs [1]. Such an approach is attractive for several reasons. First, feature values can easily be stored in a relational database. Second, operations such as the scalar product or the radial basis function for computing the similarity of two feature vectors are very efficient and can therefore be applied to large amounts of data. However, the feature-based approach cannot express relationships between variables. Furthermore, the original data is often directly represented as a graph and has no natural feature representation, and converting a graph representation into features can seriously degrade precision and recall. Graph-based machine learning methods (also called statistical relational learning) have therefore become increasingly popular in recent years. Commonly employed graph-based machine learning methods include graph kernels, conditional random fields, and graph substructure learning.
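To make the efficiency argument concrete, the following minimal sketch (a hypothetical illustration, not taken from the chapter; the feature vectors and the gamma parameter are invented) computes the scalar product and the radial basis function similarity of two feature vectors:

```python
import numpy as np

def dot_similarity(x, y):
    # Scalar (dot) product of two feature vectors.
    return float(np.dot(x, y))

def rbf_similarity(x, y, gamma=0.5):
    # Radial basis function (Gaussian) similarity; gamma is an
    # assumed bandwidth parameter chosen only for illustration.
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

# Toy feature vectors (invented for this example).
x = np.array([1.0, 0.0, 2.0])
y = np.array([0.5, 1.0, 2.0])
print(dot_similarity(x, y), rbf_similarity(x, y))
```

Both operations run in time linear in the number of features, which is why feature-based similarity scales so well to large data sets.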
A graph kernel is a similarity function on pairs of graphs whose matrix of kernel values is symmetric and positive semidefinite. Graph kernels are often used together with support vector machines and nearest-neighbor methods for the classification of graphs. Applications include image classification based on recognized structures [2], semantic relation extraction from text [3-7], and estimating certain properties (e.g., toxicity) of molecules [8].
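As a concrete illustration of a kernel that satisfies the positive-semidefiniteness requirement by construction, the sketch below implements a simple vertex-label histogram kernel; the function name and the toy graphs are hypothetical and not part of the chapter. Because the kernel value is an explicit inner product of label-count feature vectors, the resulting kernel matrix is guaranteed to be symmetric and positive semidefinite:

```python
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    # Count how often each vertex label occurs in each graph and take
    # the inner product of the two label histograms.
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1.keys() & h2.keys())

# Two toy molecule-like graphs, represented here only by their atom labels.
g1 = ["C", "C", "O", "H", "H"]
g2 = ["C", "O", "O", "H"]
print(vertex_histogram_kernel(g1, g2))  # 2*1 + 1*2 + 2*1 = 6
```

Kernel values like this can be fed directly into a kernelized classifier such as a support vector machine, with no feature vectors for the graphs ever materialized explicitly.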
Conditional random ...