Statistical and Machine Learning Approaches for Network Analysis by Subhash C. Basak, Matthias Dehmer


11.9 Distance Weighting

The kernels introduced so far have the advantage that the hypernym and hyponym hypotheses are reflected in the calculation of the graph kernel. A further possible improvement is to assign weights to the product graph nodes depending on the distance of the associated nodes from the hyponym and hypernym candidates. Such a weighting is motivated by the expectation that edges located near the hypernym and hyponym candidates are more important for estimating the correctness of the hypothesis. It will be shown that this distance calculation is possible with minimal overhead, simply by reusing the intermediate results of the kernel computation. Let us define $B$ as the matrix function that sets an entry to one if the associated entry of the argument matrix $M = (m_{xy})$ is greater than zero, and to zero otherwise. Let $(h_{xy}) = B(M)$; then

(11.13)
\[
h_{xy} = \begin{cases} 1, & \text{if } m_{xy} > 0 \\ 0, & \text{otherwise.} \end{cases}
\]
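The binarization $B$ of Eq. (11.13) can be sketched in a few lines; NumPy is used here purely as an illustrative assumption, since the chapter prescribes no implementation:

```python
import numpy as np

def B(M):
    """Eq. (11.13): entry is 1 where the corresponding entry of M
    is greater than zero, and 0 otherwise."""
    return (np.asarray(M) > 0).astype(int)

# Small example: positive entries become 1, everything else 0.
H = B(np.array([[0, 2], [3, 0]]))
```

Applied elementwise, the comparison `M > 0` yields a Boolean matrix, and `astype(int)` converts it to the 0/1 matrix $(h_{xy})$.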

Let $A_{a_1 \vee a_2}$ be the adjacency matrix in which all entries are set to zero unless the row or the column of the entry is associated with $a_1$ or $a_2$. Then a matrix entry of $B(A_{a_1 \vee a_2} A_{PG})$ is one if there exists a common walk of length two between the two nodes and one of them is $a_1$ or $a_2$. Let us define $C_i$ as img. An entry ...
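Under the reading that $A_{a_1 \vee a_2}$ retains only entries incident to $a_1$ or $a_2$, the length-two walk test can be sketched as follows. The 4-node product graph and the node index chosen for $a_1$ are hypothetical, and only a single candidate node is masked for brevity:

```python
import numpy as np

def B(M):
    # Eq. (11.13): 1 where the entry is positive, 0 otherwise.
    return (np.asarray(M) > 0).astype(int)

def mask_incident(A, nodes):
    """Sketch of A_{a1 v a2}: keep only entries whose row or column
    index lies in `nodes`; all other entries are set to zero."""
    keep = np.zeros(A.shape[0], dtype=bool)
    keep[list(nodes)] = True
    return A * (keep[:, None] | keep[None, :])

# Hypothetical product-graph adjacency matrix (a path 0-1-2-3).
A_PG = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])

A_masked = mask_incident(A_PG, {0})   # a1 = node 0 (a2 omitted for brevity)
walks2 = B(A_masked @ A_PG)           # 1 iff a length-two walk starts with an
                                      # edge incident to node 0
```

For instance, the entry for nodes 0 and 2 is one (the walk 0-1-2 passes through node 0), while rows not incident to the candidate remain zero.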
