Network representation matrices and their eigenproperties: a comparative study

Johannes Lutzeyer, Andrew Walden; funder: Engineering and Physical Sciences Research Council
2020
Typically, network structures are represented by one of three different representation matrices: the adjacency matrix, the unnormalised graph Laplacian and the normalised graph Laplacian. We review their spectral (eigenvalue) and eigenvector properties and recall several relations to graph theoretic quantities. We compare the representation spectra pairwise by applying an affine transformation to one of them, which enables comparison whilst preserving certain key properties such as normalised eigengaps. Bounds are given on the eigenvalue and normalised eigengap differences thus found, which depend on the minimum and maximum degree of the graph. It is found that if the degree extreme difference is large, different choices of representation matrix may give rise to disparate inference drawn from network analysis; smaller degree extreme differences result in consistent inference, whatever the choice of representation matrix. In order to compare the representation matrix eigenvectors, we extend the applicability of, and tighten, the Davis-Kahan theorem by making use of polynomial matrix transformations. Our extended version of the Davis-Kahan theorem enables the comparison of the spaces spanned by sets of eigenvectors of any two symmetric matrices, such as the representation matrices. We then discuss how globally optimal bound values of this extended theorem can be computed in practice for affine transformations using fractional programming theory. We make use of our implementation of the extended Davis-Kahan theorem to compare the spaces spanned by the eigenvectors of the different graph representation matrices for a range of stochastic blockmodel graphs, and of covariance matrices in a spiked covariance model.
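As a concrete illustration of the objects discussed in the abstract, the sketch below (not taken from the thesis; the graph, the library calls and the particular affine map λ → d_max − λ are assumptions made for this example) builds the three representation matrices for a small random graph and compares the adjacency and unnormalised Laplacian spectra after an affine transformation, reporting the eigenvalue and normalised-eigengap differences of the kind the stated bounds control.

```python
# Illustrative sketch only (not the thesis's code): construct the three
# representation matrices of a graph and compare two of their spectra after
# an affine transformation.  The graph, the seed and the map
# lambda -> d_max - lambda are assumptions made for this example.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(n=50, p=0.2, seed=1)   # example graph, assumed to have no isolated vertices
A = nx.to_numpy_array(G)                        # adjacency matrix
deg = A.sum(axis=1)
D = np.diag(deg)
L = D - A                                       # unnormalised graph Laplacian
L_sym = np.diag(deg**-0.5) @ L @ np.diag(deg**-0.5)  # normalised graph Laplacian

def spectrum(M):
    """Eigenvalues of a symmetric matrix in ascending order."""
    return np.sort(np.linalg.eigvalsh(M))

def normalised_eigengaps(evals):
    """Consecutive eigenvalue gaps divided by the spectral spread."""
    return np.diff(evals) / (evals[-1] - evals[0])

# For a d-regular graph L = dI - A holds exactly, so lambda -> d - lambda maps
# the adjacency spectrum onto the Laplacian spectrum; for irregular graphs the
# mismatch is governed by the degree extremes.  Using d_max here is purely an
# illustrative choice.
d_min, d_max = deg.min(), deg.max()
ev_A, ev_L = spectrum(A), spectrum(L)
ev_A_affine = np.sort(d_max - ev_A)             # affinely transformed adjacency spectrum

print("degree extreme difference d_max - d_min:", d_max - d_min)
print("max |eigenvalue difference|:", np.max(np.abs(ev_A_affine - ev_L)))
print("max |normalised eigengap difference|:",
      np.max(np.abs(normalised_eigengaps(ev_A_affine) - normalised_eigengaps(ev_L))))
```

Because an affine map rescales both the eigengaps and the spectral spread by the same factor, the normalised eigengaps of the transformed spectrum are unchanged, which is what makes affine transformations a natural device for this pairwise comparison.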
doi:10.25560/82477
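The subspace comparison enabled by the extended Davis-Kahan theorem concerns quantities of the kind measured in the hedged sketch below: the principal angles between the spaces spanned by eigenvector sets of two representation matrices of a stochastic blockmodel graph. The block sizes, connection probabilities and number of eigenvectors are assumptions for illustration; only the sin Θ distance that Davis-Kahan-type bounds control is computed, and neither the extended theorem itself nor its fractional-programming optimisation is reproduced here.

```python
# Hedged sketch (not the thesis's implementation): measure the distance between
# the subspaces spanned by eigenvector sets of two representation matrices of a
# stochastic blockmodel graph.  Block sizes, probabilities and k are assumptions;
# only the sin(Theta) quantity that Davis-Kahan-type bounds control is computed.
import numpy as np
import networkx as nx
from scipy.linalg import subspace_angles

sizes = [60, 60]                                  # two equally sized blocks
P = [[0.30, 0.05], [0.05, 0.30]]                  # within/between-block probabilities
G = nx.stochastic_block_model(sizes, P, seed=0)
A = nx.to_numpy_array(G)                          # adjacency matrix
deg = A.sum(axis=1)                               # assumed free of isolated vertices
L_sym = np.eye(A.shape[0]) - np.diag(deg**-0.5) @ A @ np.diag(deg**-0.5)

def eigvecs(M, k, largest=True):
    """Orthonormal eigenvectors of a symmetric matrix for its k extreme eigenvalues."""
    evals, evecs = np.linalg.eigh(M)              # eigenvalues in ascending order
    cols = range(-k, 0) if largest else range(k)
    return evecs[:, list(cols)]

k = 2                                             # number of blocks in the model
V_adj = eigvecs(A, k, largest=True)               # leading adjacency eigenvectors
V_lap = eigvecs(L_sym, k, largest=False)          # bottom normalised-Laplacian eigenvectors

theta = subspace_angles(V_adj, V_lap)             # principal angles, descending order
print("largest principal angle (radians):", theta[0])
print("||sin Theta||_2 between the two eigenvector subspaces:", np.sin(theta[0]))
```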