Contrastive Multi-View Representation Learning on Graphs

Kaveh Hassani, Amir Hosein Khasahmadi

International Conference on Machine Learning
2020

Abstract

We introduce a self-supervised approach for learning node- and graph-level representations by contrasting structural views of graphs. We show that, unlike in visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings does not improve performance; the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on the Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over the previous state-of-the-art. When compared to supervised baselines, our approach outperforms them on 4 out of 8 benchmarks.
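The core idea from the abstract can be sketched in a few lines: build a second structural view of the graph with a diffusion operator (e.g., personalized PageRank), encode both views, and contrast node encodings of one view against a pooled graph summary of the other. The snippet below is a minimal, illustrative NumPy sketch under assumptions of our own; the helper names (sym_norm, ppr_diffusion, encode), the single linear message-passing layers, mean pooling, and the untrained dot-product score are simplifications, whereas the actual method trains graph neural network encoders with a contrastive objective against negative samples.

```python
import numpy as np

def sym_norm(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def ppr_diffusion(adj, alpha=0.2):
    """Personalized-PageRank diffusion: S = alpha * (I - (1 - alpha) * A_norm)^-1."""
    n = adj.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * sym_norm(adj))

def encode(structure, feats, weight):
    """One linear message-passing layer: ReLU(S @ X @ W)."""
    return np.maximum(structure @ feats @ weight, 0.0)

# Toy 4-node path graph with random 2-dimensional node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 2))
w_a, w_d = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))

# Two structural views: first-order adjacency vs. PPR diffusion.
h_adj = encode(sym_norm(adj), feats, w_a)        # node embeddings, view 1
h_diff = encode(ppr_diffusion(adj), feats, w_d)  # node embeddings, view 2

# Mean-pooled graph summaries for each view.
g_adj, g_diff = h_adj.mean(axis=0), h_diff.mean(axis=0)

# Cross-view scores: node embeddings of one view against the graph summary
# of the other; training would push these up for true pairs and down for
# negatives drawn from corrupted graphs (not shown here).
pos_scores = h_adj @ g_diff + h_diff @ g_adj
print(pos_scores)
```

Running the sketch prints one cross-view score per node; in a real training loop these scores would be compared against negative-sample scores to drive the contrastive objective.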
