Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/9775
Title: | Graph Classification with Minimum DFS Code: Improving Graph Neural Network Expressivity |
Authors: | Gupta, Jhalak |
Keywords: | Classification (of information)|Graph structures|Graphic methods|Long short-term memory|Network coding|Aggregation schemes|AS graph|Canonical form|Condition|Design graphs|Downstream applications|Graph classification|Graph isomorphism|Graph neural networks|Long-range dependencies |
Issue Date: | 2021 |
Publisher: | Institute of Electrical and Electronics Engineers Inc. |
Citation: | Gupta, J., & Khan, A. (2021). Graph classification with minimum DFS code: Improving graph neural network expressivity. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data 2021), 5133-5142. doi:10.1109/BigData52589.2021.9671470. Retrieved from www.scopus.com |
Abstract: | Graph neural networks (GNNs) generally follow a recursive neighbor-aggregation scheme. Recent GNNs are no more powerful than the 1-Weisfeiler-Lehman test, which is a necessary but not sufficient condition for graph isomorphism, limiting their ability to exploit graph structure properly. Moreover, deep GNNs with many convolutional layers suffer from over-smoothing and thus cannot capture long-range dependencies. As a result, downstream applications, such as graph classification, are impacted. To this end, we design GNNs on top of the minimum DFS code, which is a canonical form of a graph; being injective, it captures the graph structure precisely, along with node and edge labels. Owing to the sequential structure of the minimum DFS code, we employ state-of-the-art RNNs (LSTM, BiLSTM, GRU) and Transformer-based sequence classification techniques. While the minimum DFS code can be computed efficiently in practice, LSTMs, BiLSTMs, GRUs, and Transformers capture long-range dependencies in arbitrary-length sequences. We also consider a novel variant of the minimum DFS code that is not injective but reduces the complexity of the feature space, increases generalizability, and improves classification performance on many real-world graph datasets. Thorough empirical comparisons on six real-world network datasets demonstrate the accuracy and efficiency of our methods. We have open-sourced our solution framework [17], into which one can plug different graph datasets and obtain their classification results. This will benefit researchers and practitioners, including biologists, social scientists, and data scientists. © 2021 IEEE. |
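The abstract's central object, the minimum DFS code, can be illustrated with a brute-force sketch (this is not the paper's implementation): enumerate every DFS traversal of a small connected labeled graph, emit each traversed edge as a 5-tuple (i, j, L(u), L(e), L(v)) of discovery indices and labels, and keep the lexicographically smallest sequence. The names `adj`, `vlab`, and `elab` are illustrative assumptions; the exponential enumeration is only feasible for tiny graphs.

```python
def min_dfs_code(adj, vlab, elab):
    """Minimum DFS code of a small connected labeled graph (brute force).

    adj:  {vertex: list of neighbor vertices}
    vlab: {vertex: vertex label}
    elab: {frozenset((u, v)): edge label}
    """
    n_edges = sum(len(nb) for nb in adj.values()) // 2
    best = None

    def step(stack, disc, used, code):
        nonlocal best
        if len(used) == n_edges:          # all edges emitted: a complete DFS code
            if best is None or code < best:
                best = code
            return
        if not stack:                      # disconnected: cannot cover all edges
            return
        u = stack[-1]
        exts = [v for v in adj[u] if frozenset((u, v)) not in used]
        if not exts:
            step(stack[:-1], disc, used, code)   # backtrack to the DFS parent
            return
        for v in exts:                     # branch over every neighbor visit order
            e = frozenset((u, v))
            used2 = used | {e}
            if v in disc:                  # backward edge to an ancestor
                step(stack, disc, used2,
                     code + [(disc[u], disc[v], vlab[u], elab[e], vlab[v])])
            else:                          # forward edge discovering a new vertex
                disc2 = dict(disc)
                disc2[v] = len(disc)
                step(stack + [v], disc2, used2,
                     code + [(disc[u], disc2[v], vlab[u], elab[e], vlab[v])])

    for start in adj:                      # the minimum ranges over start vertices too
        step([start], {start: 0}, frozenset(), [])
    return best


# Labeled triangle: vertices with labels A, B, C; all edge labels "x"
adj  = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
vlab = {"a": "A", "b": "B", "c": "C"}
elab = {frozenset(e): "x" for e in [("a", "b"), ("b", "c"), ("a", "c")]}
code = min_dfs_code(adj, vlab, elab)
print(code)
# [(0, 1, 'A', 'x', 'B'), (1, 2, 'B', 'x', 'C'), (2, 0, 'C', 'x', 'A')]
```

Because the tuples compare lexicographically, the resulting sequence is a canonical (injective) encoding of the labeled graph, which is what makes it suitable as input to the LSTM/BiLSTM/GRU/Transformer sequence classifiers the abstract describes.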
URI: | https://dspace.iiti.ac.in/handle/123456789/9775 https://doi.org/10.1109/BigData52589.2021.9671470 |
ISBN: | 978-1-6654-3902-2 |
Type of Material: | Conference Paper |
Appears in Collections: | Department of Computer Science and Engineering |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.