Graph Neural Networks: The Path towards Explainability

This article first appeared in Data Science Briefings, the DataMiningApps newsletter. Subscribe now for free if you want to be the first to receive our feature articles, or follow us @DataMiningApps. Do you also wish to contribute to Data Science Briefings? Shoot us an e-mail over at briefings@dataminingapps.com and let’s get in touch!

Contributed by: Elena Tiukhova under the supervision of prof. dr. Monique Snoeck and prof. dr. Bart Baesens.

Nowadays, neural networks attract increased attention from the research community due to their ability to work with unconventional data types such as text, images, video, or graphs. In particular, Graph Neural Networks (GNNs) are a class of deep learning models that can be used to analyze graph-structured data. Graph data is popular in domains such as social network analysis, fraud detection, and drug discovery. GNNs have achieved state-of-the-art performance in various tasks, such as node classification, link prediction, and graph classification. However, the interpretability of GNNs remains a challenge: like other neural networks, they are essentially black-box models.

In this article, we will explore the challenges of explainability in GNNs and the efforts made to make them more interpretable using eXplainable Artificial Intelligence (XAI).

Overview of GNNs

GNNs were initially adapted from Convolutional Neural Networks (CNNs), which perform representation learning on image data. CNNs, however, do not work on graph data, which lacks a regular grid and a fixed node ordering and can be of arbitrary size. GNNs are designed to overcome these limitations: they aggregate information over an arbitrary number of neighbors of a node in order to produce an embedding of that node in Euclidean space. Many extensions of GNNs have been proposed, e.g., temporal models inspired by Recurrent Neural Networks that capture temporal evolution, or graph autoencoders that perform unsupervised learning. GNNs differ in their expressive power depending on how the aggregation over nodes is performed, how the graph structure is exploited, and which downstream task is being solved. Early examples include static spectral-based Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs). GCNs propagate information from a node's neighbors to obtain its embedding, assigning the same importance to each neighbor [1]. GATs improve on GCNs by incorporating an attention mechanism that weighs every connection [2]. Recent examples are temporal GNNs that capture temporal evolution and spatial structure simultaneously, e.g., Dynamic Self-Attention (DySAT) networks [3] or the Attention Temporal Graph Convolutional Network (A3T-GCN) [4]. It is thus apparent that graphs are a powerful data structure for representing complex relationships between entities, but they can be challenging to interpret and explain. Graph XAI techniques aim to address this challenge by providing human-understandable explanations for the decisions made by GNNs.
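To make the aggregation step concrete before turning to explainability, here is a minimal sketch of a single GCN-style layer in plain PyTorch (the framework choice and the toy graph are assumptions for illustration, not something prescribed in this article): every node's new embedding is a degree-normalized sum of its own and its neighbors' linearly transformed features, so each neighbor receives the same importance. This fixed normalization is exactly what GATs replace with learned attention weights.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """A minimal GCN-style layer: H' = relu(D^{-1/2} (A + I) D^{-1/2} X W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node feature matrix
        # adj: (num_nodes, num_nodes) binary adjacency matrix
        adj_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = adj_hat.sum(dim=1)                        # node degrees (incl. self-loop)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))          # D^{-1/2}
        norm_adj = d_inv_sqrt @ adj_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(norm_adj @ self.linear(x))    # aggregate, transform, activate


# Toy usage: a 4-node graph with 3-dimensional node features.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
x = torch.randn(4, 3)
layer = SimpleGCNLayer(in_dim=3, out_dim=2)
print(layer(x, adj).shape)  # torch.Size([4, 2])
```

Because the layer only sums over whatever neighbors a node happens to have, the same weights apply to graphs of any size and shape, which is precisely what a fixed-grid CNN cannot do.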

XAI for GNNs

Due to the unique nature of graph data, existing XAI methods may not be suitable for obtaining high-quality explanations for GNNs [5]. The nature of explanations for a graph is heavily determined by its structure, making explanations of particular nodes or edges different from explanations at the graph or subgraph level. Moreover, interpreting explanations for graphs requires considerable domain knowledge due to the complex nature of the data. Hence, XAI methods for graph data should be tailored to the particularities of graphs while building on the best practices of traditional XAI.

Graph XAI techniques are post hoc in nature: the explanations aim to explain the internal workings of a deep learning model that is not inherently interpretable. A recent survey on XAI techniques for GNNs presents a taxonomy of GNN explanation techniques [5]. It follows an approach similar to the general XAI taxonomy in that the main distinction is made between model-level and instance-level explanations, with model-level explanations being very scarce. XGNN is an example of a model-level explanation technique that uses reinforcement learning to investigate which patterns maximize a certain prediction on the graph [6]. A graph generator is trained to produce graph patterns using feedback from the GNN model via a policy gradient. With the discovered patterns, GNN models can be better understood and even improved [6]. Local- or instance-level explanations depend on a particular input graph and vary not only in the algorithms employed to generate explanations but also in which specific parts of the graph are explained. GNNExplainer is an example of an instance-level, model-agnostic XAI method that returns the subgraph and the subset of node features that are most influential for a prediction [7]. This is achieved by maximizing the mutual information between a subgraph and the GNN's predictions, which produces a graph mask with important subgraphs and a feature mask that hides unimportant node features. A main advantage of GNNExplainer is that explanations can be tailored to a particular task: node or graph classification, or edge prediction [7].
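The mask-learning idea at the heart of GNNExplainer can be sketched in a few lines. The code below is a simplified illustration, not the published algorithm: it learns only a soft edge mask (no feature mask) and uses agreement with the original prediction plus a sparsity penalty as a practical stand-in for the mutual-information objective. It assumes a `model(x, adj)` that returns per-node logits, such as the toy GCN layer from the earlier sketch; in practice one would rely on a library implementation (e.g., the one shipped with PyTorch Geometric).

```python
import torch

def explain_node(model, x, adj, node_idx, epochs=200, lr=0.01, sparsity=0.005):
    """Learn a soft edge mask that preserves the model's prediction for one node.

    Simplified sketch of the GNNExplainer idea: keeping the prediction while
    penalizing mask size stands in for the mutual-information objective.
    """
    with torch.no_grad():
        target = model(x, adj)[node_idx].argmax()           # original predicted class

    edge_mask = torch.randn_like(adj, requires_grad=True)   # learnable logits per edge
    optimizer = torch.optim.Adam([edge_mask], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        masked_adj = adj * torch.sigmoid(edge_mask)          # softly switch edges on/off
        logits = model(x, masked_adj)[node_idx]
        loss = (torch.nn.functional.cross_entropy(logits.unsqueeze(0),
                                                  target.unsqueeze(0))
                + sparsity * torch.sigmoid(edge_mask).sum()) # keep the explanation small
        loss.backward()
        optimizer.step()

    return torch.sigmoid(edge_mask).detach()                 # edge importances in [0, 1]
```

With the toy graph above, `explain_node(layer, x, adj, node_idx=1)` returns a matrix of scores in [0, 1]; edges whose scores stay high are the ones the model relies on for that node's prediction.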

Some Graph XAI techniques are based on the ideas of general XAI techniques, e.g., Shapley values or LIME. In particular, SubgraphX employs Monte Carlo tree search to explore different subgraphs, together with Shapley values to measure their importance [8]. Extending its general XAI counterpart LIME, GraphLIME is another example of instance-level surrogate explanations inspired by traditional XAI. It is limited to node classification tasks and returns the importance of node features for a particular node prediction. GraphLIME uses an HSIC Lasso model as an interpretable surrogate to explain a black-box GNN model [9]. The HSIC Lasso model is trained on a subgraph built around the node to be explained, and its weights are used as importance scores for the features.
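The surrogate idea behind GraphLIME can be sketched as well, with one important simplification flagged up front: the real method fits a nonlinear HSIC Lasso, whereas the sketch below substitutes an ordinary Lasso from scikit-learn so the example stays short and self-contained. It again assumes a `model(x, adj)` that returns per-node scores, like the toy GCN layer above; the weights of the sparse surrogate, fitted on the k-hop neighborhood of the node being explained, serve as feature importance scores.

```python
import torch
from sklearn.linear_model import Lasso

def surrogate_feature_importance(model, x, adj, node_idx, hops=2, alpha=0.01):
    """Fit a sparse linear surrogate on the node's neighborhood.

    Simplified stand-in for GraphLIME: the published method uses an HSIC Lasso
    (a kernel-based, nonlinear sparse model); a plain Lasso is used here.
    """
    # Collect the k-hop neighborhood of the node to be explained.
    reach = torch.eye(adj.size(0))
    for _ in range(hops):
        reach = reach @ (adj + torch.eye(adj.size(0)))
    neighbors = (reach[node_idx] > 0).nonzero(as_tuple=True)[0]

    with torch.no_grad():
        preds = model(x, adj)                                # black-box GNN outputs
        target = preds[neighbors, preds[node_idx].argmax()]  # score of the node's class

    surrogate = Lasso(alpha=alpha)
    surrogate.fit(x[neighbors].numpy(), target.numpy())
    return surrogate.coef_                                   # one weight per node feature
```

For the toy example, `surrogate_feature_importance(layer, x, adj, node_idx=1)` returns one weight per node feature; features with near-zero weights are deemed unimportant for the prediction in that node's neighborhood.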

Conclusion

In conclusion, XAI is a rapidly growing field of research, particularly for Graph Neural Networks (GNNs), which have become increasingly popular in recent years. With XAI, GNNs can not only make accurate predictions but also provide interpretable explanations for those predictions, enabling users to gain insight into the reasoning behind the model's decision-making process and increasing the trust in and transparency of the models. While there is still much work to be done in developing and refining XAI techniques for GNNs, especially for model-level explanations, the progress made so far is promising.

References

  • [1] Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  • [2] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). Graph attention networks. International Conference on Learning Representations (ICLR). arXiv:1710.10903.
  • [3] Sankar, A., Wu, Y., Gou, L., Zhang, W., & Yang, H. (2020). DySAT: Deep neural representation learning on dynamic graphs via self-attention networks. In Proceedings of the International Conference on Web Search and Data Mining (WSDM 2020), Houston, TX, February 3-7, 2020.
  • [4] Bai, J., Zhu, J., Song, Y., Zhao, L., Hou, Z., Du, R., & Li, H. (2021). A3T-GCN: Attention temporal graph convolutional network for traffic forecasting. ISPRS International Journal of Geo-Information, 10(7), 485.
  • [5] Yuan, H., Yu, H., Gui, S., & Ji, S. (2022). Explainability in graph neural networks: A taxonomic survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. doi: 10.1109/TPAMI.2022.3204236.
  • [6] Yuan, H., Tang, J., Hu, X., & Ji, S. (2020, August). Xgnn: Towards model-level explanations of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 430-438).
  • [7] Ying, Z., Bourgeois, D., You, J., Zitnik, M., & Leskovec, J. (2019). Gnnexplainer: Generating explanations for graph neural networks. Advances in neural information processing systems, 32.
  • [8] Yuan, H., Yu, H., Wang, J., Li, K., & Ji, S. (2021, July). On explainability of graph neural networks via subgraph explorations. In International Conference on Machine Learning (pp. 12241-12252). PMLR.
  • [9] Huang, Q., Yamada, M., Tian, Y., Singh, D., & Chang, Y. (2022). GraphLIME: Local interpretable model explanations for graph neural networks. IEEE Transactions on Knowledge and Data Engineering. doi: 10.1109/TKDE.2022.3187455.