Engineer friends often ask me: “Graph Deep Learning sounds great, but are there any big commercial success stories? Is it being deployed in practical applications?” Besides the obvious ones (recommendation systems at Pinterest, Alibaba, and Twitter), a slightly nuanced success story is the Transformer architecture, which has taken the NLP industry by storm. Through this post, I want to establish links between Graph Neural Networks (GNNs) and Transformers. I’ll talk about the intuitions behind model architectures in the NLP and GNN communities, make connections using equations and figures, and discuss how the two communities could work together to drive progress.
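As a preview of the connection the post develops, here is a minimal NumPy sketch (my own illustration, not code from the post) of single-head scaled dot-product attention written as message passing on a fully connected graph. The function name `attention_as_message_passing` and the toy dimensions are assumptions made for illustration; the attention formula itself is the standard one.

```python
import numpy as np

def attention_as_message_passing(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention, read as message passing
    on a fully connected graph: every word (node) aggregates features
    from every other word, weighted by softmax-normalized attention scores."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # per-node query/key/value features
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # pairwise "edge" scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over all neighbours
    return weights @ V                                # weighted neighbourhood aggregation

# Toy example: a "sentence" of 4 words with 8-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
H = attention_as_message_passing(X, Wq, Wk, Wv)      # updated node features, shape (4, 8)
```

Viewed this way, the softmax attention weights play the role of learned edge weights on a complete graph over the sentence, which is exactly the bridge to GNN-style neighbourhood aggregation that the rest of the post makes precise.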