Presentations
These are selected presentation slides that I have used in internal reading groups, workshops, and conferences.
External presentations
On interpretation of GATs based on attention
This presentation was given ahead of my short visit to UNSW Sydney, Australia, and it was a great opportunity to share my research (currently under review).
Towards MLPs as effective graph learning
This presentation was given at the A3 workshop on 2023.02.21, where I received the best presentation award.
PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks
This presentation was given at the IITP project meeting on 2022.05.20. It introduces the PAGE framework; at the time, the expanded version of the paper was under review.
Also, this is the poster that I presented for the short version, accepted at AAAI 2022.
Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding
This presentation was given during the 28th Samsung HumanTech award, where I received the bronze prize.
Internal reading groups
Selected slides from the internal reading groups that I have presented. Preparing for these presentations has been a great way to learn about new topics and to keep up with the latest research across a variety of topics.
Graph learning and graph neural networks
- Introduction to graph mining and graph neural networks: My introductory-level presentation on the general field of graph learning and graph neural networks.
- Industry use cases of GNN-based recommender systems: A survey of the industry use cases of graph neural networks (as recommender systems).
- Accelerative GNNs with MLPs: This presentation was made when I was focused on developing different approaches for more efficient graph learning.
- On the representational power of graph neural networks: This presentation was made when I was trying to understand the research on the representational power of graph neural networks.
- A review of the paper: Message passing all the way up by Petar Veličković. I had such fun reading and expanding on this position paper.
- On simple graph neural networks: This was an early presentation made when I was interested in developing simple graph learning frameworks.
Explainable AI and interpretable machine learning
- How much can we analyze attention? and Towards "faithful" attention were presentations made when I was interested in exploring the interpretability of attention mechanisms in deep learning models. This effort eventually led to my research on interpreting GAT models with attention, which is currently under review.
Adversarial attacks & defenses
- Adversarial attacks in graphs and its defense: This presentation was made when I was studying the literature on adversarial attacks and defenses in graph neural networks.
Causal learning
This was a two-part presentation introducing the basics of causal learning, for which I got a LOT of help from "Introduction to Causal Inference" by Brady Neal. Thank you!!
Self-supervised learning
- Introduction to SimCLR: This was a presentation made when I was interested in understanding the SimCLR framework.
Knowledge distillation
- Towards understanding knowledge distillation: This presentation was made when I was interested in understanding knowledge distillation, which I eventually applied in my research on efficient graph learning.
Physics-informed machine learning (PIML)
- A review of the paper: Vector Neurons: A General Framework for SO(3)-Equivariant Networks by Deng et al. This presentation was made when I was interested in understanding the physics-informed machine learning landscape.
Unsupervised disentanglement
- A review of the paper: Challenging common assumptions in the unsupervised learning of disentangled representations by Locatello et al.