Presentations
These are selected presentation slides that I have used in internal reading groups, workshops, and conferences.
External Presentations (Selected)
[Seminar series] @GIST
This is a series of seminars that I gave as a guest speaker in Prof. Hong Kook Kim’s group at the Gwangju Institute of Science and Technology (GIST) from March to May 2025. The series covers the basics of graph learning, graph neural networks, several fundamental concepts, and their applications.
Session 1: Introduction to graph mining and graph neural networks
Session 2: On the representational power of graph neural networks
Session 3: A graph signal processing viewpoint of graph neural networks
Session 4: From label propagation to graph neural networks
Session 5: On the problem of oversmoothing and oversquashing
Session 6: Towards efficient graph learning
Session 7: Explainable graph neural networks
[Seminar] @Ewha Womans University
This is a seminar that I gave at Ewha Womans University as a guest lecturer in April 2025.
Slides: A practical introduction to (explainable) graph learning
[AAAI’25] Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint
This is the poster for my paper “Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint,” which was accepted at AAAI 2025. The arXiv version of the paper can be found here.
[IJCAI’24 workshop on XAI] On the Feasibility of Fidelity- for Graph Pruning
These are the poster and presentation slides for my paper “On the Feasibility of Fidelity- for Graph Pruning,” presented at the Workshop on Explainable Artificial Intelligence (XAI) at the 2024 International Joint Conference on Artificial Intelligence (IJCAI). The arXiv version of the paper can be found here.
Poster: On the Feasibility of Fidelity- for Graph Pruning
Slide: On the Feasibility of Fidelity- for Graph Pruning
Here is the accompanying blog post: On the Feasibility of Fidelity- for Graph Pruning.
[Best academic paper award] PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks
These presentation slides were submitted as official supplementary material after the paper was selected as the best academic paper at the Graduate School of Yonsei University.
Original paper: Shin et al., “PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks”, TPAMI (2024)
Slide: PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks
Also, here is the poster that I presented for the short version of the paper, which was accepted at AAAI 2022.
Poster: PAGE (AAAI 2022)
[Samsung HumanTech] Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding
This presentation was given during the competition for the 28th Samsung HumanTech Award (Bronze prize).
Original paper: Shin et al., “Edgeless-GNN: Unsupervised Representation Learning for Edgeless Nodes”, TETC (2022)
Slide: Edgeless-GNN: Unsupervised Inductive Edgeless Network Embedding
Selected Internal Presentations
In my lab, we hold a weekly reading group where we present papers of our own interest. Here are some selected presentations that I have given there.
Topic: Causal learning
This was a two-part presentation in which I introduced the basics of causal learning in an internal seminar at my lab, for which I got a LOT of help from “Introduction to Causal Inference” by Brady Neal. Definitely check out his course if you are interested in causal inference; I personally had a lot of fun learning from it.
Slide 1: Causal learning: Part 1
Slide 2: Causal learning: Part 2
Topic: Self-supervised learning
This was a presentation I made when I was interested in understanding the SimCLR framework. Working through this paper helped me understand other papers related to graph self-supervised learning much better.
Slide: Introduction to SimCLR
Topic: Knowledge distillation
This was a presentation I made when I was interested in understanding knowledge distillation, which I eventually applied in my research on GNN-to-MLP knowledge distillation.
Slide: Towards understanding knowledge distillation
Topic: Physics-informed machine learning (PIML)
This was a presentation I made when I was interested in understanding the physics-informed machine learning landscape.
Slide: Vector Neurons: A General Framework for SO(3)-Equivariant Networks
Topic: Unsupervised disentanglement
A review of the paper “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations” by Locatello et al. This paper is a great read if you are interested in disentangled representation learning; personally, I found it very helpful while reading some papers on explainable AI.
Slide: Challenging common assumptions in the unsupervised learning of disentangled representations