Hello! I am a research scientist at the Material Intelligence Lab in LG AI Research, where I solve various real-world problems using machine learning, including building an AI-powered autonomous lab system. Before joining LG AI Research, I completed my Ph.D. at the MIDaS Lab led by Prof. Won-Yong Shin in the Dept. of Computer Science & Engineering at Yonsei University, Seoul, South Korea.
During my Ph.D., my primary interest was finding elegant and simple solutions to interesting and important problems on graph data and graph neural networks (GNNs). More specifically, my research has focused on explainability in graph neural networks and efficient graph learning. I remain interested in these topics, and I am currently exploring how to apply explainable and efficient graph learning techniques to real-world applications in the materials science domain.
I have published in conferences such as AAAI, IJCAI, SIGIR, and LoG, and in journals such as IEEE TPAMI and PLOS ONE. I have also served as a reviewer for top AI/ML conferences, including NeurIPS, AAAI, IJCAI, KDD, and WWW.
I also have some side research interests, including:
- Mechanistic Interpretability: I personally believe this is the future of XAI research. I have recently started reading papers on the topic and am working to understand its core concepts. As a way to learn more, I am currently translating several important articles on mechanistic interpretability into Korean. Check out this GitBook if you are interested!
- Causal Inference: With the help of Brady Neal’s Introduction to Causal Inference, I am studying the concept of causality and how it can impact the explainability of machine learning models.
I am always open to new ideas and collaborations, so feel free to reach out to me!
Here is my formal CV :)
I also have a personal blog, though it is under renovation at the moment. Nevertheless, please check it out!
| Nov 10, 2025 | After about four months as an intern, I officially joined LG AI Research as a Research Scientist! I will mainly be working on building an AI-driven autonomous lab for material discovery. Looking forward to exciting research and collaborations ahead! |
| Jul 20, 2025 | The paper “DeformMLP: Effective Deformation Prediction for Breast Cancer Using Graph Topology-Assisted MLPs” has been accepted for the Digital Twin for Healthcare (DT4H) workshop at MICCAI 2025. This work introduces DeformMLP, a novel approach that leverages node features generated from graph propagation to predict breast cancer deformation. The method demonstrates that we can achieve high efficiency while exploiting the graph structure without the need for graph neural networks (GNNs). Thanks to my co-authors for their contributions, as well as the reviewers for their valuable feedback. |
| Jun 23, 2025 | I started a new position as a research scientist intern at LG AI Research in Seoul, South Korea. I will be working on applying machine learning methods to chemical and materials science problems, and I am excited to contribute to research in this area. I am grateful for the opportunity and look forward to collaborating with the LG AI Research team. |
| Dec 10, 2024 | The paper “Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint” was accepted at AAAI 2025, where I am the first author. Thanks to Prof. Shin for his supervision, and to the reviewers! The paper is available here. See you in Philadelphia! |
-
Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint
Yong-Min Shin, Siqing Li, Xin Cao, and Won-Yong Shin
In AAAI, Feb 2025
-
On the Feasibility of Fidelity⁻ for Graph Pruning
Yong-Min Shin, and Won-Yong Shin
In IJCAI Workshop on Explainable Artificial Intelligence (XAI), Aug 2024
-
Turbo-CF: Matrix Decomposition-Free Graph Filtering for Fast Recommendation
Jin-Duk Park, Yong-Min Shin, and Won-Yong Shin
In ACM SIGIR Conference on Research and Development in Information Retrieval, Apr 2024
-
Propagate & Distill: Towards Effective Graph Learners Using Propagation-Embracing MLPs
Yong-Min Shin, and Won-Yong Shin
In The Second Learning on Graphs Conference, Nov 2023
-
PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks
Yong-Min Shin, Sun-Woo Kim, and Won-Yong Shin
IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov 2024
-
Prototype-Based Explanations for Graph Neural Networks (Student Abstract) (selected for oral presentation)
Yong-Min Shin, Sun-Woo Kim, Eun-Bi Yoon, and Won-Yong Shin
In AAAI Conference on Artificial Intelligence (AAAI), Feb 2022
-
Time-Series Anomaly Detection with Implicit Neural Representation
Kyeong-Joong Jeong, and Yong-Min Shin
arXiv preprint, Feb 2022
-
Edgeless-GNN: Unsupervised Representation Learning for Edgeless Nodes
Yong-Min Shin, Cong Tran, Won-Yong Shin, and Xin Cao
IEEE Transactions on Emerging Topics in Computing, Jul 2022
We study the problem of embedding edgeless nodes, such as users who newly enter the underlying network, using graph neural networks (GNNs), which are widely studied for effective representation learning on graphs. Our study is motivated by the fact that GNNs cannot be straightforwardly adopted for our problem, since message passing to edgeless nodes, which have no connections, is impossible. To tackle this challenge, we propose Edgeless-GNN, a novel inductive framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised learning. Specifically, we start by constructing a proxy graph based on the similarity of node attributes, which serves as the GNN's computation graph in place of the one defined by the underlying network. The known network structure is used to train model parameters, and a topology-aware loss function is established so that our model judiciously learns the network structure by encoding positive, negative, and second-order relations between nodes. For edgeless nodes, we inductively infer embeddings by expanding the computation graph. By evaluating the performance on various downstream machine learning tasks, we empirically demonstrate that Edgeless-GNN exhibits (a) superiority over state-of-the-art inductive network embedding methods for edgeless nodes, (b) the effectiveness of our topology-aware loss function, (c) robustness to incomplete node attributes, and (d) linear scaling with the graph size.
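The core idea of the proxy graph can be illustrated with a minimal sketch: link each node to its most attribute-similar nodes so that even edgeless nodes obtain neighbors for message passing. This is only an assumption-laden illustration (the function name, cosine similarity, and the choice of k are mine, not the paper's exact recipe):

```python
import numpy as np

def build_proxy_graph(X, k=3):
    """Construct a k-NN proxy graph from node attributes alone.

    Sketch of the proxy-graph idea: nodes with no edges (e.g. new
    users) still carry attributes, so each node is linked to its k
    most attribute-similar nodes, and this proxy graph can stand in
    for the GNN's computation graph. Hypothetical helper, not the
    paper's implementation.
    """
    # Cosine similarity between attribute vectors
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)  # exclude self-loops

    n = X.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        nbrs = np.argsort(S[i])[-k:]  # indices of the k most similar nodes
        adj[i, nbrs] = True
    return adj | adj.T  # symmetrize so message passing is undirected

# Even a node whose edges are all removed gets neighbors from attributes:
X = np.random.default_rng(0).normal(size=(10, 16))
A = build_proxy_graph(X, k=3)
```

Every node ends up with at least k neighbors in the symmetrized proxy graph, which is exactly what makes message passing to otherwise edgeless nodes possible.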
-
Explainable gait recognition with prototyping encoder–decoder
Jucheol Moon, Yong-Min Shin, Jin-Duk Park, Nelson Hebert Minaya, Won-Yong Shin, and Sang-Il Choi
PLOS ONE, Mar 2022
Human gait is a unique behavioral characteristic that can be used to recognize individuals. Collecting gait information widely by means of wearable devices and recognizing people from these data has become an active topic of research. While most prior studies collected gait information using inertial measurement units, we gather data from 40 people using insoles equipped with pressure sensors, and precisely identify the gait phases from the long time series using the pressure data. In terms of recognizing people, there have been a few recent studies on neural network-based approaches to the open-set gait recognition problem using wearable devices. Typically, these approaches determine decision boundaries in the latent space with a limited number of samples. Motivated by the fact that such methods are sensitive to the values of hyperparameters, as our first contribution we propose a new network model that is less sensitive to changes in these values, using a new prototyping encoder–decoder network architecture. As our second contribution, to overcome the inherent limitations due to the lack of transparency and interpretability of neural networks, we propose a new module that enables us to analyze which parts of the input are relevant to the overall recognition performance, using explainability tools such as sensitivity analysis (SA) and layer-wise relevance propagation (LRP).
This website is based on the al-folio theme with Atkinson Hyperlegible font.