Dongryeol Lee

Ph.D. Student, Machine Intelligence Lab, Seoul National University ECE


Hi! I am a Ph.D. student at Seoul National University ECE. I am fortunate to be advised by Prof. Kyomin Jung.

My research interests lie broadly in machine learning and natural language processing. I am particularly interested in question answering and fair evaluation metrics for large language models.


Education

Seoul National University
Ph.D. in Electrical and Computer Engineering (2021 – present)
Advisor: Prof. Kyomin Jung
Seoul National University
B.S. in Naval Architecture and Ocean Engineering (2014 – 2021)

Work Experience

Amazon
Applied Scientist Intern (Sept 2025 – Feb 2026)
Host: Saab Mansour

News

Dec 2025 Paper accepted at EACL 2026 (Findings): Don’t Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation
Nov 2025 Paper accepted at NeurIPS 2025: Program Synthesis via Test-Time Transduction
Oct 2025 Two papers accepted at EMNLP 2025: Fooling the LVLM Judges (Main) and Can You Trick the Grader? (Findings)
Aug 2025 I joined Amazon as an Applied Scientist Intern.
May 2025 Four papers accepted at NAACL 2025 (2 Oral, 2 Findings): EMBER, MoC, VLind-Bench, and Summary-Guided Decoding
Dec 2024 Paper accepted at COLING 2025 (Oral): Return of EM: Entity-driven Answer Set Expansion for QA Evaluation

Selected Publications

  1. EACL
    Don’t Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation
    Jiwon Moon*, Yerin Hwang*, Dongryeol Lee, Taegwan Kang, Yongil Kim, and Kyomin Jung
    In Findings of the Association for Computational Linguistics: EACL 2026, 2026
  2. NeurIPS
    Program Synthesis via Test-Time Transduction
    Kang-il Lee, Jahyun Koo, Seunghyun Yoon, Minbeom Kim, Hyukhun Koh, Dongryeol Lee, and Kyomin Jung
    In Advances in Neural Information Processing Systems (NeurIPS), 2025
  3. EMNLP
    Fooling the LVLM Judges: Visual Biases in LVLM-Based Evaluation
    Yerin Hwang*, Dongryeol Lee*, Kyungmin Min, Taegwan Kang, Yongil Kim, and Kyomin Jung
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
  4. EMNLP
    Can You Trick the Grader? Adversarial Persuasion of LLM Judges
    Yerin Hwang, Dongryeol Lee, Taegwan Kang, Yongil Kim, and Kyomin Jung
    In Findings of the Association for Computational Linguistics: EMNLP 2025, 2025
  5. NAACL Oral
Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the Effect of Epistemic Markers on LLM-based Evaluation
    Dongryeol Lee*, Yerin Hwang*, Yongil Kim, Joonsuk Park, and Kyomin Jung
    In Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025
  6. NAACL Oral
    Generating Diverse Hypotheses for Inductive Reasoning
    Kang-il Lee, Hyukhun Koh, Dongryeol Lee, Seunghyun Yoon, Minsung Kim, and Kyomin Jung
    In Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025
  7. NAACL
    VLind-Bench: Measuring Language Priors in Large Vision-Language Models
    Kang-il Lee, Minbeom Kim, Seunghyun Yoon, Minsung Kim, Dongryeol Lee, Hyukhun Koh, and Kyomin Jung
    In Findings of the Association for Computational Linguistics: NAACL 2025, 2025
  8. NAACL
    Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
    Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, and Kyomin Jung
    In Findings of the Association for Computational Linguistics: NAACL 2025, 2025
  9. COLING Oral
    Return of EM: Entity-driven Answer Set Expansion for QA Evaluation
    Dongryeol Lee, Minwoo Lee, Kyungmin Min, Joonsuk Park, and Kyomin Jung
    In Proceedings of the 31st International Conference on Computational Linguistics (COLING), 2025
  10. EMNLP
    Asking Clarification Questions to Handle Ambiguity in Open-Domain QA
    Dongryeol Lee*, Segwang Kim*, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, and Kyomin Jung
    In Findings of the Association for Computational Linguistics: EMNLP 2023, 2023