Joel Jang

Ph.D. Student at the University of Washington

Hello! I am a first-year Ph.D. student at the Allen School at the University of Washington, advised by Luke Zettlemoyer. I obtained an M.S. in Artificial Intelligence at KAIST AI, where I was fortunate to be advised by Minjoon Seo. Before that, I obtained a B.S. in Computer Science at Korea University.
October 2023     Our CoT Collection paper has been accepted to EMNLP 2023 and RoSPr to EMNLP 2023 Findings!
May 2023     Our Knowledge Unlearning paper and Gradient Ascent Post-training paper have been accepted to ACL 2023 and Prompt Injection to ACL 2023 Findings!
April 2023     Our ELM paper has been accepted to ICML 2023!
January 2023     Our Guess the Instruction paper has been accepted to ICLR 2023!
November 2022     Our CKL paper won the Qualcomm Innovation Fellowship Korea (QIFK) 2022!
October 2022     Our TemporalWiki paper has been accepted to EMNLP 2022!
January 2022     Our CKL paper has been accepted to ICLR 2022!


University of Washington Sep. 2023 - Present

Ph.D. Student (advisor: Luke Zettlemoyer)

Korea Advanced Institute of Science and Technology (KAIST) Mar. 2021 - Aug. 2023

M.S. in Artificial Intelligence (advisor: Minjoon Seo)

Korea University Mar. 2017 - Feb. 2021

B.S. in Computer Science


Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu


Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo


Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo



The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Retrieval of Soft Prompt Enhances Zero-shot Task Generalization

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

EMNLP 2023 Findings

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

ACL 2023

Gradient Ascent Post-training Enhances Language Model Generalization

Dongkeun Yoon*, Joel Jang*, Sungdong Kim, Minjoon Seo

ACL 2023 (short)

Prompt Injection: Parameterization of Fixed Inputs

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

ACL 2023 Findings

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

Guess the Instruction! Making Language Models Stronger Zero-Shot Learners

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

ICLR 2023


Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang*, Seonghyeon Ye*, Minjoon Seo

Proceedings of Machine Learning Research (PMLR)

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

EMNLP 2022

Towards Continual Knowledge Learning of Language Models

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

ICLR 2022


Sequential targeting: A continual learning approach for data imbalance in text classification

Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

Expert Systems with Applications (2021)

( * indicates equal contribution )


Full CV in PDF.

  • Allen Institute for AI (AI2) June 2023 - Present
    Research Intern
    Personalized RLHF
  • LG AI Research July 2022 - May 2023
    Research Intern (Mentors: Moontae Lee, Lajanugen Logeswaran)
    Worked on developing LMs that can generalize to novel tasks
  • KAIST Language & Knowledge Lab Mar. 2021 - Aug. 2023
    M.S. Student (Advisor: Minjoon Seo)
    Continual adaptation of LLMs
  • Kakao Brain Dec. 2020 - Feb. 2021
    Research Intern (Mentor: Ildoo Kim)
    Worked on large-scale representation learning
  • NAVER Jul. 2020 - Sept. 2020
    Software Engineer Intern
    Worked on continual learning on hate speech detection
  • KIST Europe Aug. 2019 - Jan. 2020
    Research Intern (Mentor: Sungho Suh)
    Worked on machine prognostics through ML
  • Korea University Mar. 2017 - Feb. 2021
    B.S. in Computer Science