Joel Jang

Ph.D. Student at the University of Washington

joeljang@cs.washington.edu

About
Hello! I am a first-year Ph.D. Student at the Allen School at the University of Washington, co-advised by Luke Zettlemoyer and Dieter Fox. I also work closely with Yejin Choi and Abhishek Gupta. I am a research intern at Nvidia Robotics Lab and was previously a research intern at Ai2. I am currently interested in developing foundational models for generalist AI robots. I obtained an M.S. in Artificial Intelligence at KAIST AI, where I was fortunate to be advised by Minjoon Seo. Before that, I obtained a B.S. in Computer Science at Korea University.
News
January 2024     Our Prometheus paper has been accepted to ICLR 2024!
October 2023     Our CoT Collection paper has been accepted to EMNLP 2023 and RoSPr to EMNLP 2023 Findings!
May 2023     Our Knowledge Unlearning paper and Gradient Ascent Post-training paper have been accepted to ACL 2023 and Prompt Injection to ACL 2023 Findings!
April 2023     Our ELM paper has been accepted to ICML 2023!
January 2023     Our Guess the Instruction paper has been accepted to ICLR 2023!
November 2022     Our CKL paper won the Qualcomm Innovation Fellowship Korea (QIFK) 2022!
October 2022     Our TemporalWiki paper has been accepted to EMNLP 2022!
January 2022     Our CKL paper has been accepted to ICLR 2022!

Education

University of Washington Sep. 2023 - Present

Ph.D. Student (advisors: Luke Zettlemoyer and Dieter Fox)

Korea Advanced Institute of Science and Technology (KAIST) Mar. 2021 - Aug. 2023

M.S. in Artificial Intelligence (advisor: Minjoon Seo)

Korea University Mar. 2017 - Feb. 2021

B.S. in Computer Science

Publications

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

Preprint

2024

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

TACL 2024

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Retrieval of Soft Prompt Enhances Zero-shot Task Generalization

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

EMNLP 2023 Findings

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

ACL 2023

Gradient Ascent Post-training Enhances Language Model Generalization

Dongkeun Yoon*, Joel Jang*, Sungdong Kim, Minjoon Seo

ACL 2023 (short)

Prompt Injection: Parameterization of Fixed Inputs

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

ACL 2023 Findings

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

Guess the Instruction! Making Language Models Stronger Zero-Shot Learners

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

ICLR 2023

2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang*, Seonghyeon Ye*, Minjoon Seo

NeurIPS 2022 Workshop on Transfer Learning for NLP (TL4NLP)

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

EMNLP 2022

Towards Continual Knowledge Learning of Language Models

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

ICLR 2022

2021

Sequential Targeting: A Continual Learning Approach for Data Imbalance in Text Classification

Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

Expert Systems with Applications (2021)

(* indicates equal contribution)

Vitæ

Full CV in PDF.

  • Nvidia Robotics Lab Mar. 2024 - Present
    Research Intern (Mentor: Dieter Fox)
    Democratizing Foundational Models for Robotics
  • Allen Institute for AI (AI2) Jun. 2023 - Jan. 2024
    Research Intern (Mentors: Prithviraj Ammanabrolu, Yuchen Lin, Yejin Choi)
    Personalized RLHF
  • University of Washington Sep. 2023 - Present
    Ph.D. Student (Advisors: Luke Zettlemoyer and Dieter Fox)
    Foundational Models for Generalist AI Robots
  • LG AI Research Jul. 2022 - May 2023
    Research Intern (Mentors: Moontae Lee, Lajanugen Logeswaran)
    Worked on developing LMs that can generalize to novel tasks
  • KAIST Language & Knowledge Lab Mar. 2021 - Aug. 2023
    M.S. Student (Advisor: Minjoon Seo)
    Continual Adaptation of LLMs
  • Kakao Brain Dec. 2020 - Feb. 2021
    Research Intern (Mentor: Ildoo Kim)
    Worked on large-scale representation learning
  • NAVER Jul. 2020 - Sep. 2020
    Software Engineer Intern
    Worked on continual learning for hate speech detection
  • KIST Europe Aug. 2019 - Jan. 2020
    Research Intern (Mentor: Sungho Suh)
    Worked on machine prognostics through ML
  • Korea University Mar. 2017 - Feb. 2021
    B.S. in Computer Science