Joel Jang

Ph.D. Student at the University of Washington

joeljang@cs.washington.edu

About
Hello! I am a second-year Ph.D. student in the Allen School at the University of Washington, co-advised by Luke Zettlemoyer and Dieter Fox. I am currently a research intern at the Nvidia GEAR Lab, working with Scott Reed and Jim Fan, and was previously a research intern at the Nvidia Robotics Lab and AI2. My current interest is scaling for robotics: specifically, developing scalable methods for building foundation models for generalist robots. I obtained an M.S. in Artificial Intelligence from KAIST AI, where I was fortunate to be advised by Minjoon Seo. Before that, I obtained a B.S. in Computer Science from Korea University.
News
October 2024     Invited to give a talk at OpenAI on our new preprint, Latent Action Pretraining from Videos.
June 2024     I am part of the organizing committee for the 5th Embodied AI Workshop at CVPR 2024.
January 2024     Our Prometheus paper has been accepted to ICLR 2024!
October 2023     Our CoT Collection paper has been accepted to EMNLP 2023 and RoSPr to EMNLP 2023 Findings!
May 2023     Our Knowledge Unlearning and Gradient Ascent Post-training papers have been accepted to ACL 2023, and Prompt Injection to ACL 2023 Findings!
April 2023     Our ELM paper has been accepted to ICML 2023!
January 2023     Our Guess the Instruction paper has been accepted to ICLR 2023!
November 2022     Our CKL paper won the Qualcomm Innovation Fellowship Korea (QIFK) 2022!
October 2022     Our TemporalWiki paper has been accepted to EMNLP 2022!
January 2022     Our CKL paper has been accepted to ICLR 2022!

Education

University of Washington Sep. 2023 - Present

Ph.D. Student (advisors: Luke Zettlemoyer and Dieter Fox)

Korea Advanced Institute of Science and Technology (KAIST) Mar. 2021 - Aug. 2023

M.S. in Artificial Intelligence (advisor: Minjoon Seo)

Korea University Mar. 2017 - Feb. 2021

B.S. in Computer Science

Publications

2024

Latent Action Pretraining from Videos

Seonghyeon Ye*, Joel Jang*, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo

Preprint

Semiparametric Token-Sequence Co-Supervision

Hyunji Lee*, Doyoung Kim*, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon Oh, Minjoon Seo

ACL 2024

LangBridge: Multilingual Reasoning Without Multilingual Supervision

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

ACL 2024

Exploring the Practicality of Generative Retrieval on Dynamic Corpora

Chaeeun Kim*, Soyoung Yoon*, Hyunji Lee, Joel Jang, Sohee Yang, Minjoon Seo

EMNLP 2024

How Well Do Large Language Models Truly Ground?

Hyunji Lee*, Sejune Joo*, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon Oh, Minjoon Seo

NAACL 2024

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

NeurIPS 2024 AFM Workshop

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

TACL 2024

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Retrieval of Soft Prompt Enhances Zero-shot Task Generalization

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

EMNLP 2023 Findings

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

ACL 2023

Gradient Ascent Post-training Enhances Language Model Generalization

Dongkeun Yoon*, Joel Jang*, Sungdong Kim, Minjoon Seo

ACL 2023 (short)

Prompt Injection: Parameterization of Fixed Inputs

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

ACL 2023 Findings

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

Guess the Instruction! Making Language Models Stronger Zero-Shot Learners

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

ICLR 2023

2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang*, Seonghyeon Ye*, Minjoon Seo

NeurIPS 2022 Workshop on Transfer Learning for NLP (TL4NLP)

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

EMNLP 2022

Towards Continual Knowledge Learning of Language Models

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

ICLR 2022

2021

Sequential Targeting: A Continual Learning Approach for Data Imbalance in Text Classification

Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

Expert Systems with Applications (2021)

(* indicates equal contribution)

Vitæ

Full CV in PDF.

  • Nvidia GEAR Lab Oct. 2024 - Present
    Research Intern (Mentors: Scott Reed and Jim Fan)
    Scalable methods for creating foundation models for robotics
  • Nvidia Robotics Lab Mar. 2024 - Sep. 2024
    Research Intern (Mentors: Ajay Mandlekar and Dieter Fox)
    Scalable methods for creating foundation models for robotics
  • Allen Institute for AI (AI2) Jun. 2023 - Jan. 2024
    Research Intern (Mentors: Prithviraj Ammanabrolu, Yuchen Lin, Yejin Choi)
    Personalized RLHF
  • University of Washington Sep. 2023 - Present
    Ph.D. Student (Advisors: Luke Zettlemoyer and Dieter Fox)
    Foundation models for generalist AI robots
  • LG AI Research Jul. 2022 - May 2023
    Research Intern (Mentors: Moontae Lee and Lajanugen Logeswaran)
    Worked on developing LMs that can generalize to novel tasks
  • KAIST Language & Knowledge Lab Mar. 2021 - Aug. 2023
    M.S. Student (Advisor: Minjoon Seo)
    Continual adaptation of LLMs
  • Kakao Brain Dec. 2020 - Feb. 2021
    Research Intern (Mentor: Ildoo Kim)
    Worked on large-scale representation learning
  • NAVER Jul. 2020 - Sep. 2020
    Software Engineer Intern
    Worked on continual learning for hate speech detection
  • KIST Europe Aug. 2019 - Jan. 2020
    Research Intern (Mentor: Sungho Suh)
    Worked on machine prognostics using ML
  • Korea University Mar. 2017 - Feb. 2021
    B.S. in Computer Science