Yukun Huang

I am a third-year CS PhD student at Duke University, advised by Prof. Bhuwan Dhingra. I develop methods to improve the factuality, reliability, and efficiency of large language models (LLMs), with the goal of making them more trustworthy and practical for real-world use.

Previously, I received my Master’s degree from Columbia University, where I was fortunate to be advised by Prof. Zhou Yu and Prof. Kathleen McKeown. I obtained my Bachelor’s degree from Tsinghua University. I have also spent time doing internships at Amazon AGI and ByteDance.

Research Interests

My research focuses on advancing LLM agents, spanning several areas of AI/NLP/ML:

  • Factuality and Knowledge: Improving how LLM Agents search, ground, and learn knowledge to produce accurate and verifiable answers
  • Trustworthiness: Calibrating LLMs’ confidence so it aligns with the boundaries of their knowledge
  • Efficient Inference: Developing algorithms for faster, resource-efficient inference without compromising performance

Recent News

  • Jan 2026: Two papers accepted to ICLR 2026
  • Aug 2025: Completed my internship at Amazon AGI
  • May 2025: Two papers accepted to ACL 2025

Recent Publications

DeepFact: Co-Evolving Benchmarks and Agents for Deep Research Factuality
Yukun Huang, Leonardo Ribeiro, Momchil Hardalov, Bhuwan Dhingra, Markus Dreyer, Venkatesh Saligrama
Under Review, 2026
Paper Code Bib

Cite Pretrain: Retrieval-Free Knowledge Attribution for Large Language Models
Yukun Huang, Sanxing Chen, Jian Pei, Manzil Zaheer, Bhuwan Dhingra
ICLR, 2026
Paper Code Bib

When Greedy Wins: Emergent Exploitation Bias in Meta-Bandit LLM Training
Sanxing Chen, Xiaoyin Chen, Yukun Huang, Roy Xie, Bhuwan Dhingra
ICLR, 2026
Paper Code Bib

To Trust or Not to Trust? Enhancing Large Language Models’ Situated Faithfulness to External Contexts
Yukun Huang, Sanxing Chen, Hongyi Cai, Bhuwan Dhingra
ICLR Spotlight, 2025
Paper Code Bib

Real-time Factuality Assessment from Adversarial Feedback
Sanxing Chen, Yukun Huang, Bhuwan Dhingra
ACL, 2025
Paper Code Bib

Fuzzy Speculative Decoding for a Tunable Accuracy-Runtime Tradeoff
Maximilian Holsman, Yukun Huang, Bhuwan Dhingra
ACL Findings, 2025
Paper Code Bib

Calibrating Long-form Generations From Large Language Models
Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, Bhuwan Dhingra
EMNLP Findings, 2024
Paper Code Bib

Atomic Self-Consistency for Better Long Form Generations
Raghuveer Thirukovalluru, Yukun Huang, Bhuwan Dhingra
EMNLP, 2024
Paper Code Bib

Services

Reviewer: ICLR (Notable Reviewer 2025), NeurIPS (Top Reviewer 2025), ACL, EMNLP, ARR

Teaching

  • Teaching Assistant, Large Language Models (CS 590) — Duke University, Fall 2025
  • Teaching Assistant, Probabilistic Machine Learning (STA 561) — Duke University, Spring 2025
  • Teaching Assistant, Natural Language Processing (COMS 4701) — Columbia University, Summer 2022 & Spring 2023
  • Teaching Assistant, Analysis of Algorithms (CSOR 4231) — Columbia University, Spring 2022