Nan Zhang (Chinese: 张楠) is a Ph.D. student in the College of Information Sciences and Technology at The Pennsylvania State University. He has broad interests in natural language processing (NLP), machine learning, and efficient AI. He is advised by Dr. Rui Zhang and Dr. Prasenjit Mitra.
Previously, he interned at Salesforce AI Research and NEC Labs America. Before joining Penn State, he received his bachelor’s degree from Worcester Polytechnic Institute (WPI) and his master’s degree from Georgia Institute of Technology.
He works on developing generalizable and efficient approaches for both learning algorithms and ML systems.
[Oct. 2025] Our benchmarking and interpretation study of compressed large reasoning models (LRMs) is online, entitled When Reasoning Meets Compression: Understanding the Effects of LLMs Compression on Large Reasoning Models. We analyze quantized, distilled, and pruned LRMs to decode the effects of compression!
[Sept. 2025] Our paper on creating training data for Process Reward Models (PRMs) is online, entitled Generalizable Process Reward Models via Formally Verified Training Data. Feel free to check it out!
[Apr. 2025] Excited that SiReRAG is accepted by ICLR 2025! My collaborators are presenting it in person during Poster Session 1 (#61 at Hall 3 + Hall 2B). I am happy to discuss research on RAG, LLM compression, and large reasoning models virtually.
[Apr. 2025] Our benchmarking paper on compressed large reasoning models (LRMs) is online, entitled When Reasoning Meets Compression: Benchmarking Compressed Large Reasoning Models on Complex Reasoning Tasks. We provide a detailed analysis of quantized, distilled, and pruned reasoning models!
[Dec. 2024] Our RAG indexing paper on similar and related corpus content is online, entitled SiReRAG: Indexing Similar and Related Information for Multihop Reasoning. Our approach consistently outperforms existing indexing methods on multihop datasets!