Chen Shen (申晨)

Email: jason.sc@alibaba-inc.com / zjushenchen@gmail.com


I am a Senior Algorithm Expert at Alibaba Cloud. I received my Ph.D. (2018) and B.S. (2012) degrees, both from Zhejiang University.

My research focuses on LLM post-training technologies, spanning both academic research and industrial deployment. My current interests include:

  • Reasoning & Agent: Enhancing model reasoning and agent capabilities through knowledge distillation and Reinforcement Learning (RL).

  • Model Safety: Building LLM safety guardrails that intercept risks across multiple dimensions, including content violations, prompt injection, jailbreak attacks, and model hallucinations, to ensure end-to-end security across the lifecycle from AIGC to AI Agent operations.

:sparkles: :smile: We are recruiting self-motivated interns with a strong LLM background. Feel free to contact me via email or WeChat.

news

Jan 26, 2026 Three papers are accepted by ICLR 2026, including one ORAL.
Sep 25, 2025 One paper is accepted by NeurIPS 2025.
Sep 04, 2025 Two papers are accepted by EMNLP 2025, including one ORAL.
May 16, 2025 One paper is accepted by ACL 2025.
Jan 23, 2025 Two papers are accepted by ICLR 2025.
Jan 23, 2025 Two papers are accepted by NAACL 2025, including one ORAL.
Sep 26, 2024 One paper is accepted by NeurIPS 2024.
May 17, 2024 One paper is accepted by KDD 2024.
May 02, 2024 One paper is accepted by ICML 2024.

selected publications

(*Corresponding Author, †Project Lead)

  1. arXiv
    Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning
    Shaotian Yan, Kaiyuan Liu, Chen Shen*, Bing Wang, Sinan Fan, Jun Zhang, Yue Wu, and 2 more authors
    arXiv preprint arXiv:2601.09088, 2026
  2. ICLR
    Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation
    Kaiyuan Liu, Shaotian Yan, Rui Miao, Bing Wang, Chen Shen, Jun Zhang, and Jieping Ye
In ICLR 2026, arXiv preprint arXiv:2512.20908, 2026
  3. ICLR Oral
    Hallucination Begins Where Saliency Drops
    Xiaofeng Zhang, Yuanchao Zhu, Chaochen Gu, Xiaosong Yuan, Qiyan Zhao, Jiawei Cao, Feilong Tang, and 4 more authors
In ICLR 2026 (Oral), arXiv preprint arXiv:2601.20279, 2026
  4. ICLR
    Differential Fine-Tuning Large Language Models Towards Better Diverse Reasoning Abilities
    Xiaosong Yuan, Chen Shen, Shaotian Yan, Kaiyuan Liu, Xiaofeng Zhang, Liang Xie, Wenxiao Wang, and 3 more authors
In ICLR 2026, 2026
  5. ICLR
    Don’t Take Things Out of Context: Attention Intervention for Enhancing Chain-of-Thought Reasoning in Large Language Models
    Shaotian Yan, Chen Shen, Wenxiao Wang, Liang Xie, Junjie Liu, and Jieping Ye
    In ICLR 2025, The Thirteenth International Conference on Learning Representations, 2025
  6. ICLR
    Improving Complex Reasoning with Dynamic Prompt Corruption: A Soft Prompt Optimization Approach
    Sinan Fan, Liang Xie, Chen Shen*, Ge Teng, Xiaosong Yuan, Xiaofeng Zhang, Chenxi Huang, and 3 more authors
    In ICLR 2025, The Thirteenth International Conference on Learning Representations, 2025
  7. NeurIPS
    Instance-adaptive Zero-shot Chain-of-Thought Prompting
    Xiaosong Yuan, Chen Shen*, Shaotian Yan, Xiaofeng Zhang, Liang Xie, Wenxiao Wang, Renchu Guan, and 2 more authors
    In Advances in Neural Information Processing Systems, 2024
