Hao Lin @ Aliyun

Hao Lin

R&D Engineer,
Platform of Artificial Intelligence (PAI),
Aliyun Computing Co., Ltd.,
Hangzhou, Zhejiang, China.

[ Biography | Education | Work Experience | Publications and Preprints | Awards | Teaching Assistant | Correspondence ]

Biography

Currently, I am an R&D engineer on the Platform of Artificial Intelligence (PAI) team at Aliyun Computing Co., Ltd. My research interests include AI infrastructure, especially infrastructure for alignment tasks.

Prior to this position, I received my Master's degree from the Department of Computer Science and Technology at Nanjing University in June 2024. My supervisor was Professor Wu-Jun Li.

Before that, I received my B.Sc. degree from the same department in June 2021, and in the same year I was admitted to the Master's program without an entrance examination.

NOTICE: I am always on the job market; please feel free to contact me via hao.lin.msc{AT}gmail.com.

Education

Work Experience

Publications and Preprints

UniAP 
  • Hao Lin*, Ke Wu*, Jie Li*, Jun Li, and Wu-Jun Li†: UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Award Candidate. [PDF (To be released)]

  • We propose UniAP, an automatic parallelism framework that uses mixed integer quadratic programming (MIQP) to jointly optimize data parallelism (DP), tensor parallelism (TP), fully sharded data parallelism (FSDP), and pipeline parallelism (PP) for efficient large-model training. Experimental results show that UniAP outperforms state-of-the-art methods by up to 3.80x in throughput and reduces strategy optimization time by up to 107x across five Transformer-based models. (A toy sketch of the idea follows below.)
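To give a flavor of the optimization problem, here is a toy sketch in Python. It is not the paper's actual formulation: each layer picks one parallel strategy, the per-layer execution cost is the linear term, and the resharding cost between adjacent layers' choices is the quadratic interaction term that makes the program an MIQP. All strategy names and cost numbers below are hypothetical, and brute-force enumeration stands in for a real MIQP solver.

    # Toy sketch only: UniAP formulates a much richer MIQP over DP, TP,
    # FSDP, and PP; this brute-force enumeration merely illustrates why
    # the objective is quadratic in the binary "layer uses strategy" choices.
    from itertools import product

    STRATEGIES = ["DP", "TP", "FSDP"]  # hypothetical per-layer choices

    # Hypothetical costs for illustration.
    exec_cost = {"DP": 4.0, "TP": 3.0, "FSDP": 3.5}       # linear term
    reshard_cost = {(a, b): 0.0 if a == b else 1.5        # quadratic term
                    for a in STRATEGIES for b in STRATEGIES}

    def total_cost(assignment):
        # Linear part: per-layer execution cost.
        cost = sum(exec_cost[s] for s in assignment)
        # Quadratic part: layout-change cost between adjacent layers.
        cost += sum(reshard_cost[a, b]
                    for a, b in zip(assignment, assignment[1:]))
        return cost

    num_layers = 4
    best = min(product(STRATEGIES, repeat=num_layers), key=total_cost)
    print(best, total_cost(best))  # ('TP', 'TP', 'TP', 'TP') 12.0

In this toy instance, staying with one strategy avoids all resharding, so all-TP wins; a real solver must weigh such trade-offs over a far larger strategy space.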

Qwen3 
  • Qwen Team‡: Qwen3 Technical Report. CoRR abs/2505.09388, 2025. [PDF]

  • We present the Qwen3 model series, which includes both dense and Mixture-of-Experts (MoE) architectures. In Qwen3, we integrate thinking mode and non-thinking mode into a unified framework, allowing users to switch between them at inference time (see the usage sketch below). Empirical evaluations demonstrate that Qwen3 achieves state-of-the-art results across diverse benchmarks.
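As a usage illustration, the sketch below toggles between the two modes via the enable_thinking chat-template switch described on the Qwen3 model cards. It assumes the Hugging Face Transformers API; the model name and prompt are placeholders, so consult the official model card for authoritative usage.

    # Minimal sketch, assuming the Hugging Face Transformers API and the
    # enable_thinking switch documented on the Qwen3 model cards.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen3-8B"  # placeholder: any Qwen3 chat checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user",
                 "content": "Briefly explain pipeline parallelism."}]
    # enable_thinking=True emits a reasoning trace before the answer;
    # set it to False for direct, non-thinking responses.
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True,
        enable_thinking=True)

    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:],
                           skip_special_tokens=True))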

Qwen2.5 
  • Qwen Team‡: Qwen2.5 Technical Report. CoRR abs/2412.15115, 2024. [PDF]

  • We introduce Qwen2.5, a comprehensive series of large language models (LLMs). Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks. Additionally, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.

(*: equal contribution. †: corresponding author. ‡: contributor)

Awards

Teaching Assistant

Correspondence

E-mail Addresses

baodong.lh{AT}alibaba-inc.com (Business)
hao.lin.msc{AT}gmail.com (Private)
