Ruiwen WANG
About Me
I am a third-year Ph.D. candidate in Computer Science, jointly advised by Prof. Chong Li and Prof. Raja Appuswamy at Sorbonne University and EURECOM, in collaboration with Huawei Paris Research Center. My research sits at the intersection of High-Performance Computing (HPC) and Machine Learning Systems (MLSys), with a focus on large-scale DNN/LLM training and inference.
My current work explores:
- ⚡ Hybrid parallelism for scalable LLMs (DP/TP/PP/EP/OP/SP/VPP, i.e., data, tensor, pipeline, expert, optimizer, sequence, and virtual pipeline parallelism).
- 🧠 Memory and communication optimization to enable efficient use of large GPU/NPU clusters.
- 🛠️ System design for training & inference, targeting higher throughput, lower latency, and better resource utilization.
Publications
- Ruiwen Wang, Philippe Fang, Chong Li, Thibaut Tachon, Raja Appuswamy.
PRISM: Profiling-Free Symbolic Memory-Driven Strategy Planner for Large DNN Model Training.
19th SupercomputingAsia / International Conference on High Performance Computing in the Asia-Pacific Region (SCA/HPC Asia 2026),
January 26–29, 2026, Osaka, Japan.
- Ruiwen Wang, Chong Li, Hongxing Wang, Raja Appuswamy, Yujie Yuan.
ManuMatic: Strategy Injection for Robust Automatic Hybrid Parallelism in Distributed DNN Training.
22nd IFIP International Conference on Network and Parallel Computing (NPC 2025),
November 14–16, 2025, Nha Trang, Vietnam.
- Ruiwen Wang, Chong Li, Thibaut Tachon, Raja Appuswamy, Teng Su.
BMPipe: Bubble-Memory Co-optimization Strategy Planner for Very-Large DNN Training.
27th IEEE International Conference on Cluster Computing (CLUSTER 2025),
September 2–5, 2025, Edinburgh, United Kingdom.
- Ruiwen Wang, Chong Li, Raja Appuswamy, Yujie Yuan.
H2O: Holistic Hyper-Parameter Optimization for Large-Scale Deep Neural Network Training.
31st International European Conference on Parallel and Distributed Computing (Euro-Par 2025),
August 25–29, 2025, Dresden, Germany.
🏆 Best Poster Award.
- Ruiwen Wang, Chong Li, Thibaut Tachon, Raja Appuswamy.
SCOPE: Symbolic Computation-Memory Optimization for Pipeline Efficiency in Ultra-Scale DNN Training.
1st International Workshop on Distributed and Parallel Programming for Extreme-scale AI (DP2E-AI 2025),
June 2–6, 2025, Paris, France.
Experience
- 🔬 Huawei Paris Research Center — Research Engineer (2021–present)
Working on HPC and AI system optimizations for large-scale LLM training and inference.
- 🎓 Joint Ph.D. Program (CIFRE) — Sorbonne University & EURECOM (2023–present)
Doctoral research in HPC/MLSys under academic and industrial supervision.
Education
- Ph.D. in Computer Science — Joint program
Sorbonne University · EURECOM · in collaboration with Huawei Paris Research Center
2023 – 2026 (expected)
- M.Sc. in Computer Science
Sorbonne University, Paris, France
2019 – 2022
- B.Sc. in Computer Science
Paris-Saclay University, Paris, France
2016 – 2019