Hello 👋, here is a person who stumbled into the field of AI by accident and still hasn't left. He graduated with a bachelor's degree from the Gaoling School of Artificial Intelligence at Renmin University of China.
He is passionate about hackathons and has secured significant prizes in various competitions organized by prominent Chinese internet companies. Nowadays, he is more involved as a hackathon organizer.
His research interests lie primarily in Natural Language Processing (NLP), with a focus on LLM test-time scaling through approaches including reasoning frameworks, reasoning models (e.g., o1 and R1), and agentic workflow optimization.
He is still seeking PhD opportunities. It’s a long journey, but he never lacks the perseverance to grow through adversity.
🔥 News
- 2025.03.02: 🔥🔥 AoT has ignited widespread discussions on X (380K+ views)! Take a look at the post.
- 2025.02.11: 🥳🥳 AFlow is accepted by ICLR 2025 as an Oral!
- 2024.06.13: 🎉🎉 My team got the third place in the Alibaba 2024 Global Mathematics Competition AI Challenge! 🥉 ($2000 bonus)
📝 Publications

[ICLR 2025 Oral (1.8%)] AFlow: Automating Agentic Workflow Generation [paper][code][report] (机器之心)
Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xionghui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, Bang Liu, Yuyu Luo, Chenglin Wu
We introduce AFlow, an automated framework that reformulates workflow optimization as a search problem over code-represented workflows, using Monte Carlo Tree Search to efficiently explore and refine workflows through code modification and execution feedback. With this approach, AFlow outperforms state-of-the-art baselines across multiple benchmarks, while also enabling smaller models to outperform larger ones at a fraction of the cost.
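To make the search idea concrete, here is a toy sketch of the core loop: a workflow is represented as modifiable "code" (an ordered list of operators), and candidates are refined through modification and execution feedback. Everything here is invented for illustration (the operators, tasks, and scoring rule are all hypothetical), and the full MCTS machinery is replaced by a simpler mutate-and-evaluate loop; AFlow itself searches over real code-represented LLM workflows with Monte Carlo Tree Search.

```python
import random

# Toy "workflow": an ordered list of operator names. The operator set,
# task format, and scoring rule below are all made up for illustration.
OPERATORS = ["generate", "review", "revise", "ensemble"]

def execute(workflow, task):
    # Pretend execution feedback: credit for covering the operators the
    # task needs, minus a small cost per step (longer workflows cost more).
    needed = set(task)
    credit = float(len(needed & set(workflow)))
    return credit - 0.1 * len(workflow)

def mutate(workflow):
    # Modification step: edit the workflow's "code" by adding, removing,
    # or swapping a single operator.
    w = list(workflow)
    action = random.choice(["add", "remove", "swap"])
    if action == "add" or not w:
        w.insert(random.randrange(len(w) + 1), random.choice(OPERATORS))
    elif action == "remove":
        w.pop(random.randrange(len(w)))
    else:
        w[random.randrange(len(w))] = random.choice(OPERATORS)
    return w

def search(task, iterations=200, seed=0):
    # Greedy stand-in for tree search: keep a candidate only if execution
    # feedback says it scores better than the current best.
    random.seed(seed)
    best = ["generate"]
    best_score = execute(best, task)
    for _ in range(iterations):
        candidate = mutate(best)
        score = execute(candidate, task)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

The point of the sketch is the feedback loop: because workflows are code, "modify" and "execute" are both mechanical, which is what makes the optimization searchable at all.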

[ARXIV] Atom of Thoughts for Markov LLM Test-Time Scaling [paper][code][post][report] (机器之心)
Fengwei Teng, Zhaoyang Yu, Quan Shi, Jiayi Zhang, Chenglin Wu, Yuyu Luo
We introduce Atom of Thoughts (AoT), a novel reasoning framework that transforms complex reasoning processes into a Markov-style sequence of atomic questions. By implementing a two-phase transition mechanism of decomposition and contraction, AoT eliminates the need to maintain historical dependencies during reasoning, allowing models to focus computational resources on the current question state. Experiments across multiple benchmarks demonstrate AoT’s effectiveness both as a standalone framework and as a plug-in enhancement for existing test-time scaling methods.
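The Markov property can be illustrated with a toy stand-in: below, a "question" is a nested arithmetic expression rather than natural language (an assumption made purely for this sketch). Each transition answers the atomic subquestions (decomposition) and substitutes their answers back to form a new, smaller expression (contraction). The next state is self-contained, so no history of earlier steps needs to be carried forward.

```python
# Toy Markov-style "question" state: a nested arithmetic expression such as
# ("+", ("*", 2, 3), ("-", 10, 4)). This domain is a stand-in for AoT's
# natural-language questions, chosen only to make the state transition runnable.

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def is_atomic(node):
    # An atomic question: an operation whose operands are already known numbers.
    return isinstance(node, tuple) and all(
        isinstance(c, (int, float)) for c in node[1:]
    )

def transition(state):
    # One decomposition + contraction step: answer every atomic subquestion,
    # then substitute the answers to produce the next (smaller) state.
    if isinstance(state, (int, float)):
        return state
    if is_atomic(state):
        op, a, b = state
        return OPS[op](a, b)
    op, a, b = state
    return (op, transition(a), transition(b))

def solve(state):
    # Markov iteration: only the current state is kept between steps.
    while not isinstance(state, (int, float)):
        state = transition(state)
    return state
```

Note how `solve` never inspects previous states: each contraction yields a standalone question, which is the property that lets computation focus on the current state alone.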
🎖 Honors and Awards
- 2024.06 Alibaba Global Mathematics Competition AI Challenge - Third Place Award🥉 (3rd out of 563 teams) ($2000)
[code]
- 2023.12 Baidu & FounderPark AGI Hackathon - Second Place Award🥈 (¥10000)
[code]
- 2023.05 The Mathematical Contest in Modeling (MCM) - Meritorious Award [pdf]
- 2022.12 The Chinese Mathematics Competitions - Second Prize Award
📖 Education
- 2020.09 - 2024.06 B.Eng. in Artificial Intelligence, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
- Graduation thesis recommendation
💬 Invited Talks
I have given two talks in Chinese about the Alibaba Global Mathematics Competition AI Challenge. I will add the replay video links here as soon as they become available.
📅 Internships
- 2023.09 - 2024.01
Kwai Technology
- Research Focus: LLM-based Agents; Advanced Data Analysis
- 2023.05 - 2023.07
Deep Space Symphony
- Research Focus: Music-Driven Motion Diffusion; Controllable Generation