Machine learning to schedule optimization passes

From: John R Levine <johnl@taugh.com>
Newsgroups: comp.compilers
Date: Thu, 29 Aug 2024 12:21:03 -0400
Organization: Compilers Central
Keywords: optimize, paper
Posted-Date: 29 Aug 2024 14:34:14 EDT

This paper used machine learning to select and order LLVM optimization
passes. Apparently it worked pretty well.
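
For context, "scheduling passes" here means choosing which of LLVM's
transform passes to run and in what order, framed as a sequential
decision problem. Here's a minimal sketch of that framing using the
CompilerGym toolkit the abstract mentions; the benchmark, observation
space, and reward space below are illustrative choices, not necessarily
the paper's setup:

    # Pass ordering as an RL episode: each action applies one LLVM pass.
    # pip install compiler_gym; spaces and benchmark are illustrative.
    import compiler_gym

    env = compiler_gym.make(
        "llvm-v0",
        observation_space="Autophase",        # static IR feature vector
        reward_space="IrInstructionCountOz",  # improvement relative to -Oz
    )
    env.reset(benchmark="cbench-v1/qsort")

    total = 0.0
    for _ in range(20):                       # a random 20-pass schedule
        obs, reward, done, info = env.step(env.action_space.sample())
        total += reward
        if done:
            break
    print(f"cumulative reward vs. -Oz: {total:.3f}")
    env.close()

A learned policy replaces action_space.sample(); the hard part, per the
abstract, is making that policy generalize to programs it wasn't
trained on.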

CompilerDream: Learning a Compiler World Model for General Code Optimization


Effective code optimization in compilers is crucial for computer and
software engineering. The success of these optimizations primarily depends
on the selection and ordering of the optimization passes applied to the
code. While most compilers rely on a fixed sequence of optimization
passes, current methods to find the optimal sequence either employ
impractically slow search algorithms or learning methods that struggle to
generalize to code unseen during training. We introduce CompilerDream, a
model-based reinforcement learning approach to general code optimization.
CompilerDream comprises a compiler world model that accurately simulates
the intrinsic properties of optimization passes and an agent trained on
this model to produce effective optimization strategies. By training on a
large-scale program dataset, CompilerDream is equipped to serve as a
general code optimizer across various application scenarios and
source-code languages. Our extensive experiments first highlight
CompilerDream's strong optimization capabilities for autotuning, where it
leads the CompilerGym leaderboard. More importantly, the zero-shot
generalization ability of the large-scale trained compiler world model
and agent excels across diverse datasets, surpassing LLVM's built-in
optimizations and other state-of-the-art methods in both value
prediction and end-to-end code optimization.


Full paper at: https://arxiv.org/abs/2404.16077
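
The "compiler world model," as far as I can tell, is a learned dynamics
model: given a program's IR features and a candidate pass, predict the
next features and the reward, so the agent can be trained on imagined
rollouts rather than expensive real compilations. A rough sketch of
that idea in PyTorch (sizes, names, and architecture are my guesses,
not the paper's):

    # A toy "world model" for pass scheduling: predict
    # (next_features, reward) from (features, pass). Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_FEATURES, N_PASSES = 56, 124  # e.g. Autophase features, LLVM pass set

    class WorldModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_FEATURES + N_PASSES, 256), nn.ReLU(),
                nn.Linear(256, N_FEATURES + 1),  # next features + reward
            )

        def forward(self, feats, action):
            a = F.one_hot(action, N_PASSES).float()
            out = self.net(torch.cat([feats, a], dim=-1))
            return out[..., :-1], out[..., -1]  # next feats, reward

    model = WorldModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(feats, action, next_feats, reward):
        # One supervised update on a transition logged from real compiles.
        pred_next, pred_r = model(feats, action)
        loss = F.mse_loss(pred_next, next_feats) + F.mse_loss(pred_r, reward)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

Once the model is accurate enough, the agent rolls out candidate pass
sequences inside it, scored by predicted reward, and only touches the
real compiler for evaluation, which is what makes this cheaper than
brute-force autotuning.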


Regards,
John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly

