CompilerGPT: Leveraging Large Language Models for Analyzing and Acting on Compiler Optimization Reports
From: | John R Levine <johnl@taugh.com> |
Newsgroups: | comp.compilers |
Date: | Mon, 09 Jun 2025 09:40:06 +0200 |
Organization: | Compilers Central |
Keywords: | C++, optimize |
Posted-Date: | 09 Jun 2025 03:42:31 EDT |
The authors told LLMs to read C++ compiler optimization reports and make
the code better.
https://arxiv.org/abs/2506.06227
Abstract: Current compiler optimization reports often present complex,
technical information that is difficult for programmers to interpret and
act upon effectively. This paper assesses the capability of large language
models (LLMs) to understand compiler optimization reports and automatically
rewrite the code accordingly.
To this end, the paper introduces CompilerGPT, a novel framework that
automates the interaction between compilers, LLMs, and a user-defined test
and evaluation harness. CompilerGPT's workflow runs several iterations and
reports on the obtained results.
Experiments with two leading LLMs (GPT-4o and Claude Sonnet),
optimization reports from two compilers (Clang and GCC), and five
benchmark codes demonstrate the potential of this approach. Speedups of up
to 6.5x were obtained, though not consistently in every test. This method
holds promise for improving compiler usability and streamlining the
software optimization process.
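The iterative workflow the abstract describes (compile, feed the optimization report to an LLM, run the user-defined test harness, keep only faster correct rewrites) can be sketched roughly as follows. All names and stub components here are illustrative assumptions, not the paper's actual code or API:

```python
def optimize_loop(source, get_report, rewrite, benchmark, max_iters=5):
    """CompilerGPT-style loop (a sketch, not the paper's implementation):
    feed the compiler's optimization report to an LLM rewriter, and keep a
    candidate only if the test harness accepts it and it runs faster."""
    best, best_time = source, benchmark(source)
    for _ in range(max_iters):
        report = get_report(best)            # e.g. clang -Rpass remarks
        candidate = rewrite(report, best)    # LLM proposes a rewrite
        try:
            t = benchmark(candidate)         # user-defined test/eval harness
        except AssertionError:
            continue                         # reject incorrect rewrites
        if t < best_time:
            best, best_time = candidate, t
    return best, best_time

# Toy stand-ins so the sketch runs end to end (assumptions, not real tools):
def get_report(src):
    return "remark: loop not vectorized" if "slow" in src else ""

def rewrite(report, src):
    # A real system would prompt GPT-4o / Claude Sonnet with the report here.
    return src.replace("slow", "fast") if report else src

def benchmark(src):
    assert "broken" not in src              # stand-in correctness check
    return 1.0 if "slow" in src else 0.5    # stand-in timing, lower is better

best, best_time = optimize_loop("slow_kernel()", get_report, rewrite, benchmark)
print(best, best_time)
```

The loop accepts a rewrite only when it both passes the harness and improves the timing, which is why inconsistent per-test speedups (as the abstract notes) simply leave the original code in place.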
Regards,
John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail. https://jl.ly