Re: Algorithm Optimization

"Rick C. Hodgin" <rick.c.hodgin@gmail.com>
Tue, 15 Sep 2020 00:42:03 -0400

          From comp.compilers

Newsgroups: comp.compilers
Organization: Liberty Software Foundation
References: 20-09-032 20-09-033
Keywords: optimize
Posted-Date: 15 Sep 2020 22:23:41 EDT
In-Reply-To: 20-09-033

On 9/14/20 11:47 PM, Elijah Stone wrote:
> On Sun, 13 Sep 2020, Rick C. Hodgin wrote:
>> I've been pursuing the idea of what I call algorithm optimization.
>> It's the idea that algorithms coded by individuals may not be optimal,
>> and may require refactoring / re-engineering to be made optimal based on
>> what's trying to be achieved.
>
> I agree with John: generally, it should be the programmer's job to choose
> the right algorithm. Otherwise, the compiler just becomes an algorithm
> library, and--well, if you're going to build an algorithm library, why not
> just expose it directly to your language's users?


The goal is to figure out how to build a compiler that can determine
what's happening and why, and then look for ways to tweak and improve
what it's doing, without resorting to a static algorithm library.


There are certain types of data access, and patterns of access within
them. There are certain types of logic flow.


These things all factor into a finite set of abilities the compiler
would need to know about to fully express an algorithm. Abstract
Syntax Trees and their cousins do this today.


The step I'd like to work on is taking that expression and analyzing
it: breaking it down into a more fundamental layer that can be analyzed
and reorganized without breaking the final output result, and then
reassembling it back into the original expression form with the new
changes applied.


By looking at various sections of code, and by performing flow analysis
with data on what happens in which function, where, and ultimately why,
things could be moved from here to there, shifting the various burdens
around to address the fundamental needs.


A human can do this by studying an algorithm and looking for ways to
improve the code. I'd like to come up with a way for a compiler to do
this as well, albeit more mechanically and less capably at first, but
able to contribute something substantial to the world of optimization.


> The main problem is that algorithms are /heavily/ dependent on data
> structures. So if you want to change an algorithm significantly, you'll
> need to change its data structures, and usually the compiler can't tell if
> external code is going to look at those data structures.
> In the case of your example, though it's hard to tell, I expect that the
> optimal structure would be a freelist, but because 'first' is a global
> variable, modifications to it have to be the same with and without
> optimizations. (You might be able to get around this by making 'first' a
> static variable local to iAlloc_new_sdata; but this approach doesn't
> scale, and it's already putting pressure on the programmer's algorithms,
> which is what you were trying to avoid.)
>
> A secondary problem is compile time: recognizing algorithms is likely to
> be very expensive, which is not a great user experience. (Though llvm/gcc
> do recognize some things, like algorithm for triangle numbers.)


I grew up on Microsoft compilers. They have two default modes: Debug
and Release. Debug mode emits unoptimized code that works, and Release
mode applies various types of optimizations.


I think that approach would work for developers. It doesn't really
matter how long it takes to compile an optimized build once you have
the program working. You could even let it spin for a week if that's
what it took: the version that does X, Y, or Z could be released in its
Debug or Release-1 state (traditional optimizations like we see in GCC
or whatever today), and then bake for that week while it moves into a
Release-2 state.


Release-2 would be compute-intensive, but if it ultimately finds 35%
better performance ... it would be worth it.


--
Rick C. Hodgin

