From: Charles Fiterman <cef@geodesic.com>
Newsgroups: comp.compilers,comp.lang.asm.x86
Date: 13 Jul 1997 11:35:05 -0400
Organization: Geodesic Systems
References: 97-06-071 97-07-023 97-07-035 97-07-047
Keywords: assembler, performance
Randy Hyde <rhyde@cs.ucr.edu> wrote:
>While my memory concerning the analysis may be off, NP-Completeness
>suggests that the best solution known today runs in O(2**n) time (pick
>your n). SOs are useless for all but the most trivial of programs. I
>did a simple computation about five years ago and figured out that a
>typical "end of quarter" assignment I give in UCR's intro to
>programming course would take longer than the known lifetime of the
>Universe to optimize. Even the "Meaning of Life" took less time than
>this to compute (at least, if you believe Douglas Adams :-).
>
>please leave the SO out of the assembly vs. compilers arguments.
>Unless we prove P=NP, SOs aren't even in the running.
First, SO technology hasn't stopped at undirected search, so O(2**n)
may not be a permanent state of the world.
Second, SO technology has already been used to improve compiler
instruction tables, which involve a lot of trivial programs.
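The kind of search an SO performs can be sketched in a few lines. This is a toy model, not any real superoptimizer: the instruction set, the target function, and the accumulator machine below are all invented for illustration, but the shape of the search (enumerate all sequences of length 1, then 2, and so on, and test each against a specification) is the essence of the technique, and it shows directly why the cost grows exponentially in sequence length.

```python
# Toy superoptimizer sketch: exhaustive search over a made-up
# single-accumulator ISA for the shortest instruction sequence
# that matches a target function on all test inputs.
from itertools import product

# Each instruction maps (accumulator, original input) -> new accumulator.
OPS = {
    "neg":  lambda a, x: -a,
    "add":  lambda a, x: a + x,     # add the original input
    "sub":  lambda a, x: a - x,
    "shl1": lambda a, x: a << 1,    # shift left by one
}

def run(program, x):
    """Execute a sequence of opcodes starting with the input in the accumulator."""
    acc = x
    for op in program:
        acc = OPS[op](acc, x)
    return acc

def superoptimize(target, tests, max_len=4):
    """Return the shortest opcode sequence agreeing with `target` on all
    test inputs.  The inner loop visits |OPS|**length candidates, which
    is where the exponential blowup discussed above comes from."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in tests):
                return program
    return None

# Search for a sequence computing 3*x; a 2-instruction answer exists
# (e.g. doubling then adding the input, or adding the input twice).
print(superoptimize(lambda x: 3 * x, tests=range(-5, 6)))
```

Even this toy version makes the scale argument concrete: with 4 opcodes, checking all sequences up to length 20 already means roughly 4**20 candidates, so exhaustive search only pays off on very small, very hot fragments.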
One common job for assembler programmers is to optimize some tiny part
of a program that is very hot. The scan loop of a conservative
collector may be the hottest section of code in the world, used by
millions of programs for a substantial part of their total running
time. This is also at a good scale for an SO.
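To see why that loop is such an attractive SO target, here is a simplified model of it (in Python for readability; the real thing is a handful of machine instructions). The word representation and bounds check are schematic assumptions, not any particular collector's code: a conservative scan just treats every machine word in a region as a potential pointer and checks it against the heap bounds, so the whole job is a tiny, branchy loop executed enormous numbers of times.

```python
# Schematic conservative scan loop: every word is a candidate pointer.
# Words and bounds are plain integers standing in for machine addresses.
def scan(words, heap_lo, heap_hi, mark):
    for w in words:                   # each word in a stack/register/heap region
        if heap_lo <= w < heap_hi:    # does it look like it points into the heap?
            mark(w)                   # conservatively treat it as a live pointer

# Usage: collect the candidate pointers found in a region.
marked = []
scan([0x10, 0x9999, 0x20], heap_lo=0x10, heap_hi=0x100, mark=marked.append)
```

Because the loop body is only a compare-and-branch or two, it sits exactly in the size range where a superoptimizer's exhaustive search is feasible, and any cycle saved is multiplied across every word scanned by every program using the collector.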
An SO could run on the problem for a week or two or even a year to
good effect. I've done similar things with perfect hashing, which is
another case where a compiler can build better code than a person ever
would. I let one spend two days building an instruction table for an
assembler.