Re: CACM article (Feb 2009): "Compiler research: the next 50 years"

Max Hailperin <max@gustavus.edu>
Sat, 14 Feb 2009 07:28:54 -0600


Related articles
CACM article (Feb 2009): "Compiler research: the next 50 years" idbaxter@semdesigns.com (Ira Baxter) (2009-02-10)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" max@gustavus.edu (Max Hailperin) (2009-02-11)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" joevans@gmail.com (Jason Evans) (2009-02-12)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" max@gustavus.edu (Max Hailperin) (2009-02-14)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" idbaxter@semdesigns.com (Ira Baxter) (2009-02-14)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" cfc@shell01.TheWorld.com (Chris F Clark) (2009-02-14)
Re: CACM article (Feb 2009): "Compiler research: the next 50 years" gneuner2@comcast.net (George Neuner) (2009-02-14)
From: Max Hailperin <max@gustavus.edu>
Newsgroups: comp.compilers
Date: Sat, 14 Feb 2009 07:28:54 -0600
Organization: Compilers Central
References: 09-02-027 09-02-034 09-02-045
Keywords: journal, practice
Posted-Date: 14 Feb 2009 16:47:33 EST

Jason Evans <joevans@gmail.com> writes:
[...]
> As far as the compiler research article is concerned, I have mixed
> feelings about its message. The basic tenet is that we should be
> working together on large systems, and stop wasting so much time
> implementing the same infrastructure over and over. The problem with
> that is similar to the problem with object-oriented programming and
> class reuse: ...


Actually, the argument in favor of shared infrastructure is not just
about avoiding wasted time, and so it is more solid than the CACM
article made clear.


In any kind of systems research, where results are typically of the
form "incorporating technique X boosts performance by Y percent,"
there is great value in using a common base system as the starting
point, and a common benchmark suite and methodology for the
measurement, even if doing so made the research more labor intensive
rather than less. The reason is that it makes the research results
comparable, replicable, and composable.


Comparability and replicability are relatively obvious, so let me
comment on composability. Without a common framework, we end up with
a ton of papers that each show a 10% performance improvement, but
only rarely does anyone manage to show a 21% improvement (1.1
squared) by using two of the techniques together. And we have no
idea whether that is because the optimization techniques inherently
draw on the same root source of optimizability, and hence will never
usefully combine, or because no researcher has ever managed to bring
the two techniques together -- or at least, to bring them together
with each being a faithful replication of its originally published
version. By contrast, with a common infrastructure, we can have
papers published showing that incorporating technique Z provides a
further performance boost of U percent when added on top of
technique X.
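As a quick arithmetic aside (my own hypothetical sketch, not from the
original argument): the "21% (1.1 squared)" figure comes from the fact
that independent speedups compose multiplicatively, not additively.
Overlapping techniques compose less, which is exactly what a shared
infrastructure would let us measure:

```python
# Hypothetical sketch: why two independent 10% speedups compose to 21%.
# A 10% improvement means new_time = old_time / 1.10.
base = 100.0  # arbitrary baseline running time (units don't matter)

# Technique X alone, then X and Z together, assuming full independence.
after_x = base / 1.10
after_xz = base / (1.10 * 1.10)

# Combined speedup factor: 1.1 * 1.1 = 1.21, i.e. a 21% improvement.
speedup_xz = base / after_xz
print(f"combined speedup: {speedup_xz:.2f}x")

# If X and Z draw on the same root source of optimizability, the
# combined result can be much closer to 1.10x than to 1.21x -- and
# without a common infrastructure we cannot tell which case we are in.
```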


-max


