End of optimization...


Related articles
End of optimization... mayan@sandbridgetech.com (2003-07-03)
Re: End of optimization... walter@bytecraft.com (Walter Banks) (2003-07-13)
Re: End of optimization... ndalton@lastminute.com (Niall Dalton) (2003-07-13)
RE: End of optimization... Barak.Zalstein@ParthusCeva.com (Barak Zalstein) (2003-07-17)
Re: End of optimization... walter@bytecraft.com (Walter Banks) (2003-07-17)
Re: End of optimization... mayan@sandbridgetech.com (2003-07-21)
[9 later articles]

From: mayan@sandbridgetech.com
Newsgroups: comp.compilers
Date: 3 Jul 2003 23:27:09 -0400
Organization: Posted via Supernews, http://www.supernews.com
Keywords: optimize, practice
Posted-Date: 03 Jul 2003 23:27:09 EDT

Someone asked me the following question. I answered him, but after
talking to colleagues I realized that it might be of more general
interest.


The question was:
> Something I wanted to ask your advice on is breaking into working on
> compilers. At least recently, the job market has been for more
> senior people - with 5+ years of relevant experience.




Here's the problem - people need new compilers if:
- they have a new processor with new compiler-visible architectural
features, OR
- they have a new language with novel problems.
(OK, JIT compilation is somewhat of an exception; we can lump it into
the new-language category.)


There isn't much in the way of new languages being developed.


New architectures, too, are somewhat sparse.
- At the HPC end, there isn't enough work going on to make it
worthwhile; also, a lot of the compiler development has already
happened.
- At the workstation end, there are three dominant themes - x86, RISC,
and Itanium. All three are pretty much played out in terms of new
compilers; most of the work will be incremental. There may be some work
in moving SSE/Altivec support from HPC compilers down to workstations,
but it's not innovative.
- At the embedded microprocessor end, there is ARM and a handful of
other processors. Either the chips are so constrained that they are
programmed in assembly (e.g. the PIC), or decent compilers already
exist.


The only domains for which decent compilers don't exist are
application-specific processors, e.g. graphics processors, network
processors, and digital signal processors. However, in those areas it's
somewhat difficult to justify a new compiler. The arguments run along
these lines:
- we're going to ship millions of these
- we're going to write 1-10 small programs
- wouldn't it be cheaper to write the stuff in assembler than to first
write a compiler, and then write the code in C?
- if we have to write a compiler, why not port gcc [or whatever], and
then rewrite the performance-critical stuff in assembler?


If you have a radical new architecture _AND_ you've decided to write a
compiler for it, clearly you're going to need people who understand how
to develop new optimization techniques AND who know how to implement
them - i.e. senior people.


> Do you prefer to see further academic
> study - say a PhD in a compiler related subject - or experience
> in compiler QA or similar? Or perhaps, if I devoted some time
> to open source compiler hacking?


A Ph.D. would be a step in the right direction, but make sure it's a
Ph.D. with LOTS of implementation experience.


Open source is a possibility, but you'd have to do a major module. Part
of the problem there is that gcc's intermediate form is (was?) not
particularly amenable to a large class of optimizations. I don't know
about the SGI Pro64 compiler (or whatever they are calling it these
days), but given its vintage, I would be surprised if it was any
better.


-------------


According to a colleague of mine, this is borne out somewhat by trends
at recent conferences - not much optimization work is being published.
Instead, papers seem to be about areas such as program verification.


Another thing that struck me is that a lot of the advances that did
happen in the past 10 years are not necessarily in the optimizations
themselves, but in the structures for reasoning about them. Consider,
for instance, the Omega test. It doesn't really expose that much more
independence than the simple GCD and Banerjee tests, but it is a nice
approach for reasoning about memory dependences.


Also, the faster CPUs and larger memories now available have allowed
techniques that were previously considered egregiously resource-hungry
to become acceptable and to move from research labs into mainstream
compilers.


