Subject: Re: Optimizing Compilers project ideas
From: email@example.com (Robert Firth)
Organization: Software Engineering Institute, Pittsburgh, PA
Date: 24 Oct 91 15:47:14 GMT
In article 91-10-092 firstname.lastname@example.org (Jih-Cheng Liu) writes:
[potential research areas]
> 1. Continue to optimize loops.
> 2. Investigate superscalar optimization.
> 3. Parallelize loops for MPP machines.
Those seem like good areas. In addition, you could look at the way these
optimisations interact. For example, on a uniprocessor it is a good idea
to move loop-invariant expressions out of loops, since the code is then
executed once rather than many times. Even there, compilers make mistakes
by doing this to loops that execute zero or one time: hoisting out of a
zero-trip loop evaluates an expression the original program never would,
and hoisting out of a single-trip loop saves nothing.
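The zero-trip hazard can be sketched as follows (my illustration, not from the original post; the function names are made up). The hoisted version evaluates the invariant expression even when the loop body never runs, which can raise an error the original code would not:

```python
def unoptimized(xs, a, b):
    # The expression a // b is loop-invariant but is recomputed
    # on every iteration.
    out = []
    for x in xs:
        out.append(x + a // b)
    return out

def hoisted(xs, a, b):
    # Loop-invariant code motion: compute a // b once, before the loop.
    # Hazard: if xs is empty (a zero-trip loop), a // b now executes
    # when the original never would -- e.g. dividing by zero.
    t = a // b
    out = []
    for x in xs:
        out.append(x + t)
    return out

def hoisted_guarded(xs, a, b):
    # Safe form: guard the hoisted computation so it only runs when
    # the loop body would have run at least once.
    out = []
    if xs:
        t = a // b
        for x in xs:
            out.append(x + t)
    return out
```

All three agree when the loop runs, but only the unoptimized and guarded forms are safe for `hoisted([], 1, 0)`, where the naive hoist divides by zero.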
But on a multiprocessor, if you can fully parallelize the loop, you don't
necessarily want to move invariant expressions out of it, since that could
lengthen the longest thread and so delay the overall computation.
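A toy cost model (my sketch, not from the post) makes the tradeoff concrete. Let c be the invariant expression's cost, b the loop body's cost, k the iterations each processor runs, and s an assumed cost to broadcast the hoisted value to the processors:

```python
def time_hoisted(c, b, s, k):
    # Invariant computed once serially (c), broadcast to the
    # processors (s), then each processor runs k body iterations.
    return c + s + k * b

def time_inloop(c, b, k):
    # Each processor recomputes the invariant on every iteration:
    # redundant work, but it happens in parallel, so the critical
    # path is just k iterations of (body + invariant).
    return k * (b + c)
```

With c=5, b=10, s=3: a fully parallelized loop (k=1) finishes in 15 units with the invariant left in place but 18 units when hoisted, while at k=4 hoisting wins (48 versus 60). The numbers are illustrative only; the point is that the profitable choice depends on the parallelization, which is the interaction Firth describes.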
That points to two places where perhaps more work should be done. First,
better strategies to parallelise code so as to balance the load as well as
possible, i.e. keep all processors crunching between synchronization points.
Secondly, how do we structure an optimising and parallelizing compiler so
that enough information is available to it, and in the right phases or
places, to allow it to resolve such tradeoff issues intelligently?
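The load-balancing question can be made concrete with a toy schedule comparison (my sketch, under assumed per-iteration costs). For a triangular loop nest where iteration i costs roughly i units, a contiguous block assignment overloads the processor that gets the late iterations, while a cyclic (interleaved) assignment spreads the expensive iterations evenly:

```python
def block_makespan(costs, p):
    # Block scheduling: iteration i goes to processor i // chunk,
    # where chunk is the ceiling of n / p.
    n = len(costs)
    chunk = -(-n // p)  # ceiling division
    loads = [0] * p
    for i, c in enumerate(costs):
        loads[i // chunk] += c
    return max(loads)  # makespan = most heavily loaded processor

def cyclic_makespan(costs, p):
    # Cyclic scheduling: iteration i goes to processor i % p.
    loads = [0] * p
    for i, c in enumerate(costs):
        loads[i % p] += c
    return max(loads)
```

For costs 1..8 on two processors, block scheduling yields a makespan of 26 (one processor gets 5+6+7+8) against 20 for cyclic, with 18 being the ideal split; the all-processors-crunching goal is exactly minimizing this makespan between synchronization points.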
Hope that helps