Re: commutative operations in bison for parallel operation

"Anton Ertl" <anton@mips.complang.tuwien.ac.at>
20 Oct 2002 22:47:32 -0400

          From comp.compilers

Related articles
commutative operations in bison for parallel operation surgeonde@yahoo.de (Hannes liesen) (2002-10-13)
Re: commutative operations in bison for parallel operation anton@mips.complang.tuwien.ac.at (Anton Ertl) (2002-10-20)
| List of all articles for this month |
From: "Anton Ertl" <anton@mips.complang.tuwien.ac.at>
Newsgroups: comp.compilers
Date: 20 Oct 2002 22:47:32 -0400
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 02-10-037
Keywords: optimize, yacc
Posted-Date: 20 Oct 2002 22:47:32 EDT

"Hannes liesen" <surgeonde@yahoo.de> writes:
>If I feed the parser with 1+2+3+4+5
>the parser stack shows me (rpn-like)
>
>1 2 + 3 + 4 + 5 +
>
>so each operation waits for the completion of the previous operation.
>We cannot parallelize this. I can remember from priamry scholl, that
>addition is commutative. How can I tell the bison parser generator
>about it, that he arranges the stack like this.
>
>1 2 + 3 4 + +
>
>I could calc
>
>1 2 +
>
>and
>
>3 4 +
>
>in parallel then.


Such a rearrangement makes use of the associative law, not the
commutative law (also, "5 +" is missing).
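

To make the distinction concrete (written in infix form):

    (((1 + 2) + 3) + 4) + 5  =  ((1 + 2) + (3 + 4)) + 5

Only the grouping changes; the operands keep their left-to-right
order, so associativity alone suffices. Commutativity would be a
reordering of operands, e.g. 1 + 2 = 2 + 1.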


There are some publications on using the associative and/or
distributive laws to reduce the tree height of computations and thus
increase parallelism: the treatment by Kuck [kuck78] is
general/theoretical, probably with an intended application in
hardware design, while the papers by Schlansker et al. are about
applications in compilers for instruction-level parallelism.


As for parser generators, the parse trees (or, equivalently, postfix
expressions) produced for such expressions are normally unbalanced,
and there is no obvious way to get balanced parse trees directly; if
you use an explicit parse tree, you could rebalance it during building
or after the expression is complete.
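

As a rough sketch of such a rebalancing (the AST node type and the
helper names below are invented for illustration; they are not part
of bison), the semantic actions could collect the operands of a chain
of "+" into an array and then build a balanced tree by splitting that
array in the middle:

/* Sketch only; no error handling. */
#include <stdlib.h>

typedef struct Node {
    int is_leaf;
    long value;                  /* used when is_leaf is nonzero */
    struct Node *left, *right;   /* used for "+" nodes */
} Node;

static Node *mk_leaf(long v)
{
    Node *n = malloc(sizeof *n);
    n->is_leaf = 1; n->value = v; n->left = n->right = NULL;
    return n;
}

static Node *mk_add(Node *l, Node *r)
{
    Node *n = malloc(sizeof *n);
    n->is_leaf = 0; n->value = 0; n->left = l; n->right = r;
    return n;
}

/* Build a balanced "+" tree over ops[lo..hi) by splitting in the
   middle.  The left-to-right operand order is preserved, so only
   associativity is used. */
static Node *balance(Node **ops, size_t lo, size_t hi)
{
    size_t mid;
    if (hi - lo == 1)
        return ops[lo];
    mid = lo + (hi - lo) / 2;
    return mk_add(balance(ops, lo, mid), balance(ops, mid, hi));
}

Building the leaves with mk_leaf() as the operands are parsed and
calling balance() once the whole "+" chain has been reduced yields
the balanced evaluation order from the example above.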


However, such long expressions are relatively rare in most code, and
balancing is only certain to be beneficial if all operands are
available at the same time. If the operands become available one by
one, an unbalanced evaluation tree may be better; even then, the
default parse tree is usually not the optimal one for such
evaluations.
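

For a rough idea of the potential gain (a standard tree-height bound,
nothing bison-specific): with n operands that are all available at
once,

    height of the default left-deep chain:  n - 1  additions
    height of a balanced tree:              ceiling(log2 n) additions

i.e. 15 versus 4 additions on the critical path for n = 16.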


@Book{kuck78,
    author = "David J. Kuck",
    title = "The Structure of Computers and Computations",
    publisher = "John Wiley \& Sons",
    year = "1978",
    volume = "1",
    annote = "A textbook on computer hardware and architecture.
              Contains some interesting things that are now
              reappearing (e.g. a chapter on tree-height
              reduction)."
}


@InProceedings{schlansker+94,
    author = "Michael Schlansker and Vinod Kathail and Sadun Anik",
    title = "Height Reduction of Control Recurrences for {ILP} Processors",
    crossref = "micro94",
    pages = "40--51",
    annote = "Height reduction is applied to recurrences on which
              branches (in particular loop exit branches) depend."
}


@TechReport{schlansker&kathail93,
    author = "Michael Schlansker and Vinod Kathail",
    title = "Acceleration of Algebraic Recurrences on Processors
             with Instruction Level Parallelism",
    institution = "HP Laboratories",
    year = "1993",
    type = "technical report",
    number = "HPL-93-55",
    note = "A shorter version appeared in \cite{bannerjee94}.",
    annote = "The associative and distributive laws are applied
              to reduce recurrence (cyclic data flow paths)
              heights in (DO) loops. The basic idea is to
              replace some references to loop-variant variables
              with the expression assigned to that variable. The
              resulting big expressions can then be transformed to
              minimize the critical path length, usually in a way
              requiring more resources. This paper introduces
              blocked back-substitution: The loop is unrolled
              several times, and only the loop-carried copies of
              the variables are back-substituted; the others are
              computed using the slow, but resource-saving
              method. The paper explains how to apply blocked
              back-substitution to first-order and higher-order
              recurrences and gives formulae for the resulting
              recurrence path length and the needed resources. For
              first-order recurrences blocked back-substitution
              works well, allowing the exploitation of unlimited
              parallelism (assuming infinite loop trip counts)
              while increasing the operation count just by a
              constant factor."
}


- anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/home.html

