DDG/DAG construction algorithms

Newsgroups: comp.compilers
From: mark@hubcap.clemson.edu (Mark Smotherman)
Keywords: optimize
Organization: Compilers Central
Date: Sat, 20 Oct 90 16:41:57 -0400

I have a student collecting information on data dependency graph (DDG)
construction. In particular, he is focusing on algorithms that construct
DDGs for basic blocks (i.e., DAGs) for use in instruction reordering at
the assembly- or machine-language level.
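
For reference, the dependencies tracked at this level are the usual
read-after-write, write-after-read, and write-after-write relations
between instructions. A minimal node/edge representation in C might look
like the following (a sketch only; the type and field names are
illustrative and not taken from any of the papers below):

    /* Minimal illustrative DDG node/edge representation; the type and
       field names here are hypothetical. */

    enum dep_kind {
        DEP_FLOW,      /* read-after-write  (true dependence)   */
        DEP_ANTI,      /* write-after-read  (anti dependence)   */
        DEP_OUTPUT     /* write-after-write (output dependence) */
    };

    struct dag_node;

    struct dep_edge {
        struct dag_node *pred;   /* instruction that must issue first */
        enum dep_kind    kind;
        struct dep_edge *next;   /* next incoming edge of this node   */
    };

    struct dag_node {
        int              insn;   /* index of the instruction in the block */
        struct dep_edge *preds;  /* incoming dependence edges             */
    };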


So far, he has found:


1. Aho, Sethi, and Ullman, Compilers: Principles, Techniques, and
      Tools (i.e., the "dragon" book). He has modified their Algorithm
      9.2 (in Section 9.8).
2. Landskov, Davidson, and Shriver, "Local microcode compaction
      techniques," ACM Computing Surveys, Sept. 1980. Their algorithm
      compares a new node to all leaf nodes and then to parent nodes
      whose leaves were found to be independent.
3. Hennessy and Gross, "Postpass code optimization of pipeline
      constraints," ACM TOPLAS, July 1983. Their DAGs omit write-
      write dependencies.
4. Gibbons and Muchnick, "Efficient instruction scheduling for a
      pipelined architecture," SIGPLAN Symp. on Compiler Construction,
      1986. Their algorithm scans backward over a basic block, comparing
      a new node with all previous nodes; however, the backward scan
      allows a special subgraph to be built for carry/borrow-bit def/use
      so that false dependencies do not overly constrain the available
      parallelism.
5. Smotherman, class handout, Clemson Univ., Spring 1989. A backward
      pass using a definition table and use lists. Encountering a
      definition causes a dependency to be noted with each outstanding
      use seen thus far or, if there are none, with an existing
      definition; the use list is then emptied and the definition noted.
      Encountering a use causes a dependency to be noted with an existing
      definition, and the use is inserted into the use list. (A rough
      sketch of this scheme appears after the list.)
6. Krishnamurthy, "Static scheduling of multi-cycle operations for a
      RISC processor," M.S. Scholarly Paper, Clemson Univ., May 1990. An
      improvement on Smotherman that operates in a forward pass.
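
For concreteness, here is a rough C sketch of the backward pass
described in item 5, as summarized above. The data structures, resource
numbering, and per-instruction def/use summary are illustrative only
(they are not taken from the handout), and the condition code is treated
simply as one more resource; the Gibbons/Muchnick refinement for
carry/borrow bits is not shown.

    #include <stdlib.h>

    #define NRES  64      /* number of resources (registers, cc) tracked -- assumed */
    #define NONE  (-1)

    struct insn {                    /* hypothetical per-instruction summary */
        int ndefs, nuses;
        int defs[4], uses[4];        /* resource numbers defined/used */
    };

    struct uselist { int insn; struct uselist *next; };

    static int             later_def[NRES];    /* nearest later definition, or NONE */
    static struct uselist *later_uses[NRES];   /* later uses with no def in between */

    /* add_dep(from, to): instruction 'to' depends on 'from', i.e. 'from'
       must issue first.  DAG edge insertion itself is omitted here. */
    extern void add_dep(int from, int to);

    void build_dag(struct insn *block, int n)
    {
        int i, k, r;
        struct uselist *u, *tmp;

        for (r = 0; r < NRES; r++) {
            later_def[r] = NONE;
            later_uses[r] = NULL;
        }

        for (i = n - 1; i >= 0; i--) {              /* backward over the block */

            for (k = 0; k < block[i].ndefs; k++) {  /* definitions */
                r = block[i].defs[k];
                if (later_uses[r] != NULL) {        /* flow dep to each later use */
                    for (u = later_uses[r]; u != NULL; u = tmp) {
                        add_dep(i, u->insn);
                        tmp = u->next;
                        free(u);
                    }
                    later_uses[r] = NULL;           /* use list emptied */
                } else if (later_def[r] != NONE && later_def[r] != i) {
                    add_dep(i, later_def[r]);       /* output dep to later def */
                }
                later_def[r] = i;                   /* note this definition */
            }

            for (k = 0; k < block[i].nuses; k++) {  /* uses */
                r = block[i].uses[k];
                /* later_def[r] == i means this insn also defines r; skip self-edge */
                if (later_def[r] != NONE && later_def[r] != i)
                    add_dep(i, later_def[r]);       /* anti dep: later def must wait */
                u = malloc(sizeof *u);
                u->insn = i;
                u->next = later_uses[r];
                later_uses[r] = u;                  /* insert into the use list */
            }
        }
    }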


My student is comparing the running times of several of these algorithms
on a SPARC (Sun 4/20), using the .s files produced by the Sun C compiler
for the C versions of the Linpack, Whetstone, and Livermore Loops
benchmarks:


dependence information per basic block (bb = basic block, cc = condition code):

                        linpack   whetst   lloops
  avg. insts./bb           7       11.6     16.6
  avg. deps./bb            6.1     11.8     16.7
  avg. cc deps./bb          .16      .3       .4

time in seconds to construct DAGs for all basic blocks:

                        linpack   whetst   lloops
  aho                      2.5      1.3      5.2
  landskov                 2.4      1.1     11.7
  smotherman               1.8       .5      2.4
  krishnamurthy            1.6       .5      2.3




Are there other algorithms (published or not) that he should include?
Has the Smotherman/Krishnamurthy approach been used before?


Please email me, and if there is interest I will summarize any
additional information. Thanks!
--
Mark Smotherman, Comp. Sci. Dept., Clemson University, Clemson, SC 29634
INTERNET: mark@hubcap.clemson.edu UUCP: gatech!hubcap!mark