Optimizing Across && And ||

bart@cs.uoregon.edu (Barton C. Massey)
Wed, 15 Feb 1995 10:08:00 GMT

          From comp.compilers

Related articles
Are C logical operators being lowered too soon? cdg@nullstone.com (1995-02-13)
Optimizing Across && And || bart@cs.uoregon.edu (1995-02-15)
Re: Optimizing Across && And || preston@tera.com (1995-02-18)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-19)
Re: Optimizing Across && And || bart@cs.uoregon.edu (1995-02-21)
Re: Optimizing Across && And || whalley@fork.cs.fsu.edu (David Whalley) (1995-02-22)
Re: Optimizing Across && And || bart@cs.uoregon.edu (1995-02-23)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-24)
[12 later articles]
Newsgroups: comp.compilers
From: bart@cs.uoregon.edu (Barton C. Massey)
Keywords: C, optimize
Organization: University of Oregon Computer and Information Sciences Dept.
References: 95-02-110
Date: Wed, 15 Feb 1995 10:08:00 GMT

> The logical operators && and || are relatively common in C programs.
> However, analysis shows that many compilers fail to optimize
> expressions that contain logical operators.
[...]
> In other words, many compilers fail to optimize code fragments of
> the form:
>     i1 = x && y;
>     i2 = x && y;
> and replace it with:
>     reg = x && y;
>     i1 = reg;
>     i2 = reg;
[...]
> Of course, (x && y) is quite different from many other operators due
> to the sequence point, which, in general, requires a branch. However,
> is it possible that compilers are lowering && and || to if-then-else
> too early in the compiler, and thus making it much more difficult
> to perform optimizations which would otherwise be easier to support?


Yes.
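
To be a little more concrete: by the time most compilers get around
to serious optimization, the fragment quoted above has already been
lowered into branchy code along these lines (the temporary and label
names here are invented, of course):

        t = 0;
        if (x == 0) goto skip1;
        if (y == 0) goto skip1;
        t = 1;
    skip1:
        i1 = t;
        t = 0;
        if (x == 0) goto skip2;
        if (y == 0) goto skip2;
        t = 1;
    skip2:
        i2 = t;

Once the code looks like this, noticing that the two tests are
identical is no longer a single-basic-block common-subexpression
problem; it requires analysis across the branches.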


> [My guess is that nobody bothers to optimize this, because && and || rarely
> occur outside of a truth value context in if, while, or for. -John]


I was curious: unfortunately, even in a truth-value context
inside an if(), the compilers I just tried mostly lose. I put
together a sequence of simple benchmarks that I expected the
compilers would all optimize perfectly, as a starting point to
see how bad the problem was. I'll publish something more
formal if there's interest, but here's a sketch of my (very)
preliminary results. (Compiler 1 is a popular free C compiler
running on a popular RISC architecture; compilers 2 and 3 are
vendors' proprietary compilers for two RISC architectures. All
were run with the best optimization flags I could figure out
without extensive meditation.)


    benchmark:
        if(x && y)
                n++;
        if(x && y)
                n++;
    result:
        Compiler 1:    Tests y twice, increments n twice.
        Compiler 2, 3: Performs entire test twice.
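
What I was hoping for here, assuming x, y, and n are ordinary
non-volatile locals with no funny aliasing, was code equivalent to

        if(x && y)
                n += 2;

i.e. one test of x, one test of y, one update of n.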


    benchmark:
        if(z || x && y)
                n++;
        if(x && y)
                n++;
    result:
        Compiler 1:    Tests y twice, but avoids extra increment
                       (more or less an accident).
        Compiler 2, 3: Performs entire test twice.
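
Here, too, with plain locals and side-effect-free operands, a
compiler could in principle compute the common x && y test once,
roughly as if I had written

        t = x && y;
        if(z || t)
                n++;
        if(t)
                n++;

(This evaluates x && y even when z is true, which the original
short-circuit code would not, but with side-effect-free operands
the difference isn't observable.)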


    benchmark:
        if(x + y < 2 && a[x + y])
                n++;
    result:
        Compiler 1: Very good code.
        Compiler 2: Loses some constant propagation.
        Compiler 3: Performs index sum twice!
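
The index-sum case is ordinary common-subexpression elimination:
x + y is already computed for the comparison, so reusing it for the
subscript, as if the source read

        t = x + y;
        if(t < 2 && a[t])
                n++;

changes no evaluation order at all.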


    benchmark:
        if( x && y || x && z )
                n++;
    result:
        Compiler 1: Loses a value propagation, but OK.
        Compiler 2: Tests x twice!
        Compiler 3: Tests x twice!


This last result suggests I must be doing something wrong.
Surely value propagation across basic blocks will clean this
one up?? Comments?
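
For the record, with side-effect-free operands this last test is
equivalent to

        if(x && (y || z))
                n++;

so I would have expected, at worst, one test each of x, y, and z.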


Bart Massey
bart@cs.uoregon.edu

