Re: Optimizing Across && And ||

preston@tera.com (Preston Briggs)
Wed, 8 Mar 1995 00:09:45 GMT

          From comp.compilers

Related articles
[6 earlier articles]
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-24)
Re: Optimizing Across && And || glew@ichips.intel.com (1995-02-27)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || ryg@summit.novell.com (1995-03-03)
Re: Optimizing Across && And || leichter@zodiac.rutgers.edu (1995-03-07)
Re: Optimizing Across && And || preston@tera.com (1995-03-08)
Re: Optimizing Across && And || pardo@cs.washington.edu (1995-03-13)
Re: Optimizing Across && And || chase@centerline.com (1995-03-14)
Re: Optimizing Across && And || jqb@netcom.com (1995-03-15)
Re: Optimizing Across && And || cdg@nullstone.com (1995-03-20)
Re: Optimizing Across && And || daniels@cse.ogi.edu (1995-03-21)
Re: Optimizing Across && And || bill@amber.ssd.hcsc.com (1995-04-03)

Newsgroups: comp.compilers
From: preston@tera.com (Preston Briggs)
Keywords: C, optimize
Organization: Compilers Central
References: 95-02-179 95-03-028
Date: Wed, 8 Mar 1995 00:09:45 GMT

ryg@summit.novell.com writes:
>That's that "total quality" type approach. Making sure that optimizations
>apply to all possible combinations of types and operators is not the best
>way to go. If some form of constant propagation does not work on unsigned
>char op long casted to (void *), and this is discovered only by nullstone
>but not by any benchmark, not even by programs ;-), then it's ok by me.


I disagree. If you bother to implement an optimization, take the time
to do a thorough job. It's absurd to build the machinery for constant
propagation and then leave out multiply or divide because you didn't
think they would arise. Or build a dead code eliminator that doesn't
work on chars or doubles. I mean, you're going out of your way to be
less than general.
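
To make that concrete, here is a small made-up example (not one of the
nullstone tests; the function is invented purely for illustration): a
constant propagator that folds addition but skips multiply or divide
leaves behind work that should have disappeared entirely.

/* Made-up illustration, not a nullstone test: a constant propagator
   that handles + but omits * and / stops partway through.          */
int scaled_pages(void) {
    int words = 4 + 4;          /* any propagator folds this to 8   */
    int bytes = words * 4;      /* needs multiply to fold to 32     */
    int pages = bytes / 4096;   /* needs divide to fold to 0        */
    return pages;               /* should reduce to "return 0;"     */
}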


Furthermore, the weaknesses I discovered aren't things like "unsigned
char op long casted to (void *)"; they're basic weaknesses in the
algorithms used in the optimizer. Let's consider some of the tests,
just for the sake of a concrete example.


Here's the first example from my dead code elimination tests:


void dead_test1(int *data) {
    int j = data[0] + data[1];
    j = data[2];
    data[j] = 2;
}


Obviously the expression "data[0] + data[1]" is useless. No
particularly fancy analysis is required; I'm not depending on any
obscure corners of the C standard, and there are no concerns here
about floating-point accuracy. However, some compilers I examined
didn't get rid of the dead code.
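
In source terms, a compiler with even a rudimentary dead code
eliminator should behave as if the function had been written like this
(a sketch only; the real transformation happens on the intermediate
representation, and the name is just for exposition):

void dead_test1_after_dce(int *data) {
    /* the loads of data[0] and data[1] and the addition feed nothing,
       so they disappear along with the first assignment to j */
    int j = data[2];
    data[j] = 2;
}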


Now, some people will argue that this sort of thing never comes up, or
that no one would write such a thing, or whatever. While I think
those arguments are bogus, they really sidestep the central question,
namely:


If you don't get this right, what _do_ you get?


Hard to imagine that such a compiler would bother to have a -O flag.


For those who want to see my really hard tests (what's the smiley face
for sarcasm?), I've included several more cases for dead code
elimination below, along with some comments.


Preston Briggs


---------------------------------------


This one requires tracing through several statements.
Hard to miss if you get the first one, but still possible.


void dead_test2(int *data) {
    int j = data[0] + data[1];  /* dead */
    int k = j + data[2];        /* dead */
    int m = k + data[3];        /* dead */
    int n = m + data[4];        /* dead */
    j = data[2];
    data[j] = 2;
}


--------------------------------------


Here we've got to pay attention across a basic block boundary.


void dead_test3(int *data) {
    int k = 0;
    int j = data[1];
    if (j) {
        k++;
        j = k + data[0] * j;  /* dead */
    }
    else
        k--;
    j = data[2];
    data[j] = 2;
    data[3] = k;
}


------------------------------------------


In this case, removing some dead code exposes other code as dead,
much like test2, but across a little control flow.


void dead_test4(int *data) {
    int k = 0;
    int j = data[1];
    if (j) {
        k++;
        j = k + j * data[0];  /* dead */
    }
    else
        k--;
    if (data[4] & 1)
        k++;
    else {
        k--;
        j++;  /* dead */
    }
    j = data[2];
    data[j] = 2;
    data[3] = k;
}


--------------------------------------


Here we have to trace things around a loop. The simplest global
approaches (typical bit vector analysis) sometimes break down at this
point, although some people have hacks to handle this particular case.
A general approach (based on tracing through use-def chains) was
published by Kennedy in 1973. BTW, this case tends to arise as a
result of some forms of strength reduction. Of course, if you don't
attempt strength reduction, ...


void dead_test5(int *data) {
    int i, j;
    int k = 0;  /* dead */
    int stop = data[0];
    for (i=0; i<stop; i++) {
        k = k * data[1];  /* dead */
        j = data[2];
        data[j] = 2;
    }
}
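
For the curious, here is a toy rendering of the mark-and-sweep style of
dead code elimination alluded to above. It isn't Kennedy's formulation
or any real compiler's code, just a sketch over a hand-built IR: mark
the instructions that must stay (stores, calls, and any branches you
choose to preserve), then mark everything they reach through use-def
edges; whatever is never marked is dead, even when the dead definitions
feed each other around a loop the way k does in dead_test5.

#include <stdio.h>

#define MAX_USES 4

/* Toy IR: each instruction lists the instructions that define the
   values it uses (its use-def edges) and whether it is "critical"
   (a store, call, return, or a branch we choose to preserve).       */
typedef struct {
    const char *text;       /* for printing only                     */
    int critical;           /* critical instructions are never dead  */
    int nuses;
    int use[MAX_USES];      /* indices of defining instructions      */
} Inst;

/* Mark-and-sweep DCE: mark the critical instructions, then everything
   they (transitively) use.  Unmarked instructions are dead, including
   the self-feeding "k = k * data[1]" that trips up the simplest
   bit-vector formulations.                                          */
static void dce(const Inst *code, int n, int *live)
{
    int worklist[64], top = 0;
    for (int i = 0; i < n; i++) {
        live[i] = code[i].critical;
        if (live[i])
            worklist[top++] = i;
    }
    while (top > 0) {
        const Inst *in = &code[worklist[--top]];
        for (int u = 0; u < in->nuses; u++) {
            int d = in->use[u];
            if (!live[d]) {          /* first time anything needs d */
                live[d] = 1;
                worklist[top++] = d;
            }
        }
    }
}

int main(void)
{
    /* A hand-built use-def picture of dead_test5's loop.  Instruction 2
       uses both the initial k (0) and its own value from the previous
       iteration (2), but nothing critical ever uses k, so 0 and 2 die. */
    const Inst code[] = {
        { "k = 0",             0, 0, {0}    },   /* 0 */
        { "stop = data[0]",    0, 0, {0}    },   /* 1 */
        { "k = k * data[1]",   0, 2, {0, 2} },   /* 2 */
        { "j = data[2]",       0, 0, {0}    },   /* 3 */
        { "data[j] = 2",       1, 1, {3}    },   /* 4: store  */
        { "branch i < stop",   1, 1, {1}    },   /* 5: kept   */
    };
    int n = (int)(sizeof code / sizeof code[0]);
    int live[64];

    dce(code, n, live);
    for (int i = 0; i < n; i++)
        printf("%-18s %s\n", code[i].text, live[i] ? "live" : "dead");
    return 0;
}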


--------------------------------------


Here's my attempt to frustrate the hack alluded to above.
You'd probably have to use some sort of use-def chains for this case.
The usual choice these days would be based on SSA.


void dead_test6(int *data) {
    int i;
    int k = 0;
    int m = 0;  /* dead */
    int n = 0;  /* dead */
    int stop = data[0];
    for (i=0; i<stop; i++) {
        int j = data[1];
        if (j) {
            n = j * stop + m;  /* dead */
            k++;
        }
        else {
            m = n * stop + k;  /* dead */
            k--;
        }
        j = data[2];
        data[j] = 2;
    }
    data[3] = k;
}


-----------------------------------------


The remaining tests explore the compiler's ability to get rid of
useless control flow. A general approach for this was published in
1991. I won't include them here because they're bulky and repetitious
(and perhaps cover cases that don't matter so much).
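
To give the flavor, though (an illustration of the kind of code
involved, not one of the actual tests): once dead code elimination
removes an assignment, a branch can be left guarding two identical
arms, and a cleanup pass should delete the test and merge the blocks.

/* Illustration only, not one of the actual tests: once the dead
   assignment to j is removed, both arms of the "if" are just "k++",
   so the test on data[4] and the branch itself become useless and
   should be cleaned up as well.                                     */
void useless_flow(int *data) {
    int j;
    int k = 0;
    if (data[4] & 1) {
        j = data[0] * k;  /* dead */
        k++;
    }
    else
        k++;
    data[3] = k;
}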

