using optimizing compiler technology to advance software engineering

Marshall Cline <>

          From comp.compilers


In-Reply-To:'s message of 30 Oct 89 23:04:05 GMT


I'd like to respond to your request for "optimizers which help SE", but
I'd like to answer the *opposite* question: can we use CASE tools to
help produce better code than could be produced without SE? Ie: what
you want is to promote SE. Seems to me that the question "can
optimization make SE less costly" is pretty lame. A more powerful
boost to SE is: "Can we use good SE to produce what is actually
*faster* code?"

The basic notion that I'll present is that "more high-level information
means more chance for high-level optimization." I cite recent research
which Doug Lea and I
are doing with annotating C++ code. We call the system "A++" for
"Annotated C++".

One of the basic problems with many languages is that they have *very*
limited ability to specify *semantic* information. Indeed this is the
reason programmers are encouraged to write lots of comments -- ie: the
language proper doesn't support any way to say what you *mean* by the
code, so you have to put it in a comment. We see "annotations" for C++
as a way to support such semantic information.

One of the obvious benefits of this is Correctness: formal verification
can be employed to check if what you *said* (the code) is consistent
with what you *meant* to say (the annotations). Thus *lots* more
static (compile-time) analysis can be done on annotated code.
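To make this concrete, here is a hedged sketch in plain C++ (the class
and member names are mine, not A++ syntax; the "annotation" below is
just a comment plus a runtime assert, which is all standard C++ lets
you say):

```cpp
#include <cassert>

// Today the *meaning* of this class lives only in a comment:
// "invariant: 0 <= count_ <= limit_".  An annotation system like
// A++ aims to let a verifier check callers against such a statement
// statically; plain C++ can only re-check it at run time.
class Counter {
public:
    explicit Counter(int limit) : count_(0), limit_(limit) {}

    void inc() {
        assert(count_ < limit_);   // runtime stand-in for the annotation
        ++count_;
    }

    int count() const { return count_; }

private:
    int count_;
    int limit_;   // invariant (comment only): 0 <= count_ <= limit_
};
```

The assert documents the same fact as the comment, but a verifier that
could *read* the invariant as an annotation could check every caller
at compile time instead.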

Another benefit, although not immediately obvious, is Optimization. It
turns out that the additional semantic information can be employed by
the compiler to help generate better code. For example, many runtime
consistency and exception tests are redundant; formal verification can
be used to *prove* them to be redundant, so they can be (automatically)
removed. This reduces actual runtime testing to a minimum (or *nearly*
minimal anyway) without changing the source code. Thus A++ ("Annotated
C++") can be thought of as an "Exception Optimizer."
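An illustrative sketch of the redundancy argument (plain C++, my own
example names, not A++ syntax): the bounds test inside `at` below
executes twice in `sum_pair`, but the second execution is provably
redundant, since nothing changed between the two calls. A verifier
armed with an annotated precondition could discharge that test and let
the compiler delete it; without annotations it runs every time.

```cpp
#include <cassert>

int at(const int* data, int size, int i) {
    // Runtime consistency test -- exactly the kind of check A++
    // aims to *prove* redundant and remove automatically.
    assert(0 <= i && i < size);
    return data[i];
}

int sum_pair(const int* data, int size, int i) {
    int a = at(data, size, i);   // the check executes here...
    int b = at(data, size, i);   // ...and redundantly here: same i, same size
    return a + b;
}
```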

OTHER DESIGN GOALS OF A++ (Warning: The following is "Strongly Hyped"):

[1] INHERENTLY OBJECT-ORIENTED: Unlike many other "program verifiers",
A++ is inherently OO'ed. Ie: typical annotators allow you to annotate
what code *does* to data, thus following the live-code/dead-data
notion. A++ on the other hand allows you to annotate objects as a
whole.

[2] CLARITY OF THE CODE: Runtime tests typically litter code. A++
allows programmers to cleanly express what "states" are "legal" for
particular operations. The result is that annotated code is actually
more readable than unannotated code, since explicit error tests such as
"if (!ok()) punt()" are replaced by much simpler behavioral
annotations.

[3] One of the side-benefits is that C++ programming can become more
"natural". Right
now, we often have to increase the number of functions which can
directly touch the data for efficiency's sake (ex: we employ excessive
numbers of friend functions, member fns which provide unchecked
access/modification to member vars, etc). It turns out that the
efficiency gains provided by a little bit of "annotation" (ie: A++) can
eliminate many redundant runtime tests, thereby (in some cases anyway)
making the "natural" interface just as fast as unchecked access code.
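A hedged sketch of the pattern described above (the class and the
friend function are illustrative, not from A++): today one often adds
an unchecked back door for speed; if the checked path's tests could be
proven redundant and removed, the back door becomes unnecessary.

```cpp
#include <cassert>

class Account {
public:
    explicit Account(int balance) : balance_(balance) {}

    // The "natural", checked interface:
    void withdraw(int amount) {
        assert(amount >= 0 && amount <= balance_);  // runtime test
        balance_ -= amount;
    }

    int balance() const { return balance_; }

private:
    int balance_;

    // The efficiency hack: a friend that touches balance_ directly,
    // bypassing the check.  If annotations let the compiler remove
    // provably-redundant checks in withdraw(), this friend (and the
    // widened interface it implies) can go away.
    friend void fast_withdraw(Account& a, int amount);
};

void fast_withdraw(Account& a, int amount) {
    a.balance_ -= amount;   // unchecked access to a member var
}
```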

[4] BACK TO OPTIMIZATION: We believe that the Optimization aspect is
the "carrot" which will convince programmers to *use* A++. Software
Engineering in general (and OOP in particular) is often flamed for
resulting in a large runtime cost. We believe A++ could reverse this,
making C++ code run *faster* than an equivalent C program (reason: we
can know, on a call-by-call basis, which exception tests are really
*needed*; raw C programmers have to work a lot harder to check things
out since they have to do it "by hand"). Furthermore we handle
aliasing problems, which normal C/C++ compilers must treat *very*
conservatively; that conservatism can obviously have a *big*
performance impact on RISC machines.
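The aliasing point fits in a few lines of plain C++ (a toy example of
mine, not A++): a compiler that cannot rule out `a == b` must reload
`*b` after the first store, because the two cases genuinely produce
different results.

```cpp
#include <cassert>

// Without alias information the compiler must assume a and b may
// point to the same int, so *b has to be re-read after *a is written.
// An annotation asserting "a and b never alias" would license keeping
// *b in a register across both statements.
void add_twice(int* a, int* b) {
    *a += *b;
    *a += *b;
}
```

With `x = 1, y = 10`, `add_twice(&x, &y)` leaves `x == 21`; but
`add_twice(&x, &x)` leaves `x == 4`. Since the results differ, the
second load of `*b` cannot be removed without alias information.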

Marshall P. Cline/ 225 Clarkson Hall/ ECE Dept/ Clarkson Univ/ Potsdam NY 13676
Internet: -or-
BitNet: bh0w@clutx Usenet: uunet!!cline
Voice: 315-268-3868 (office)
