Related articles
front ends vs. code generation and optimization  poser@csli.stanford.edu (1990-06-06)
Re: front ends vs. code generation and optimization  mauney@cscraz.ncsu.edu (1990-06-12)
Re: front ends vs. code generation and optimization  sra@ecs.southampton.ac.uk (Stephen Adams) (1990-06-14)
Re: front ends vs. code generation and optimization  hankd@ee.ecn.purdue.edu (1990-06-14)
Newsgroups: comp.compilers
From: hankd@ee.ecn.purdue.edu (Hank Dietz)
Date: Thu, 14 Jun 90 15:22:25 GMT
Organization: Purdue University Engineering Computer Network
Keywords: code
In-Reply-To: <1990Jun14.032547.5623@esegue.segue.boston.ma.us>
In article <1990Jun14.032547.5623@esegue.segue.boston.ma.us>:
>Tool-based front-end techniques can benefit the development
>of almost any software with a command or configuration language.
...[stuff about using yacc and lex to perform translations]...
>Back-end technology seems quite irrelevant to this kind of endeavor.
No. I will agree that many optimization/parallelization transformations are
quite specific to particular target machines, but the basic problems of code
generation are quite universal and yacc+lex do very little to automate this.
For example: Why can't attributes be dynamically allocated/deallocated (e.g.,
variable-length strings)? Why build a symbol table from scratch each time?
Why do we so often have to write code to build a parse tree and then to walk
it?
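As a concrete illustration (a sketch only, with an invented node layout and
names, not PCCTS code), this is the kind of build-a-tree-then-walk-it
boilerplate that keeps getting written by hand:

/* Sketch: hand-written parse tree construction plus a post-order walk
   that emits stack-machine code.  Everything here is illustrative. */
#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    const char  *op;            /* operator name or leaf text */
    struct node *left, *right;  /* NULL for leaves */
} node;

static node *mknode(const char *op, node *l, node *r)
{
    node *n = malloc(sizeof *n);
    if (n == NULL) { fprintf(stderr, "out of memory\n"); exit(1); }
    n->op = op; n->left = l; n->right = r;
    return n;
}

/* Typical hand-rolled post-order walk. */
static void walk(const node *n)
{
    if (n == NULL) return;
    walk(n->left);
    walk(n->right);
    if (n->left == NULL && n->right == NULL)
        printf("push %s\n", n->op);   /* leaf: push the operand */
    else
        printf("%s\n", n->op);        /* interior: apply the operator */
}

int main(void)
{
    /* a + b * c */
    node *t = mknode("add",
                     mknode("a", NULL, NULL),
                     mknode("mul", mknode("b", NULL, NULL),
                                   mknode("c", NULL, NULL)));
    walk(t);
    return 0;
}

Every compiler project repeats some variant of mknode() and walk(); that
repetition is exactly what a tool ought to generate from a specification.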
Although PCCTS as currently released (see earlier announcement) does not
integrate automatic parse tree generation and a code generator-generator, we
already have these features in a test version and they should be in the next
release (the first real release). Eventually, we even want PCCTS to
automatically build flow analysis/optimization/parallelization code from
specifications. Why? Because *WE* are getting tired of writing this stuff
time and time again... basically, we're lazy. That makes us good
toolsmiths. :-)
But my point is not "aren't we wonderful for building PCCTS." Rather, it is
that there really are general principles at work here which are every bit as
important as the traditional front-end stuff -- they are ignored mostly
because, IMHO, they simply are harder to deal with than parser construction.
Certainly, we've found it to be harder.
As for what we teach, well, I teach two graduate compiler courses at Purdue
Electrical Engineering. Both cover the relevant theory, but it is easiest to
describe them briefly in terms of the projects:
ee595e  A compiler tools course. Each student builds a toy compiler
        by hand (as review of undergrad compilers), then builds a
        lexical analyzer generator, a parser generator, and a
        tree-walking code generator... all toy tools, but functional.
ee695e  Code generation, optimization, and parallelization. Each
        student builds a toy compiler which does basic block analysis,
        optimization, parallelization, and scheduling for multiple
        pipelines or a VLIW; then this project gets scaled up to use
        global flow analysis.
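For concreteness, here is a rough sketch of the first step in that second
project, marking basic block leaders in a straight-line instruction list
(my illustration with an invented instruction encoding, not course code):

/* Sketch: leader marking, the usual first step of basic block analysis.
   The instruction representation here is invented for illustration. */
#include <stdio.h>

enum opkind { OP_OTHER, OP_BRANCH, OP_LABEL };

struct insn {
    enum opkind kind;
    const char *text;
};

/* Leaders: the first instruction, any labeled instruction (a branch
   target), and any instruction immediately following a branch. */
static void mark_leaders(const struct insn *code, int n, int *leader)
{
    int i;
    for (i = 0; i < n; i++)
        leader[i] = 0;
    if (n > 0)
        leader[0] = 1;
    for (i = 0; i < n; i++) {
        if (code[i].kind == OP_LABEL)
            leader[i] = 1;
        if (code[i].kind == OP_BRANCH && i + 1 < n)
            leader[i + 1] = 1;
    }
}

int main(void)
{
    struct insn code[] = {
        { OP_OTHER,  "t1 = a + b"    },
        { OP_BRANCH, "if t1 goto L1" },
        { OP_OTHER,  "t2 = t1 * c"   },
        { OP_LABEL,  "L1: d = t2"    },
    };
    int n = (int)(sizeof code / sizeof code[0]);
    int leader[16];
    int i;

    mark_leaders(code, n, leader);
    for (i = 0; i < n; i++)
        printf("%s%s\n", leader[i] ? "-- block --  " : "             ",
               code[i].text);
    return 0;
}

Each maximal run of instructions starting at a leader then becomes one
basic block, the unit on which the later optimization and scheduling
passes operate.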
It is certainly true that ee595e is the more basic, more generally
applicable, of the two -- but notice that it still spends at least 1/3
of the time automating code generation!
-hankd@ecn.purdue.edu
PS: Neither course requires more than about 30 pages of coding (in C),
i.e., the ee595e tools are highly functional for the effort involved.