Newsgroups: comp.compilers
From: michi@km21.zfe.siemens.de (Michael Klug)
Keywords: theory
Organization: Siemens AG, Munich, Germany
References: 93-06-092
Date: Thu, 1 Jul 1993 06:50:42 GMT

>What is the inherent complexity of compilation? I figure that an
>assembler would be nearly O(n) time for n lines of code (assuming that
>symbol table lookup is not a problem--say, an O(1) hashing method would
>be used that assumes an average number of lines of source). For C++ or
>optimization, this gets far more complex. O(n*log(n)) seems likely for
>C++ (an intuitive guess), but the graph problems that optimizers must
>work on are probably more complex (certainly they seem to use lots of
>space, like O(n^2)).
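
(To make the premise above concrete: a minimal Python sketch of the
hashed symbol table the question assumes. Python's built-in dict is a
hash table, so each label definition and lookup costs O(1) on average,
which is what keeps a simple two-pass assembler near O(n); the label
and instruction formats here are invented.)

    # Hypothetical two-pass assembler core; address bookkeeping is
    # simplified.  dict gives O(1) average define/lookup, so both
    # passes over n lines together stay O(n).
    def assemble(lines):
        symtab = {}                              # label -> address
        for addr, line in enumerate(lines):      # pass 1: define labels
            if line.endswith(":"):
                symtab[line[:-1]] = addr
        code = []
        for line in lines:                       # pass 2: resolve uses
            if line.startswith("jmp "):
                code.append(("JMP", symtab[line[4:]]))  # O(1) lookup
        return code

    print(assemble(["loop:", "jmp loop"]))       # [('JMP', 0)]
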
... there is no 'inherent complexity of compilation', and you can't give
a single figure for a whole compiler. The complexity of compilation
depends on the parsing method the language requires (LL, LR, LALR,
Earley, ...) and on the semantic structure of the language. If you can
parse a language without semantic information, you are lucky: parsing is
then (relatively) cheap.
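
To see the cheap end of that: an LL(1) language can be parsed by plain
recursive descent, where every token is consumed exactly once, so the
parse is O(n) in the number of tokens. A minimal Python sketch (the toy
grammar is mine, purely for illustration):

    # Grammar: expr -> term ('+' term)* ;  term -> NUM | '(' expr ')'
    # One token of lookahead decides every step, and each call consumes
    # at least one token, so parsing is linear.
    def parse_expr(toks, i=0):
        i = parse_term(toks, i)
        while i < len(toks) and toks[i] == "+":
            i = parse_term(toks, i + 1)
        return i

    def parse_term(toks, i):
        if toks[i] == "(":
            i = parse_expr(toks, i + 1)
            assert toks[i] == ")", "missing ')'"
            return i + 1
        assert toks[i].isdigit(), "expected a number"
        return i + 1

    parse_expr(["1", "+", "(", "2", "+", "3", ")"])  # accepts in O(n)
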
The complexity of compilation also depends on the method necessary for
semantic analysis. A *cheap* language lets you perform semantic analysis
while parsing; a *heavy* language requires an extra intermediate
representation and complex semantic actions.
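
Concretely, "cheap" means a check like declare-before-use can ride along
with the left-to-right scan, with no tree at all; a language that allows
a use before its declaration forces you to build a tree and come back in
a later pass. A toy Python sketch of the one-pass case (the statement
encoding is invented):

    # Semantic check folded into a single left-to-right scan:
    # no intermediate representation is needed.
    def check(stmts):                    # stmts: ("decl"|"use", name)
        declared = set()
        for kind, name in stmts:
            if kind == "decl":
                declared.add(name)
            elif name not in declared:
                raise NameError(name + " used before declaration")

    check([("decl", "x"), ("use", "x")])      # fine, one pass
    # check([("use", "y"), ("decl", "y")])    # would raise NameError
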
A lot of compilers are based on (some flavor of) attribute grammars. If
you manage to write down a single-sweep AG (that is, one that needs only
a single pass through the tree), you can use efficient algorithms. If
not, you need more expensive algorithms that deal with multiple passes
and circularity. It is always time-consuming when you depend on lots of
non-local information that has not yet been computed at the point where
you need it.
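
To make the single-sweep case concrete: if every attribute is
synthesized, i.e. depends only on the node's children, one post-order
walk computes everything in time linear in the size of the tree. A
minimal Python sketch (the node encoding is invented):

    # All attributes synthesized: one post-order sweep suffices.
    def sweep(node):
        if isinstance(node, int):            # leaf: attribute = value
            return node
        op, left, right = node               # interior: ("+", l, r)
        return sweep(left) + sweep(right)    # needs only the children

    print(sweep(("+", 1, ("+", 2, 3))))      # -> 6, in a single pass
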
The complexity of compilation also depends on the quality of the code
generated. Some very fast compilers don't bother about efficient
programs; they produce bad code. A compiler with a good code optimizer
can't be that fast: it needs an extra phase, with techniques like term
rewriting, to detect common subexpressions and code pieces that can be
translated to very efficient machine code. If someone says "my compiler
is faster than ...", just look at the code it produces ...
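
A taste of what such an extra phase does: one classic way to detect
common subexpressions is to hash every term and reuse the first
occurrence (local value numbering). The sketch below is a generic Python
illustration of that idea, not any particular compiler's algorithm:

    # Local value numbering: structurally identical terms get the same
    # key, so the second "a+b" is found in the table and reused rather
    # than recomputed.
    def number(expr, table, temps):
        if isinstance(expr, str):            # a variable names itself
            return expr
        op, l, r = expr
        key = (op, number(l, table, temps), number(r, table, temps))
        if key not in table:                 # first occurrence: new temp
            table[key] = "t%d" % len(temps)
            temps.append(key)
        return table[key]

    table, temps = {}, []
    number(("*", ("+", "a", "b"), ("+", "a", "b")), table, temps)
    print(temps)   # [('+', 'a', 'b'), ('*', 't0', 't0')] -- a+b once
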
Bye
Michael