HLL syntax & structure suited to rapid compilation? email@example.com (gswork) (2002-07-24)
Re: HLL syntax & structure suited to rapid compilation? firstname.lastname@example.org (Steve Horne) (2002-07-25)
Re: HLL syntax & structure suited to rapid compilation? email@example.com (Marco van de Voort) (2002-07-31)
Re: HLL syntax & structure suited to rapid compilation? firstname.lastname@example.org (Joachim Durchholz) (2002-08-04)
Re: HLL syntax & structure suited to rapid compilation? email@example.com (Tzvetan Mikov) (2002-08-10)
Re: HLL syntax & structure suited to rapid compilation? firstname.lastname@example.org (Marco van de Voort) (2002-08-10)
Re: HLL syntax & structure suited to rapid compilation? email@example.com (Marco van de Voort) (2002-08-14)
Re: HLL syntax & structure suited to rapid compilation? firstname.lastname@example.org (Joachim Durchholz) (2002-08-23)
[3 later articles]
From: "Steve Horne" <email@example.com>
Date: 25 Jul 2002 23:29:32 -0400
Organization: Customer of PlusNet
Keywords: parse, performance, comment
Posted-Date: 25 Jul 2002 23:29:32 EDT
On 24 Jul 2002 02:22:03 -0400, "gswork" <firstname.lastname@example.org> wrote:
>I understand that there are various methods used to speed up
>compilation, such as the way memory is used and the parsing methods
I'm no expert, but I can certainly give some pointers here...
The syntax of a language can affect its compile time in a number of
subtle ways, but the effects should be relatively small. One reason is
that a complex grammar may need larger parsing tables, which in turn
may need more table compression, giving a slight run-time performance
hit each time the parser performs a table lookup.
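To make that lookup cost concrete, here's a rough sketch (in Python, with made-up table contents - not from any real parser generator) of the common trick of compressing an action table into a per-state default action plus an exception list. The compression saves space, but every lookup now pays for an extra search:

```python
# Hypothetical LR action table: state -> {terminal: action}.
dense = {
    0: {"NUM": "s2", "(": "s3", "+": "err", ")": "err"},
    1: {"NUM": "err", "(": "err", "+": "s4", ")": "r1"},
}

# Compressed form: the most common action in each row becomes the
# row's default, and only the exceptions are stored explicitly.
compressed = {}
for state, row in dense.items():
    actions = list(row.values())
    default = max(set(actions), key=actions.count)
    exceptions = {t: a for t, a in row.items() if a != default}
    compressed[state] = (default, exceptions)

def action(state, terminal):
    default, exceptions = compressed[state]
    # Extra lookup compared with a plain dense[state][terminal].
    return exceptions.get(terminal, default)

assert action(0, "NUM") == "s2"
assert action(0, "+") == "err"
assert action(1, ")") == "r1"
```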
The most significant issues, though, are probably how verbose the
language is (how big an input file needs to be to fulfill a certain
set of requirements) and how small the grammar rules are. If grammar
rules tend to be small, you get more reduces per shift (or equivalent)
in the parser which could create a small hit.
In practice, I imagine these are non-issues - though removal of unit
rules and flattening of rules used purely as shorthands in the grammar
can, I believe, give a significant performance increase.
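Unit-rule removal is easy to sketch. The following is a toy illustration (my own example, not from the original post) of splicing the alternatives of a single-nonterminal rule directly into its users, so the parser no longer burns a reduce step on each chained unit rule:

```python
# Toy grammar: nonterminal -> list of alternatives (lists of symbols).
# expr -> term and term -> factor are unit rules.
grammar = {
    "expr":   [["term"]],
    "factor": [["NUM"], ["(", "expr", ")"]],
    "term":   [["factor"]],
}

def remove_unit_rules(g):
    """Replace each unit rule A -> B (B a nonterminal) with B's alternatives."""
    g = {nt: [list(alt) for alt in alts] for nt, alts in g.items()}
    changed = True
    while changed:
        changed = False
        for nt in g:
            new_alts = []
            for alt in g[nt]:
                if len(alt) == 1 and alt[0] in g:   # unit rule
                    new_alts.extend(g[alt[0]])
                    changed = True
                else:
                    new_alts.append(alt)
            g[nt] = new_alts
    return g

flat = remove_unit_rules(grammar)
assert flat["expr"] == [["NUM"], ["(", "expr", ")"]]
```

(A real implementation would also have to guard against cyclic unit rules; this sketch assumes there are none.)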
More important, perhaps, is how 'explicit' the grammar is. For
instance, in C and C++ a parser needs to consult tables of identifiers
built up at parse time (typedef names, notably) to disambiguate many
rules. If the disambiguation were done using language keywords in the
grammar, C and C++ source code would probably be faster to parse - as
long as the gains from being explicit outweighed the losses from being
more verbose.
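The classic case is the C typedef ambiguity: the token sequence "T * x ;" is either a declaration or an expression depending on whether T has previously been declared as a type. A toy sketch (mine, not from the post) of how the decision depends on a run-time table:

```python
# The same four tokens parse differently depending on whether "T" is
# currently recorded in the typedef table - so the parser (or, in the
# traditional "lexer hack", the lexer) must consult that table.
def classify(tokens, typedef_names):
    name, star, var, semi = tokens
    if name in typedef_names:
        return "declaration"    # T *x;  declares x as pointer-to-T
    return "expression"         # T * x; multiplies T by x

src = ["T", "*", "x", ";"]
assert classify(src, typedef_names={"T"}) == "declaration"
assert classify(src, typedef_names=set()) == "expression"
```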
Finally, a parser is likely to be faster if the class of grammar it
parses is weaker, provided the parser generator is able to recognise
and exploit that simplicity. YACC-alikes, for instance, allow operator
precedence and associativity rules to be specified rather than
building these directly into the main grammar rules. This allows
expression parsing to be optimised such that it effectively uses LR(0)
mixed with precedence parsing, which can be implemented more
efficiently than LALR(1) or LR(1).
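The table-driven approach can be sketched as precedence climbing (my illustration; YACC's internals differ): precedence and associativity live in a small table, and expressions are parsed in a tight loop rather than through chains of grammar-rule reduces:

```python
# Precedence table for left-associative binary operators (assumed
# values; higher number binds tighter).
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def parse_expr(tokens, pos=0, min_prec=1):
    """Parse and evaluate tokens[pos:], honouring operator precedence."""
    value, pos = int(tokens[pos]), pos + 1
    while pos < len(tokens) and PREC.get(tokens[pos], 0) >= min_prec:
        op = tokens[pos]
        # Left associativity: the recursive call only takes operators
        # that bind strictly tighter than op.
        rhs, pos = parse_expr(tokens, pos + 1, PREC[op] + 1)
        value = {"+": value + rhs, "-": value - rhs,
                 "*": value * rhs, "/": value // rhs}[op]
    return value, pos

result, _ = parse_expr("2 + 3 * 4 - 5".split())
assert result == 9   # i.e. 2 + (3*4) - 5
```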
Much more important is the paradigm that the language supports.
Basically, higher-level paradigms make more demands on the compiler,
which makes compilation slower. Of course, this trade-off makes sense
once the shorter development times and reduced maintenance costs are
taken into account.
Basic optimisations don't have a huge cost, but adding more and more
optimisation leads to greater and greater complexity for rapidly
diminishing returns. Compilers with very good optimisation probably
tend to be slower due to the compile-time effort applied to analysing
the code.
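As an example of a cheap "basic" optimisation (my sketch, not from the post): constant folding over an expression tree costs a single bottom-up traversal, unlike the whole-program analyses that heavy optimisers run:

```python
# Expression trees as (op, left, right) tuples; leaves are ints or
# variable-name strings. One recursive pass folds constant subtrees.
def fold(node):
    if not isinstance(node, tuple):
        return node                      # leaf: int or variable name
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, int) and isinstance(right, int):
        return left + right if op == "+" else left * right
    return (op, left, right)             # can't fold: rebuild the node

assert fold(("+", ("*", 2, 3), "x")) == ("+", 6, "x")
assert fold(("*", ("+", 1, 1), ("+", 2, 2))) == 8
```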
>I'd like to ask about the actual syntax of an HLL and how that might
>increase compilation. In x86 assembly, for instance, the mnemonics
>relate one-to-one with machine instructions. Compilation would be,
>presumably, straight translation. ...
[ In most simple compilers, the lexer is the slowest part. The time
spent parsing is down in the noise. -John]