Re: Is this a new idea?

dnl@macsch.com (David Lombard)
Wed, 4 Nov 1992 21:46:37 GMT

          From comp.compilers

Newsgroups: comp.compilers
Organization: Compilers Central
Keywords: performance, comment
References: 92-10-124 92-10-113

In a recent posting, Jawaid Bazyar writes:
> The only technique that I can think of that's similar to this is called
> "pre-tokenization", and can be used to speed up a compile significantly in
> some types of languages. Basically, you have an editor which tokenizes
> the source as you type. Tokenization is, of course, the part of compiling
> that breaks a source program into the individual symbols and what not.
> ...
> How much speed increase you get depends on the complexity of other parts
> of the compiler. The speedup for an assembler is massively better than
> the speedup for, say, a C compiler. The difference is that there is much
> more work after lexing/parsing for high level language compilers.


I think he has described the very nature of the problem in this paragraph.
The execution time of any *decent* compiler is not dominated by the
lexical scan; it's the optimization and code generation that cost.


Yes, an assembler can benefit greatly (it's a transliteration process,
after all). But a compiler??? No. That's similar to the argument that
the single-operation statement `i++' results in *better* code than `i =
i + 1'. Only a graduate student's compiler would be so unsophisticated as
to actually benefit from the syntax (clearly, the syntax is useful as part
of a larger statement).


If tokenizing truly helps a compiler's speed, your compiler can't be doing
much more than producing threaded code, perhaps with peephole
optimizations. I wouldn't spend too much (time, money, effort) on such a
system.


The boldfacing you mentioned was also used in the Apple Macintosh Pascal
system, but that system was interpreted, and it had a *nasty* habit of
assuming too much about the few characters it had seen.
[Maybe, but see Ken Thompson's paper on the Plan 9 C compiler that I
mentioned last week. -John]
--

