Incremental compilation  email@example.com (1992-01-23)
Re: Incremental compilation  firstname.lastname@example.org (1992-01-23)
Re: Incremental compilation  email@example.com (1992-01-23)
Re: Incremental compilation  firstname.lastname@example.org (1992-01-24)
Incremental Compilation  email@example.com (Alexander Rozenman) (1999-11-02)
Re: Incremental Compilation  firstname.lastname@example.org (Matthew Economou) (1999-11-03)
Re: Incremental Compilation  maratb@CS.Berkeley.EDU (Marat Boshernitsan) (1999-11-05)
Re: Incremental Compilation  email@example.com (Robert Bowdidge) (1999-11-16)
Re: Incremental Compilation  firstname.lastname@example.org (Jan Gray) (1999-11-25)
From: email@example.com (Larry Drebes)
Organization: Netcom - Online Communication Services (408 241-9760 guest)
Date: Fri, 24 Jan 92 04:06:54 GMT
From article 92-01-081, by firstname.lastname@example.org (Steve Boswell):
> [re keeping tokenized source around] ...
> I was wondering if the time it took to do the token-wise diff was longer
> than the time it would take to re-parse the entire file.
I am doing something similar, but storing a hash of the previous lexical
stream (minus tokens designated as comments). What I really need to know
is whether the content of a single routine has changed, so the basic parse
is still needed. For me, parsing has never been the bottleneck; it's the
steps that follow, and those can be minimized to just the routine(s) that
have changed.
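
A minimal sketch of the idea in C (not the actual code being described):
hash the spelling of every non-comment token in a routine and compare it
with the hash stored from the previous build, so later compilation passes
can be skipped when only comments or whitespace changed. The token struct,
the TOK_COMMENT kind, and the FNV-1a hash are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    enum kind { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_COMMENT };

    struct token {
        enum kind   kind;
        const char *text;
    };

    /* FNV-1a over the spellings of all non-comment tokens in one routine. */
    static uint64_t hash_routine(const struct token *toks, size_t n)
    {
        uint64_t h = 14695981039346656037ULL;
        for (size_t i = 0; i < n; i++) {
            if (toks[i].kind == TOK_COMMENT)
                continue;                      /* comment edits never force a rebuild */
            for (const char *p = toks[i].text; *p; p++) {
                h ^= (uint8_t)*p;
                h *= 1099511628211ULL;
            }
            h ^= 0xff;                         /* token separator: "ab","c" != "a","bc" */
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void)
    {
        struct token old_body[] = {
            { TOK_IDENT, "x" }, { TOK_PUNCT, "=" }, { TOK_NUMBER, "1" },
            { TOK_COMMENT, "/* old comment */" },
        };
        struct token new_body[] = {
            { TOK_IDENT, "x" }, { TOK_PUNCT, "=" }, { TOK_NUMBER, "1" },
            { TOK_COMMENT, "/* reworded comment */" },
        };

        /* Hashes match: only a comment changed, so this routine can be skipped. */
        if (hash_routine(old_body, 4) == hash_routine(new_body, 4))
            printf("routine unchanged, skip later passes\n");
        else
            printf("routine changed, recompile\n");
        return 0;
    }
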