Double-byte lex and yacc? email@example.com (Michael O'Leary) (1997-04-02)
Re: Double-byte lex and yacc? firstname.lastname@example.org (1997-04-03)
Re: Double-byte lex and yacc? Julian.Orbach@unisys.com (1997-04-03)
Re: Double-byte lex and yacc? email@example.com (Duncan Smith) (1997-04-06)
Re: Double-byte lex and yacc? firstname.lastname@example.org (Michael O'Leary) (1997-04-16)

From: Duncan Smith <email@example.com>
Date: 6 Apr 1997 22:28:47 -0400
Organization: Flavors Technology, Inc.
Julian Orbach wrote:
> The language I was working with allowed Unicode characters to appear
> in strings, identifiers and comments. Significantly, none of the
> keywords required any more than the ASCII character set, and the lex
> specification didn't care *which* non-ASCII character it was.
> (I believe Java also satisfies these criteria).
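The property Julian describes, keywords drawn only from ASCII while the scanner treats every non-ASCII character alike inside identifiers, can be sketched as follows. This is a minimal illustration, not the actual lex specification; the keyword set and function names here are invented for the example, and I use Unicode code points above U+007F as a stand-in for "any non-ASCII character":

```python
import re

# Illustrative keyword set -- the real language's keywords were ASCII-only.
KEYWORDS = {"if", "else", "while"}

# An identifier starts with an ASCII letter, underscore, or ANY non-ASCII
# character, then continues with those plus digits. The pattern never asks
# *which* non-ASCII character it saw, mirroring the lex spec described above.
IDENT = re.compile(r"[A-Za-z_\u0080-\uffff][A-Za-z0-9_\u0080-\uffff]*")

def classify(word):
    """Classify one whitespace-delimited word as KEYWORD, IDENT, or OTHER."""
    if word in KEYWORDS:
        return ("KEYWORD", word)
    if IDENT.fullmatch(word):
        return ("IDENT", word)
    return ("OTHER", word)

def tokenize(text):
    return [classify(w) for w in text.split()]
```

For example, `tokenize("if 変数 else count")` treats the Japanese identifier the same way as the ASCII one, yielding keyword/identifier pairs in order.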
Our parallel programming language, Paracell, allows foreign-language
characters in the same places, i.e. strings, identifiers and comments.
The supported foreign language happens to be Japanese; the software
currently runs on Macs and uses Apple's JIS encoding. I am now porting
it to NT.
In addition to using extended characters in strings, identifiers and
comments, we also allow entering ALL text in Romaji, including
punctuation. My scanner converts all two-byte Romaji and punctuation
to ASCII when building tokens. This saves the user from constantly
mode-switching the keyboard.
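The folding step can be sketched like this. Our scanner works with Apple's JIS encoding, where the mapping is table-driven; for a self-contained example I use the Unicode full-width forms instead (U+FF01..U+FF5E sit at a fixed offset of 0xFEE0 from ASCII 0x21..0x7E), and the function name is mine, not from our code:

```python
def fold_fullwidth(ch):
    """Fold a full-width (two-byte Romaji/punctuation) character to its
    ASCII equivalent, so the scanner can build ordinary ASCII tokens.
    Characters outside the full-width range pass through unchanged."""
    cp = ord(ch)
    if 0xFF01 <= cp <= 0xFF5E:          # fullwidth '!' .. '~'
        return chr(cp - 0xFEE0)          # e.g. fullwidth 'A' -> 'A'
    if cp == 0x3000:                     # ideographic space -> space
        return " "
    return ch

def fold_text(text):
    """Apply the fold to a whole input string before tokenizing."""
    return "".join(fold_fullwidth(c) for c in text)
```

With this in front of the tokenizer, a user can type an entire program with the keyboard left in two-byte input mode and still get ASCII keywords and punctuation out.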