From: Laurent GASSER <gasser@ilw.agrl.ethz.ch>
Newsgroups: comp.compilers
Date: 23 Feb 1996 00:18:42 -0500
Organization: Eidgenossische Technische Hochschule Zurich
References: 96-01-037 96-02-171 96-02-187 96-02-248
Keywords: standards
Arch Robison <robison@kai.com> wrote:
>Here's a solution:
>
> 1. Every year, put N language judges in a separate room. The
> judges would be good programmers, but not language lawyers. I
> leave this distinction to the reader.
>
> 2. Give each judge T hours to write down their description of the
> language.
>
> 3. Remove any feature from the language that K or more judges
> forgot to describe.
>
>If K=1, this is the extreme of making the language the intersection of
>what people remember.
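[As an aside, the K-threshold rule quoted above can be sketched in a few
lines of Python. This is only an illustration of the counting rule, not
anything from Robison's post; the feature names and judges' lists below
are invented.

    def prune_language(features, judge_descriptions, k):
        """Keep only the features that fewer than k judges forgot to describe."""
        kept = []
        for feature in features:
            # Count the judges whose written description omits this feature.
            forgot = sum(1 for described in judge_descriptions
                         if feature not in described)
            if forgot < k:
                kept.append(feature)
        return kept

    # Hypothetical example with N=3 judges.
    features = ["for-loop", "goto", "trigraphs"]
    judges = [
        {"for-loop", "goto"},
        {"for-loop", "trigraphs"},
        {"for-loop"},
    ]
    print(prune_language(features, judges, k=1))  # ['for-loop'] -- the pure intersection
    print(prune_language(features, judges, k=3))  # all three features survive

With K=1 the surviving language is exactly the intersection of what the
judges remembered; larger K keeps more of the union.]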
I am trying to see whether this could be a solution. Let's say that I am
interested in the core of a natural language like English.
Most people will agree that around 2000-3000 words are enough to
sustain elementary arguments. Applying the test above, would a hunter
in the mountains select the same set of words as a fisherman at
sea? Surely not.
They would share a limited common set, but neither could live without
a specific extension that is useless to the other. Some of these
concepts can be painfully derived from the common set; others cannot
(steep for the hunter, stream for the fisherman).
I consider this to hold for computer science as well. The test above
will reflect the common experience of the judges. Networking specialists
take advantage of different parts of a language than database or
scientific-computing specialists do (even at the level of operators such
as string concatenation, addition, I/O, ...).
[Having written both books and software, I can say that if I were
writing 1500 page collaborative books with frequent updates, often not
by the original author, and in which small errors could make the
entire book illegible, I'd want to use a considerably smaller
vocabulary than I do in my books now. -John]