Re: simple vs complex languages
8 Jun 2003 22:05:10 -0400

          From comp.compilers


Newsgroups: comp.compilers
Date: 8 Jun 2003 22:05:10 -0400
Organization: ...disorganized...
References: 03-04-095 03-05-182 03-05-199 03-06-010 03-06-034
Keywords: design
Posted-Date: 08 Jun 2003 22:05:10 EDT

Nick Maclaren wrote:
> writes:
> |> I think ambiguity is a very important feature of
> |> human communication (This is an unorthodox opinion, but I have a lot
> |> of professional experience, mostly in natural-language software, and
> |> it seems true to me). I can't easily imagine wanting that kind of
> |> dynamic in a language that specifies instructions to a machine.
> Good heavens! Is it really unorthodox? I know that I do it
> deliberately, and that many people do not realise when they are doing
> it, but I thought that it was the accepted model.

It's unorthodox among natural-language system designers. The marching
orders (usually the only ones the marketing guys or business planners
can sign off on for a product) boil down to "extract the most likely
parse and ignore all others, in a process that requires no actual
human judgement" - an assumption so fundamental to the design that it
cannot be questioned by the time programmers are actually involved.

IMO, this is one of the reasons natural-language systems have enjoyed
so little success.

It is not terribly unorthodox among linguists who *study* human
languages, but the field also has scads of ideologues who want to
revise, prescribe, or even artificially *create* languages for humans,
assuming that reducing or eliminating ambiguity would be a good thing
as a fundamental design principle. My opinion is that if these
ideologues got their way they would mentally cripple entire
generations by shackling their thinking and creative processes.
However, since that seems slightly less likely than pigs growing wings
and flying, I tend not to worry about it.

> A specification defines some behaviour in such a way that the
> boundary with undefined behaviour is relatively easier for the
> specification to define than it is for the programmer or user to
> avoid.
> A programmer or user then steps over the boundary by accident, and
> the product fails unsafe (often doing something completely bananas,
> like corrupting data).
> Somebody then tracks this down and complains to the relevant
> authors that this is an error in the specification or product, and is
> told "That is YOUR fault - the specification clearly says that is
> undefined behaviour."

Yes. Happens all the time. And the diagnostic symptom is that
something which is a subtle or trifling distinction to human
ambiguity-embracing thought processes is made crucial or semantic in a
language specification. As programmers, we think fairly precisely -
we have to. But our brains are still fuzzy devices which trade deeply
on generalizations and ambiguities, and we still tend to mean to say
something, write symbols that say something "close to it", and then
not notice the difference until the machine reminds us by doing what
we said instead of what we meant.

The problem is with computer languages that make it too easy to write
something "close to it" without noticing that it doesn't actually say
what we mean. To me, infix operators with complicated precedence and
associativity rules are absolutely begging for trouble.

Good computer language design (and good programming library design for
that matter) consists largely in finding a good mapping between
concepts meaningful to humans and instructions meaningful to
computers. It's rather like laying out the map of a city (an
organized structure that humans will have to use) on existing natural
terrain. You want to avoid putting useful and necessary operations
near undefined behavior in the same way you want high-traffic roads or
playgrounds not to be next to cliffs people can fall over. And you
want to keep semantically different operations syntactically very
distinct from each other, in the same way you want high-traffic roads
and playgrounds reasonably distant from each other in the city,
because a user mistaking one for the other could be disastrous.

