Re: Definable operators

Craig Burley <burley@tweedledumb.cygnus.com>
13 May 1997 22:50:44 -0400

          From comp.compilers

From: Craig Burley <burley@tweedledumb.cygnus.com>
Newsgroups: comp.compilers
Date: 13 May 1997 22:50:44 -0400
Organization: Cygnus Support
References: 97-03-037 97-03-076 97-03-112 97-03-115 97-03-141 97-03-162 97-03-184 97-04-027 97-04-095 97-04-113 97-04-130 97-04-164 97-04-169 97-05-151
Keywords: design

mfinney@lynchburg.net writes:


> Craig Burley <burley@tweedledumb.cygnus.com> writes:


> What we need is *not* a simple language, but a large, powerful
> language which is simple to use and understand. And where the
> features are orthogonal enough that not knowing or using a feature
> does not impact either the programmer's time (unless he/she is
> learning the feature, of course) or the program the programmer is
> writing.


That's a great statement. I'd add that a great language would be one
where simple things are simple to express, and those expressions are
immediately grasped as meaning the original simple things -- while
complicated things would require expressions that not only "mapped"
well to the meanings, but were also very difficult to misread as
other, especially simpler things.


> Another point is that this idea that "there should only be ONE way to
> do something" has taken disastrous hold in the minds of most people
> involved in language design (excepting Perl). The disadvantage of
> that meme is that since there IS only one way to do something, it is
> invariably very difficult. Also, you can't say what you mean.


Excellent point. I've never been very fond of the "one way only"
argument, since what it _usually_ means is "we'll give you one way to
express a variety of things that _we've_ decided need only one
_implementation_ on the machinery we care about".


So, all those different things end up looking like you are asking for
a particular implementation, thus hiding the algorithm. (This is a
general form of probably the largest stumbling-block dealing with
legacy code, such as old C or Fortran code, written to the
implementation rather than the design.)


> I have used macros to generate such conditionals in C/C++ for years
> and have found them to work very well. Perl also provides the
> "unless" (but does allow an "else" clause, so loses some of the
> benefit). I strongly believe that by adding such new control
> structures to a procedural language, you reduce code complexity,
> because you map the code to the problem with less "shoehorning".


Right. The linguistic meaning of "if foo is true then ..." is
different from "unless foo is true then ...", so the language should
allow either form to be expressed (_not_ just implemented). Since
people _think_ of the two expressions differently, they should be able
to write them differently.


> I think that such a language can be designed -- but you will never
> get it from any committee, and if designed and released it will
> probably not catch on. And, certainly, you can't get such a
> language by adding features at random. But, I think that you CAN
> get such a language. And, you can make it "safe" and "efficient" --
> both from a programmer's standpoint and a problem standpoint.


I personally need to learn a lot more about linguistics, ergonomics,
probably even things like dyslexia, before I could design such a
language. Though, designing a _prototype_ of a language, as part of
an overall project to learn the necessary things, seems a reasonable
first step. (In the meantime, I think I know enough to see that some
language features are better/worse than others, which means I'm
willing to exercise "veto power" to the extent I'm able.)


What I think might have to happen is that lots more end users of these
languages insist on better, clearer, less bug-inducing languages,
before compiler vendors finally decide to invest serious time and
effort _first_ in learning about language design, _second_ in
designing proper languages for the existing and future problem
domains, and _third_ in making implementations that make these go
fast.  (And these aren't priorities, really -- just steps.)


Right now, we seem to have compiler vendors that _first_ make the
implementations fast, _second_ add whatever extensions various
(sometimes paying) customers ask for regardless of their linguistic
sensibility, _third_ invest increasing resources coping with the
resulting complexity explosion (e.g. end users who don't understand
what they're doing, and thus blame the compiler), and _fourth_ defend
the various features they've added as "perfectly reasonable, besides,
our customers asked for them".


Actually I think it's 100% likely that the end-user community _will_
demand the change in priorities from us.


I'd just rather see it happen without first requiring substantial
increases in the yearly body and property-loss counts resulting from
poor language design for people to catch on.


I think every person who attempts to add a feature to an existing
language, or design a new language, for production use, should first
be required to watch every TLC/Discovery/Nova/PBS program on airline
crashes and their causes.  Air travel is (perhaps) the safest form of
travel, but for various (mostly psychological and political) reasons,
the few air disasters are the most thoroughly researched,
investigated, and publicly documented cases that are frequent enough
to actually learn from.


The series of events needed to precipitate such a crash often seems
unbelievably unlikely, yet such crashes do happen.  Increasing
involvement of computers in these events means the man-machine
interface, even at the level of programmers writing applications and
systems code, becomes even more crucial (ref. Airbus crashes).


And, I'll note, one of the crashes I'm fond of telling people about
included a case of operator overloading as one of its preventable
causes.


Basically, the entire cockpit crew was overloaded to focus on
determining whether a light bulb was out or the equipment (landing
gear, IIRC) not engaged. Had the purpose of at least one of those
individuals been single-mindedly preserved -- "fly the plane, Bill,
we'll work on the light bulb", around 300+ people would have survived.
But the pilot in charge decided he could manage the complexity of
delegating tasks to the others, performing some of them himself, _and_
fly the plane at the same time. He was wrong.


The crew also would have survived had the tower, seeing the plane
slowly descend ahead of schedule, radioed the message "report your
current altitude please" instead of "is everything okay?".  By the
time the crew finally did notice what they at first thought was a
malfunctioning altimeter, it was too late -- but the tower asking for
confirmation of the altitude (in the earlier radio message) would have
been timely enough for them to notice that they'd somehow turned off
the auto-pilot during their attempt to assess the landing-gear
situation.


I think the last words were "Uh-oh". Does this sound familiar?


Think of the prompt "Ok to delete file (y/n)?" and compare it to, say,
"Please specify where source file is backed up: ", and consider to
which question a programmer's answer is most likely to later culminate
in "Uh-oh". (E.g. think of the cases where the programmer didn't
think he'd asked to delete any _source_ files.)


This is also why I don't like diagnostics like "type conflict in
assignment -- use a cast" from compilers. Too many programmers "fix"
the message by adding a cast, without noticing where there might be a
real bug before eliminating the final opportunity to catch it at
compile time. And _this_, of course, leads to realizing that
languages that require programmers to use things like casts, or
overloading, or textual substitution, in place of missing higher-level
constructs, are encouraging bugginess.


--
James Craig Burley, Software Craftsperson burley@gnu.ai.mit.edu
--

