Newsgroups: comp.compilers
From: torbenm@diku.dk (Torben AEgidius Mogensen)
Keywords: syntax, design
Organization: Department of Computer Science, U of Copenhagen
References: 95-10-103 95-11-040
Date: Thu, 9 Nov 1995 12:30:09 GMT
"Jacco van Muiswinkel" <jaccom@knoware.nl> writes:
>"What does the world want from a programming language?"
>Let's cautiously analyse what "the world" wants from a programming
>language.
>1. Reliable code
>2. Readable sources
>3. No hassles
>4. Short development time
>5. Portable code
>ad 1. Reliable code:
>The reliability of programmes is, as far as I am concerned a major
>issue. Programming languages should have
>mechanisms to support the programmer in maintaining reliable code.
A source of much unreliability in programs is the programmer having
to do lots of tedious administration of resources (storage etc.). The
language should provide automatic administration of commonly used
resources, e.g. storage. Automatic storage allocation and reclamation
is a step in this direction, and is slowly being adopted into mainstream
languages, where it has long been considered inappropriate
due to a reputation for being slow. I also think we will (and should)
see a move towards having communication/multithreading as an integral
part of languages, rather than bolted onto the language through
library calls. A simpler matter is management of temporaries. If
you have to explicitly provide a variable to store the result of each
intermediate calculation (as in assembler, or when manipulating strings
in C), this invites errors (through misuse or erroneous declaration
of variables) and decreases readability.
A good compile-time consistency check (e.g. a type check) can also
increase reliability. Use of a single type for many purposes, e.g. C's
use of integers to represent booleans, characters and numbers, can
let many errors go uncaught. How often have you written
  if (x=0) {...}
in C and spent a lot of time finding this error, which would have been
caught by a type system that separated booleans from integers?
Other sources of unreliability are bad readability and erroneous
porting; more about these later.
>ad 2. Readable sources:
>The source code (if sourcecode is still what we're talking about)
>should be highly readable. This is a point that is highly related to
>the subject of the above mentioned posting. What is readable code?
[example deleted]
> The layout of the programme looks like the
>second line is being executed 100 times but it isn't. Python has a
>way around this by using indentation as a means to notate compound
>statements. I think that is a great concept and concepts like these
>should assist the programmer.
The syntax can help readability in several ways. Jacco's
point is that the syntax should make the structure evident, not
obscure it (as it may in LISP or C). Using "active" indentation
ensures that the indentation reflects the structure and is hence a good
thing. I do, however, often find such off-side rules (as they are
often called) ill described, as they are normally given separately
from the grammar itself. We might need some new grammar theory or
notation to describe these things properly, as they cut across
the traditional separation of lexical and syntactic issues.
In addition to showing structure clearly, it should be possible to
describe application specific data or specifications in a notation
that is readable by non-programmers in the application area. As an
example, engineers should be able to recognize their formulas in the
code, so it should not be obfuscated by forced use of prefix or
postfix notation.
Compact notation also improves readability, as it increases the amount
of code that you can see at a time. This can be done the wrong way (as
in C) by simply shortening all keywords to one or two characters, or
the right way by providing stronger language primitives that allow
the same thing to be specified using fewer symbols (where symbol !=
character).
While syntax plays a large role in readability it is not the only
issue.
Another source of unreliability is imprecise description or
non-intuitive behaviour of language constructs. If it is not 100% clear
to the programmer what a piece of code means, this can hamper
understanding of the program as a whole and may obscure errors
caused by unexpected behaviour of a construct.
>3. No hassles:
>Languages like ADA, as far as I have seen them, protect the
>programmer against sloppy programming, but hamper the programmer by
>asking extensive declarations of exit and entry conditions.
Functional languages have found a good solution to this: The language
does extensive type checking, but you do not need to declare the types
of variables and functions (though you are allowed to do so, to
increase readability and catch "consistent type errors"). This gives
the protection of a strong type system without the hassle.
>4. Short development time:
>Rapid prototyping, and time to market. I didn't invent them, but
>they are contemporary issues. I think the only way to attain this
>goal is by extensive use of libraries and multiple grades of
>programmers within one company.
Another way is to provide very high-level constructs in the language
that allow working prototypes to be written quickly and then, staying
in the same language, optimized to production-level efficiency. While
extensive libraries may help you quickly prototype code that is very
similar to something you have done before, they are no help whatsoever
when developing entirely new code.
Another way to decrease production time is (semi-)automatic
configuration of code. If you can write a general (and possibly
inefficient) program and then automatically specialize this to a
particular configuration or subproblem, obtaining efficient code for
this, you don't have to write and maintain many similar, yet
different, programs. While you may argue that this is not a language
issue, but a tool issue, it is important to design the language to
allow correctness-preserving transformations to be easily done. If
the meaning of a bit of code depends too heavily on non-adjacent code
that is not referred to in the code itself, such transformations are
hard to guarantee correct.
>5. Portable code:
>I guess this is not a programming language topic but one that has to
>do with operating systems. The main problem in
>portability is that you "have" to port, since there is not yet a
>winner commercially so users of multiple platforms have to be
>served. Yet it is a problem that has to be solved in
>languages/libraries.
A hindrance to portability is implementation-specific behaviour of
language constructs. The prime example of this is C. Even if a C
program only refers to the outside world (through the OS) in trivial
ways, such as simple I/O, you cannot expect the program to work if
you recompile it on another platform. A common problem is the size of
integers. If you assume an integer is 32 bits long, you get into
trouble when you compile on a machine where integers are 16, 24 or
64 bits long. Even such a trivial thing as integer division is
implementation-specific. Other common problems are different orders of
evaluation on different platforms, or even on the same platform at
different optimization levels.
A portable language should have no implementation-specific features
whatsoever. This, obviously, causes problems when interfacing to the
outside world, as operating systems aren't identical. A solution (as
Jacco describes) is to pack such OS dependencies into libraries, so
that the program has a uniform interface through the libraries, even
though the code in the libraries changes according to the OS. A
problem here is that the differences in graphical user interfaces may
be so radical that the level of abstraction needed to allow the same
code to run under Windows, MacOS and X (while conforming to the
standard way of interfacing that each OS suggests) is very high. You
can't, e.g., use a notion of "pressing the middle mouse button", as the
hardware may not provide one. You need to abstract to levels such as
"select menu item", "place cursor in text" etc., which may be done in
radically different ways on different platforms (e.g. by pull-down
menus, pop-up menus or a control key). Such a level of abstraction may
well be unable to use a reasonable fraction of the "features" the GUI
offers.
Instead of using libraries to separate OS-specific parts of the
program from the general parts, a language may provide other
abstraction mechanisms. OO is one possibility, but there may be
others. OO certainly hasn't solved all portability problems.
Summing up, this posting has said more about what _not_ to do than
what to do. And the things I have suggested you should do are
taken from various existing non-mainstream languages. Such things
just take time to filter into the mainstream. This is partly due to
lack of professional support for such languages, and partly to an
unwillingness in the industry to move to new languages, especially
ones radically different from what it is used to.
Torben Mogensen (torbenm@diku.dk)