
Related articles

Grammars for future languages schinz@guano.alphanet.ch (1995-10-22)
Re: Grammars for future languages torbenm@diku.dk (1995-10-24)
Re: Grammars for future languages mfinney@inmind.com (1995-10-24)
Re: Grammars for future languages RWARD@math.otago.ac.nz (Roy Ward) (1995-10-26)
Re: Grammars for future languages wdavis@dw3f.ess.harris.com (1995-10-26)
Re: Grammars for future languages timd@Starbase.NeoSoft.COM (1995-10-30)
Re: Grammars for future languages jaidi@technet.sg (1995-11-09)
Re: Grammars for future languages martelli@cadlab.it (1995-11-04)
[12 later articles]

Newsgroups: comp.compilers
From: torbenm@diku.dk (Torben AEgidius Mogensen)
Keywords: syntax, design
Organization: Department of Computer Science, U of Copenhagen
References: 95-10-103
Date: Tue, 24 Oct 1995 09:40:24 GMT

schinz@guano.alphanet.ch (Michel Schinz) writes:

>The grammars of the majority of today's programming languages (C[++],
>Ada, Pascal, Eiffel, Dylan, etc.) are Algol-like. By this, I mean that
>they have a different syntax for almost every concept and special
>support for arithmetic operators.

>However, there are (at least) two main exceptions: Lisp-like grammars
>and Self/Smalltalk-like grammars.

>Algol-like grammars are believed to be easier to understand and closer
>to the usual (mathematical) notation and English. On the other hand,
>they have problems: they are big (hard to learn and remember) and the
>operator/function-call distinction is a big problem. For example, in
>C++ you can overload existing operators but you cannot define new
>ones. In Eiffel, you can define new operators, but you cannot define
>their priority and associativity.

>On the other hand, Lisp-like and Smalltalk-like grammars are very
>simple: only one or two notations are used for everything. However,
>the notations used for arithmetic operations do not conform to
>mathematical notation.

[...]

>Also, even if being close to mathematical notation was once very
>important, because the vast majority of programs used mathematics a
>lot, this isn't true anymore. OK, there are still a lot of
>mathematical programs, but there is also a wide range of computer
>applications which simply do not need a special notation for
>arithmetic operations (compilers are an example).

You are basically arguing that, since many programs do only a little
arithmetic, it is OK to drop mathematical notation and use a more
uniform grammar, like Lisp's.

While Lisp-like grammars may have the advantage that you don't have
to remember very many syntactic constructs (and studies have
indicated that syntactic details are indeed what you forget first),
this is an advantage only when writing a program. When you later have
to read or modify a program, the uniform syntax of Lisp and its
relatives makes the job of understanding the code harder. I have used
Lisp, Scheme, and "Algol-like" languages extensively, and I find that
the Lisp syntax is a problem when reading old code.

An alternative to dropping mathematical notation on the grounds of
little use is to extend the use of mathematical notation to
non-numerical computations. This is done successfully in functional
languages like Haskell, where (for example) you can write code like

[(f . f) x | f <- fncs, x <- elems, x < 17]

which takes the functions f from the list fncs and the elements x from
the list elems such that x < 17, and applies the function obtained by
composing each f with itself to each such x, constructing a new list
from the results.

The notation is taken from mathematical set notation and,
additionally, infix operators work on non-numerical objects
(functions, lists, etc.).
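The comprehension above is runnable Haskell as it stands; here is a
minimal self-contained sketch, where the lists fncs and elems are
invented values (not part of the original post) chosen so the guard
actually filters something:

```haskell
-- Hypothetical concrete inputs for the comprehension from the post.
fncs :: [Int -> Int]
fncs = [(+ 1), (* 2)]     -- a list of functions

elems :: [Int]
elems = [3, 20, 5]        -- 20 is rejected by the guard x < 17

-- For each f and each x with x < 17, apply f composed with itself to x.
result :: [Int]
result = [(f . f) x | f <- fncs, x <- elems, x < 17]

main :: IO ()
main = print result       -- [5,7,12,20]
```

Note that the guard, the generators, and the function composition all
use the same set-builder reading as the mathematical notation.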

>Simple and uniform grammars also have a great advantage when one wants
>to add new features to a language, like object-oriented capabilities.
>With simple grammars, user-defined constructs look just like
>predefined constructs. That means that it is possible to add new
>language features without modifying the grammar, and thus the
>compiler.

The crucial point here is not so much having a simple and uniform
grammar, but rather that the programmer has access to a parser for the
language, one that can be extended to include new features. Macro
facilities support this to some extent, but a full parser is more
flexible.
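The idea can be sketched with a parser the programmer controls as
ordinary data. In the following (the names OpTable and parseWith are
made up for illustration, and the parser is deliberately
left-associative with no precedence to keep it short), extending the
language with a new operator means extending a table, not changing a
grammar:

```haskell
-- A sketch of a user-extensible parser using the standard ReadP
-- combinators: the operator table is plain data, so new operators
-- need no change to the grammar or the compiler.
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

type OpTable = [(Char, Int -> Int -> Int)]

-- Parse number (op number)*, folding left-to-right with no precedence.
expr :: OpTable -> ReadP Int
expr ops = number >>= rest
  where
    rest acc =
      (do f <- choice [f <$ char c | (c, f) <- ops]
          y <- number
          rest (f acc y))
      +++ return acc
    number = read <$> munch1 isDigit

-- Keep only parses that consume the whole input.
parseWith :: OpTable -> String -> Maybe Int
parseWith ops s =
  case [r | (r, "") <- readP_to_S (expr ops) s] of
    (r:_) -> Just r
    []    -> Nothing

main :: IO ()
main = do
  print (parseWith [('+', (+)), ('*', (*))] "1+2*3")   -- left-to-right: Just 9
  -- Extending the "language" is just extending the table:
  print (parseWith [('+', (+)), ('-', (-))] "10-3-2")  -- Just 5
```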

>I think that this issue is an important one, because if all new
>languages are designed to have a simple grammar, parsing could slowly
>become much easier, and its importance in compilation would decrease.

Parsing is not the hard part of compiling; using a parser generator
makes it the smallest part. When people actually spend a lot of
time fiddling with their yacc grammars, it is because of the very
primitive support for attributes (especially inherited attributes) or
because of conflicts caused by limited look-ahead or ambiguity (the
latter often introduced by changes to the grammar needed for
attribution).

Torben Mogensen (torbenm@diku.dk)

--
