
Related articles

[3 earlier articles]
Re: syntax extension, was Why context-free? nmm1@cus.cam.ac.uk (2005-11-01)
Re: syntax extension, was Why context-free? 148f3wg02@sneakemail.com (Karsten Nyblad) (2005-11-01)
Re: syntax extension, was Why context-free? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2005-11-02)
Re: syntax extension, was Why context-free? haberg@math.su.se (2005-11-02)
Re: syntax extension, was Why context-free? toby@telegraphics.com.au (toby) (2005-11-04)
Re: syntax extension, was Why context-free? henry@spsystems.net (2005-11-26)
Re: syntax extension, was Why context-free? haberg@math.su.se (2005-11-27)
Re: syntax extension, was Why context-free? mpah@thegreen.co.uk (2005-12-08)
Re: syntax extension, was Why context-free? rfigura@erbse.azagtoth.de (Robert Figura) (2005-12-15)
Re: syntax extension, was Why context-free? nmm1@cus.cam.ac.uk (2005-12-15)
Re: syntax extension, was Why context-free? mpah@thegreen.co.uk (2005-12-15)

From: haberg@math.su.se (Hans Aberg)
Newsgroups: comp.compilers
Date: 27 Nov 2005 00:35:20 -0500
Organization: Mathematics
References: 05-10-053 05-11-004 05-11-014 05-11-028 05-11-115
Keywords: syntax, design

In article 05-11-115, henry@spsystems.net (Henry Spencer) wrote:

> >[human-readable theorem-prover output]
> >In addition, I have recently started to use Unicode UTF-8, using correct
> >mathematical symbols instead of ASCII words. This turns out to help up the
> >code a great deal.
>
> You might also want to have a look at Lamport's paper "How to write a
> long formula" (DEC SRC-119, 1994), possibly still findable at
> <http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/SRC-119.pdf>,
> in which he relates some experience with notations for presenting
> lengthy program-correctness formulas.

I have a vague memory of coming across this paper before. The kinds of
constructions suggested there are already in use in mathematics in
various forms to simplify notation. The reason this does not show up so
clearly in Lamport's context is that he focuses on formulas alone,
whereas I am writing a program that also supports statements such as
theorems grouped into theories. It then turns out that mathematicians
use those structures to simplify formulas, simply by putting
definitions ahead of the formulas instead of using the "let .. in"
construction that Lamport suggests.
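The contrast can be illustrated with a small formula of my own
choosing (not one taken from Lamport's paper); the binding is the
same in both forms, only its placement differs:

```latex
% Lamport's "let .. in": the definition is embedded in the formula.
\[
  \mathbf{let}\ d = b^{2} - 4ac\ \mathbf{in}\quad
  x = \frac{-b + \sqrt{d}}{2a}
\]
% The mathematician's habit: the definition is stated ahead of the formula.
\[
  \text{Let } d = b^{2} - 4ac. \qquad
  \text{Then } x = \frac{-b + \sqrt{d}}{2a}.
\]
```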

As for the "if else" construct he discusses, my program is built around
a semantic type which is at present quite similar to an abstract syntax
tree (AST), so I simply call it a "semantic tree". One difference is
that an AST is usually an intermediate in language translation, whereas
in my theory the semantic tree is the primary object. (Another
difference is that one need not use trees; in an alternative theory,
bound variables could, for example, be represented by arrows to the
heads that bind them, which has certain advantages and disadvantages.)
Thus the parser is just an interface for translation into the semantic
tree, and there is no fixed requirement on what syntax a parser should
handle. This approach is necessary in the context of pure mathematics,
because there is no agreement on what the syntax should be, and if the
mathematician cannot let notions and notation flow together, the
formulas quickly become unreadable. (Note that this approach also
breaks with much traditional metamathematics, which is usually
syntactic, not semantic, in its study of the object mathematical
theory.)

Then, once one has internal semantic trees to compute with, there is
the opposite problem, "expressing" or pretty-printing, and again there
is no fixed requirement on what it should be. Right now I have some
hardwired parsing and expressing whose effect is that the output is a
formatted version quite similar to the input (with unproved
substatements marked), which also offers the chance, at will, to
inspect the proofs found, as a form of debugging. This suffices for me
for now, but I am thinking of hooking up more general parsing and
expressing engines later; one reason for doing so is that it will
simplify programming, as I can then focus more on the computing engine.
If carried out, Lamport's "if else" suggestion could then be
implemented by whoever designs an input grammar, and a person who does
not like the grammar used for a particular input could format the
output into something else.
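The arrangement can be sketched in a few lines. This is a toy of my
own construction, not code from my program: the semantic tree is the
primary object, while `parse` (one possible input syntax, a tiny
prefix notation) and `express` (one possible output syntax, infix with
Unicode symbols) are merely replaceable interfaces around it.

```python
from dataclasses import dataclass

# The semantic tree is the primary object; parsers and expressers
# are interchangeable interfaces around it.
@dataclass
class Var:
    name: str

@dataclass
class App:
    head: str     # e.g. "and", "implies"
    args: tuple   # child semantic trees

def parse(s: str) -> object:
    """One possible input syntax: prefix notation, e.g. "and(p, q)"."""
    s = s.strip()
    if "(" not in s:
        return Var(s)
    head, rest = s.split("(", 1)
    body = rest.rsplit(")", 1)[0]
    args, depth, start = [], 0, 0
    for i, c in enumerate(body):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c == "," and depth == 0:
            args.append(body[start:i])
            start = i + 1
    args.append(body[start:])
    return App(head.strip(), tuple(parse(a) for a in args))

# One possible output syntax: infix with Unicode symbols. A different
# expressing engine could render the same tree quite differently.
SYMBOL = {"and": "∧", "or": "∨", "implies": "→"}

def express(t: object) -> str:
    if isinstance(t, Var):
        return t.name
    sep = f" {SYMBOL.get(t.head, t.head)} "
    return "(" + sep.join(express(a) for a in t.args) + ")"
```

For example, `express(parse("implies(and(p, q), p)"))` yields
"((p ∧ q) → p)": the ASCII input and the symbolic output meet only in
the semantic tree, so either side can be swapped out independently.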

As for the third suggestion, the numbering of subformulas: similar
constructs are in use in mathematics, namely in the form of a theorem
with subparts that are to be proved separately. One sometimes wants
this because a more general proof method can be applied to the whole
sequence of formulas. I am looking to implement such things as well;
it is not difficult, merely time-consuming, but important if one is to
be able to enter theories in an efficient, human way.
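As a toy illustration of applying one proof method uniformly to a
sequence of numbered subparts (the helper and the arithmetic example
are mine, not from any existing prover):

```python
def prove_all(subparts, method):
    """Apply one proof method to every numbered subpart.

    Returns the 1-based numbers of the subparts the method fails on;
    an empty list means the whole sequence is proved.
    """
    return [i for i, p in enumerate(subparts, 1) if not method(p)]

# Subparts as (lhs, rhs) pairs; the "method" is arithmetic evaluation,
# checking n(n+1)/2 = 0 + 1 + ... + n for n = 1..5.
subparts = [(n * (n + 1) // 2, sum(range(n + 1))) for n in range(1, 6)]
failures = prove_all(subparts, lambda p: p[0] == p[1])
```

Here `failures` comes back empty, since the identity holds for every
instance in the sequence.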

Otherwise, the comparison between pure mathematics and the structure
of modern computer languages is very interesting. For example, pure
mathematics is definitely declarative, and so is my program. The proof
of a theorem can have subtheorems, which correspond to nested
environments in a computer language. Theorems and other statements are
grouped into theories, and the latter correspond to the modules of a
computer language. For example, a reductio ad absurdum proof in
mathematics (which appears frequently, hidden in various forms) is
formally a relation between two different theories, one with the
contradiction and one without. So if one is to adhere to strict
formality, theories developed as modules become necessary. And it
seems that the lambda calculus, which corresponds to functional
languages, shows up fairly quickly, being needed to trace back
statements when these have a special form.
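The reductio relation between the two theories can be sketched
concretely in the propositional case, where entailment is decidable by
brute force over truth assignments (all names here are my own, and
this semantic check stands in for a real derivation apparatus): the
extended theory T ∪ {¬P} is inconsistent exactly when T entails P.

```python
from itertools import product

def entails(theory, goal, atoms):
    """Brute-force propositional entailment over all truth assignments.

    theory: list of predicates env -> bool; goal: predicate env -> bool.
    """
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(f(env) for f in theory) and not goal(env):
            return False  # a model of the theory where the goal fails
    return True

def inconsistent(theory, atoms):
    # A theory with no models entails even the always-false goal.
    return entails(theory, lambda env: False, atoms)

atoms = ["p", "q"]
T = [lambda e: (not e["p"]) or e["q"]]   # axiom: p → q
extra = lambda e: e["p"]                 # assumption: p
neg_goal = lambda e: not e["q"]          # negated goal: ¬q
```

With these definitions, `inconsistent(T + [extra, neg_goal], atoms)`
holds, and correspondingly `entails(T + [extra], lambda e: e["q"],
atoms)` holds: the theory containing the contradiction and the theory
proving q are two sides of one formal relation.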

So many of the logical structures used in pure mathematics and in
computer language design, apparently developed over the last couple of
decades wholly independently of each other, rest on similar underlying
logical superstructures. Perhaps it is due to the human way of
thinking, perhaps to cross-feeding; I do not know. It does suggest,
however, that the two will flow together more in the future.

--

Hans Aberg
