Re: syntax extension, was Why context-free?



From: mpah@thegreen.co.uk
Newsgroups: comp.compilers
Date: 8 Dec 2005 02:34:07 -0500
Organization: http://groups.google.com
References: 05-10-053 05-11-004 05-11-014 05-11-028 05-11-115 05-11-122
Keywords: syntax, theory
Posted-Date: 08 Dec 2005 02:34:07 EST

In this context, you may be interested to note a possibly extreme
interpretation of my finding that the language machine effectively
contains the lambda calculus, namely that 'grammar contains
mathematics'. See http://languagemachine.sourceforge.net/lambda.html
and http://languagemachine.sourceforge.net/curried_functions.html. The
finding itself is summarised on one page of A4 (pdf) at
http://languagemachine.sourceforge.net/language_machine_outline.pdf.
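To give a flavour of what the lambda experiment establishes, here is a minimal sketch in plain Haskell (my own ad hoc notation, nothing to do with the language machine's metalanguage): beta reduction is itself just a recognise-and-substitute rule applied to terms, which is the heart of the claim that substitution rules contain the lambda calculus.

    -- Lambda terms, and beta reduction as a recognise-and-substitute rule.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Naive substitution (no capture avoidance), adequate for the closed
    -- example below.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One step of the rule: recognise (\x. body) arg, substitute to get
    -- body with x replaced by arg; otherwise look for a redex inside.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = case step f of
                               Just f' -> Just (App f' a)
                               Nothing -> fmap (App f) (step a)
    step (Lam x b)         = fmap (Lam x) (step b)
    step _                 = Nothing

    -- Example: (\x. x) y  reduces in one step to  y
    main :: IO ()
    main = print (step (App (Lam "x" (Var "x")) (Var "y")))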


In designing the metalanguage for the language machine I deliberately
avoided encouraging deeply bracketed structures. As I show in the
lambda experiment, you can think of rules that recognise and
substitute grammatical sequences as equivalent to functions with
pattern matching in the style of ML and Haskell, except that nested
recognition phases can, and most of the time do, occur as part of the
matching process. So if you encourage deep nesting, it can occur on
both the left- and right-hand sides of rules, and the results can
become pretty difficult to arrange on the page.
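As a rough analogy, sketched again in plain Haskell rather than the metalanguage (the names Rule, term and expr are mine, purely for illustration): the left side of a rule behaves like a function's pattern, the right side like its body, and matching the left side can itself trigger further recognition, in the way that expr below calls term while it is still consuming its input.

    -- A recogniser takes a token sequence and, on success, yields a value
    -- and the remaining input.
    type Rule a = [String] -> Maybe (a, [String])

    -- Recognise a single number token.
    term :: Rule Int
    term (t:rest) | all (`elem` ['0'..'9']) t = Just (read t, rest)
    term _                                    = Nothing

    -- "expr <- term '+' term": nested recognition happens during the match,
    -- and the replacement (here, the sum) is produced on the right.
    expr :: Rule Int
    expr input = do
      (a, rest1)  <- term input
      ("+":rest2) <- Just rest1
      (b, rest3)  <- term rest2
      Just (a + b, rest3)

    main :: IO ()
    main = print (expr ["2", "+", "3"])   -- Just (5, [])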


Of course rules in the language machine are essentially simple
replacements, and the metalanguage is intended to emphasise that fact.


Incidentally, once you understand that unrestricted rules which
recognise and substitute grammatical sequences are so closely related
to functions in functional languages, it becomes clear why it is so
hard to make sense of unrestricted rules when they are written in the
traditional Chomsky/BNF generative direction: it is the equivalent of
having to write the name and arguments of a function after its body.
Our brains are very flexible, true, but they can also get pretty bent.
Analysis is the coalface of language, yet generative engines cannot
directly do any kind of analysis, and thinking indirectly about things
that can be tackled directly is probably bad for the brain.
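To make the direction point concrete, here is a tiny sketch in plain Haskell (again my own notation, not the metalanguage): reading the generative rules S -> a b and S -> a S b in the analytic direction gives replacement rules that can be applied directly to input, reducing any sequence a^n b^n to a single S.

    import Data.List (isPrefixOf)

    -- Each rule is a (left side, right side) pair of token sequences,
    -- read analytically: recognise the left side, substitute the right.
    rules :: [([String], [String])]
    rules = [ (["a", "b"],      ["S"])    -- analytic reading of S -> a b
            , (["a", "S", "b"], ["S"]) ]  -- analytic reading of S -> a S b

    -- Apply a rule at the leftmost position where any rule matches.
    step :: [String] -> Maybe [String]
    step [] = Nothing
    step input =
      case [ rhs ++ drop (length lhs) input
           | (lhs, rhs) <- rules, lhs `isPrefixOf` input ] of
        (out:_) -> Just out
        []      -> fmap (head input :) (step (tail input))

    -- Keep rewriting until no rule applies; a lone "S" means the input
    -- was recognised.
    recognise :: [String] -> Bool
    recognise input = maybe (input == ["S"]) recognise (step input)

    main :: IO ()
    main = print (recognise (words "a a a b b b"))   -- True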


Somewhere in the history of this topic there was quite a lot of
discussion about van Wijngaarden W-grammars. It seems to me that they
were a valiant attempt to stretch the envelope of what you can do with
the generative model of language. But in the language machine you now
have a reasonably efficient and usable system which has been shown to
be Turing-complete, and which directly applies unrestricted analytical
rules.


Any volunteers to attempt translating W-grammars into rules in the
language machine? It might even be feasible; whether it would be
desirable or useful I'm really not sure.

