Date: 8 Dec 2005 02:34:07 -0500
References: 05-10-053 05-11-004 05-11-014 05-11-028 05-11-115 05-11-122
Posted-Date: 08 Dec 2005 02:34:07 EST
In this context, you may be interested to note a possibly extreme
interpretation of my finding that the language machine effectively
contains the lambda calculus, namely that 'grammar contains
mathematics'. See http://languagemachine.sourceforge.net/lambda.html
and http://languagemachine.sourceforge.net/curried_functions.html. The
finding itself is summarised on one page of A4 (pdf) at
In designing the metalanguage for the language machine I deliberately
avoided encouraging deeply bracketed structures. As I show in the
lambda experiment, you can think of rules that recognise and
substitute grammatical sequences as equivalent to functions with
matching in the style of ML and Haskell, except that nested recognition
phases can, and usually do, occur as part of the matching process. So
if you encourage deep nesting, it can occur on both the left- and
right-hand sides of rules, and the results can become pretty difficult
to arrange on the page.
Of course rules in the language machine are essentially simple
replacements, and the metalanguage is intended to emphasise that fact.
Incidentally, once you understand that unrestricted rules that
recognise and substitute grammatical sequences are so closely related
to functions in functional languages, it becomes clear why it is so
hard to make sense of unrestricted rules when they are written in the
traditional Chomsky/BNF generative direction - it's the equivalent of
having to write the name and arguments of the function after its body.
Our brains are very flexible, true, but they can also get pretty bent.
Analysis is the coalface of language; generative engines cannot
directly do any kind of analysis, and thinking indirectly about things
that can be tackled directly is probably bad for the brain.
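The point that analysis can be done directly can be sketched as
repeated rule application. Here is a hedged toy example (my own,
assuming the classic a^n b^n grammar, with invented rule names): the
analytic rules "a b -> S" and "a S b -> S" are applied until no rule
matches, reducing a valid sentence to the single symbol S.

```python
# Hypothetical sketch: analysis as repeated recognise-and-substitute
# over the toy grammar a^n b^n (rules "a b -> S" and "a S b -> S").

def step(syms):
    """Apply the first matching rule anywhere in the input, or
    return None if no rule applies."""
    for i in range(len(syms)):
        if syms[i:i + 2] == ["a", "b"]:
            return syms[:i] + ["S"] + syms[i + 2:]
        if syms[i:i + 3] == ["a", "S", "b"]:
            return syms[:i] + ["S"] + syms[i + 3:]
    return None

def analyse(syms):
    """Rewrite until no rule matches; a valid a^n b^n sentence
    reduces all the way to ['S']."""
    nxt = step(syms)
    return syms if nxt is None else analyse(nxt)
```

So `analyse(["a", "a", "b", "b"])` reduces to `["S"]`, while an
invalid input such as `["b", "a"]` is left unreduced: the rules
themselves perform the analysis, with no separate generative engine.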
Somewhere in the history of this topic there was quite a lot of
discussion about van Wijngaarden w-grammars. It seems to me that they
were a valiant attempt to stretch the envelope of what you can do with
the generative model of language. But in the language machine you now
have a reasonably efficient and usable system which has been shown to
be Turing-complete, and which directly applies unrestricted analytical
rules.
Any volunteers to attempt translating w-grammars to rules in the
language machine? It might even be feasible - whether desirable or
useful I'm really not sure.