Re: Definable operators (was: Problems with Hardware, Languages, and Compilers)


Related articles
Problems with Hardware, Languages, and Compilers hrubin@stat.purdue.edu (1997-03-07)
Definable operators (was: Problems with Hardware, Languages, and Compilers) rrogers@cs.washington.edu (1997-03-09)
Re: Definable operators (was: Problems with Hardware, Languages, and Compilers) Dave@occl-cam.demon.co.uk (Dave Lloyd) (1997-03-16)

From: Dave Lloyd <Dave@occl-cam.demon.co.uk>
Newsgroups: comp.compilers
Date: 16 Mar 1997 23:31:17 -0500
Organization: Compilers Central
References: 97-03-037 97-03-043
Keywords: design

> [IMP72 let you stick BNF in your programs so you could add any
> syntax you wanted. The problem was that everyone did, so before you
> could read an IMP72 program you first had to figure out what
> language it was written in. Experience with C++ tells me that even
> operator overloading can easily lead to unmaintainable code if you
> can no longer tell at a glance what each operator does. -John]


I have been programming happily in Algol 68 for many, many years - the
language that introduced user-definable operators to the mainstream of
imperative programming. And I must say I find John's comments
surprisingly tinged with Luddism! Even Fortran users are now
discovering the joys of extending mathematical operators (over the
quaternions, say) or inventing new operators such as .CROSS. between
vectors. Like everything else, a certain discipline and common sense are
required: if I overload "+" it usually means some form of addition and
never some completely inappropriate operation.
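
By way of illustration, a cross-product operator might be declared
like this in Algol 68 (the VEC mode and the CROSS name are just
illustrative choices of mine, not from any particular library):

    MODE VEC = STRUCT (REAL x, y, z);

    PRIO CROSS = 7;                   # a new dyadic operator needs a priority #
    OP CROSS = (VEC a, b) VEC:        # cross product of two 3-vectors #
       (y OF a * z OF b - z OF a * y OF b,
        z OF a * x OF b - x OF a * z OF b,
        x OF a * y OF b - y OF a * x OF b);

    VEC v = (1.0, 0.0, 0.0), b = (0.0, 1.0, 0.0);
    VEC w = v CROSS b;                # reads like the mathematics it encodes #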


Every library defines a language to go with it. It may not be
syntactic, but semantic: how the various procedures and values are to
be used together and with external interfaces. You must always be
aware of the domain of such a library when writing code to use
it. Library-defined operators are really no worse than library-defined
procedures.


The human mind is not really that good at handling a lot of
complexity. Science and mathematics have always been a game, in part
at least, of inventing notation powerful enough to reduce complexity
to an understandable level. Maxwell formulated his equations in
component terms (and much respect to him for it) but modern
practitioners use vector calculus, partly because of the independence
from a coordinate frame, but as much if not more because more
understanding can be derived from
        \nabla \times (\mathbf{v} \times \mathbf{B})
than the 3 component terms each involving 2 differentials of the
components of the cross-product (each of which is itself a sum of two
products). The functional form
        curl (vector_product (v, b))
gets lost in the brackets in larger equations just as it does with
arithmetic between integers and reals.
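
To see how much is hidden in that one line, the x-component alone
expands, by the usual definitions of curl and cross product, to

        [\nabla \times (\mathbf{v} \times \mathbf{B})]_x
          = \frac{\partial}{\partial y} (v_x B_y - v_y B_x)
          - \frac{\partial}{\partial z} (v_z B_x - v_x B_z)

with the y and z components following by cyclic permutation of x, y
and z - exactly the three terms, each a pair of differentials of sums
of products, described above.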


Within our compiler, I have a set of operators "+", "-", etc., that do
not evaluate anything but deliver an algebraic representation of the
operation. This allows quite complex bits of intermediate tree to be
constructed and transformed with greater precision and less headache.
They are then assembled into machine instructions further downstream.
As an example, here is complex division ("$" is another algebraic
operator describing the selection of a field from a runtime record).


PROC compl div = (RECORD l, r) RECORD:
(
        # r rstar is the squared magnitude of the divisor: re(r)^2 + im(r)^2 #
        OPERAND r rstar = ID (r $ re * r $ re + r $ im * r $ im);

        # real part:      (re(l)*re(r) + im(l)*im(r)) / r rstar #
        # imaginary part: (re(r)*im(l) - im(r)*re(l)) / r rstar #
        conflate ((( l $ re * r $ re + l $ im * r $ im ) / r rstar,
                   ( r $ re * l $ im - r $ im * l $ re ) / r rstar))
);
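
For anyone unfamiliar with this style, a minimal sketch of how such a
non-evaluating operator might be declared follows. The NODE and
OPERAND modes and the node layout are hypothetical stand-ins of my
own, not the modes actually used above:

    MODE NODE = STRUCT (CHAR op, REF NODE left, right);   # a piece of tree #
    MODE OPERAND = REF NODE;                               # operands point into the tree #

    # "+" on OPERANDs builds an addition node rather than evaluating anything #
    OP + = (OPERAND a, b) OPERAND:
       HEAP NODE := ("+", a, b);

Given two OPERANDs x and y, the expression x + y then delivers a fresh
bit of tree describing the addition, which later passes can transform
and eventually turn into machine instructions.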


Regards,
----------------------------------------------------------------------
Dave Lloyd mailto:Dave@occl-cam.demon.co.uk
Oxford and Cambridge Compilers Ltd http://www.occl-cam.demon.co.uk/
Cambridge, England http://www.chaos.org.uk/~dave/
--

