Re: Defining polymorphism vs. overloading

Piercarlo Grandi <>
Thu, 20 Sep 90 18:42:18 BST

          From comp.compilers


Newsgroups: comp.compilers
From: Piercarlo Grandi <>
In-Reply-To: <>; from "" at Sep 15, 90 4:23 am
Keywords: polymorphism
Organization: Compilers Central
Date: Thu, 20 Sep 90 18:42:18 BST

I wrote:

pcg> What we really want is to be able to express notationally:
pcg> * reuse of interface
pcg> * reuse of semantics
pcg> * reuse of implementation

Bill Voss ( wrote:

    How about a positive example of what you want? From my Smalltalk-80 & C++
    background, it looks to me as though the following are basically equivalent.

     Overloading <=> * reuse of interface
     Polymorphism <=> * reuse of semantics
     Inheritance <=> * reuse of implementation

I'd tend to disagree as to the details; for example, inheritance normally
implies reuse of interface as well. Polymorphism seems to be mostly reuse of
implementation, though of course it is also reuse of semantics.

While my ideas on the subject are not well crystallized, I'd like to see
something along the lines that follow:

* The ability to define notationally a "protocol" or interface, and
the ability to have concrete implementations that say "I am
accessible via that protocol". Something like Ada packages, but
with *multiple* implementations accessible *simultaneously*. You
probably also want some algebra on protocols (this protocol extends
that one; these two protocols are unified in this one, except for
these and these aspects).

* The ability to define notationally any set of pre- and postconditions
and invariants, or other ways of specifying semantics, and to have
concrete implementations say that they adopt a particular semantics.
This must of course be possible at many levels, and we also want
some algebra on semantics.

* The ability to have implementations that are parametric with respect
to control flow (control abstraction), functions (higher order
functionals), and types (generic or polymorphic). Some algebra
is obviously implicit here (apply this implementation skeleton to
this domain).

Notice that the same specification can be applied to radically different
implementations, and even to different interfaces; and so on. The same
implementation can have radically different semantics, and even interface,
when instantiated on a specific domain. The same protocol can be associated
with wildly different semantics or implementations.

We want to reuse interface to build flexible systems; we want to reuse
semantics to build reliable systems; we want to reuse implementation to
build cheap systems.

As an example of the benefits of reuse of interface, just consider UNIX
pipelines; as long as a program has a filter interface (read stdin, write
stdout), you can combine them a lot. Functionals in Lisp are an example of
implementation reuse; you can (map) a lot of different things over a list.
As to reuse of semantics, examples are harder to find; but one may be
that no matter which interface or implementation you use, a program that
depends on a simple string search function will not have to be carefully
reexamined each time you try a new one, because they are all understood
to do the same thing.

In Modula-3 we have some idea about reuse of interfaces; in C++ and
Smalltalk we have abstract classes for interfaces, but they are an extra
linguistic convention. In Eiffel we have inheritance of some semantics. In
polymorphic languages we have some more flexible reuse of implementation. In
ML we have powerful functional abstraction, and in SL5 or Scheme we have
some control abstraction as well.

Notice that supporting all this flexibility requires a lot of effort. There
are some hints that the best environment is some form of Scheme with
symbolic reduction, i.e. something akin to a supercompiler.

But, and this is the most important thing, what it really requires is better
insight into the various aspects of the reuse problem, not the blind or
ad-hoc choice of notational features for the languages we have now.

voss> NOTE: reuse of implementation seems to require reuse of semantics.

Not really. You can use exactly the same code skeleton to do things that are
radically different, *as long as it is parametric*, which is of course
essential if it is to be reusable.

Piercarlo "Peter" Grandi | ARPA:
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET:
