Re: Definable operators

jkoss@snet.net
15 May 1997 00:02:54 -0400

          From comp.compilers

Related articles
[33 earlier articles]
Re: Definable operators burley@tweedledumb.cygnus.com (Craig Burley) (1997-05-08)
Re: Definable operators Dave@occl-cam.demon.co.uk (Dave Lloyd) (1997-05-12)
Re: Definable operators mfinney@lynchburg.net (1997-05-12)
Re: Definable operators burley@tweedledumb.cygnus.com (Craig Burley) (1997-05-13)
Re: Definable operators burley@tweedledumb.cygnus.com (Craig Burley) (1997-05-13)
Re: Definable operators pjj@cs.man.ac.uk (1997-05-14)
Re: Definable operators jkoss@snet.net (1997-05-15)
Re: Definable operators genew@vip.net (1997-05-22)
Re: Definable operators mfinney@lynchburg.net (1997-05-22)
Re: Definable Operators burley@tweedledumb.cygnus.com (Craig Burley) (1997-05-30)

From: jkoss@snet.net
Newsgroups: comp.compilers
Date: 15 May 1997 00:02:54 -0400
Organization: "SNET dial access service"
References: 97-05-167
Keywords: syntax, design

burley@tweedledumb.cygnus.com said:


      >> Operator overloading is NOT about lazy typing as you suggest.


      >I don't see where I suggested that.
      >I've been objecting to operator overloading being thought of as the
      >arbitrary use of lexemes for whatever the programmer thought they'd
      >be convenient for at the time of writing some code.


    This is why new languages should -define- the 'popular' cases of
overloading.


    Now that we have overloading in the most popular languages, we can
easily do a statistical study of how overloading and roll-your-own
datatypes are actually used.


    Either define + for string concatenation, or define a concatenation
operator. Obviously one or the other is needed.
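

    For concreteness, a rough C++ sketch of the two choices (the Str
type and the concat name are mine, purely illustrative): overload +
to mean concatenation, or give concatenation its own explicitly named
operation.

    #include <iostream>
    #include <string>

    struct Str { std::string data; };

    // Choice 1: overload + to mean "concatenate".
    Str operator+(const Str& a, const Str& b) { return Str{a.data + b.data}; }

    // Choice 2: a separate, explicitly named concatenation operation.
    Str concat(const Str& a, const Str& b) { return Str{a.data + b.data}; }

    int main() {
        Str foo{"abra"}, bar{"cadabra"};
        std::cout << (foo + bar).data << '\n';       // abracadabra
        std::cout << concat(foo, bar).data << '\n';  // same result, explicit name
        return 0;
    }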


      >And, by extension, I've pointed out that, as long as we think of
      >operator overloading as one of several technical cure-alls, we won't
      >make ourselves design _proper_ language constructs to serve the
      >purposes so crudely served by things like overloading.


    There is no point arguing about 'proper' language constructs. That
is extremely subjective. I think the use of a ';' to terminate
statements isn't proper. I'm sure most everybody disagrees with me,
but I really don't see why a language should make you use something
that is rarely necessary for the parser to figure out what is going
on (and, with a little language planning, never necessary).


    Luckily, it's just my opinion.


      >> It is not a "technical stupid-pet-trick". It is a powerful tool
      >>to reduce the apparent complexity of a problem as I first argued.


      >I should have been more clear -- I meant operator overloading as a
      >technical stupid-pet-trick in the sense of going beyond what an
      >operator would _normally_ mean. E.g. `+' means add --


    foo = 'abra' + 'cadabra'


    Do you suppose this means a mathematical add ?


    The simple fact is, there are three perfectly logical ways + can be
used on strings (that I can think of, anyway!).


    '3' = '1' + '2'


    'foobar' = 'foo' + 'bar'


    'I' = '#' + '&'  ; # = ascii 35, & = ascii 38, I = ascii 73
                     ; (at least on PC's)
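

    A rough C++ sketch of those three readings (the function names are
mine, just for illustration):

    #include <cstddef>
    #include <iostream>
    #include <string>

    // Reading 1: treat the strings as numbers, add, convert back.
    std::string numeric_add(const std::string& a, const std::string& b) {
        return std::to_string(std::stoi(a) + std::stoi(b));   // "1" + "2" -> "3"
    }

    // Reading 2: concatenate.
    std::string concatenate(const std::string& a, const std::string& b) {
        return a + b;                                         // "1" + "2" -> "12"
    }

    // Reading 3: add the character codes, element by element.
    std::string codewise_add(const std::string& a, const std::string& b) {
        std::string out;
        for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
            out.push_back(static_cast<char>(a[i] + b[i]));    // '#' + '&' -> 'I'
        return out;
    }

    int main() {
        std::cout << numeric_add("1", "2") << '\n';   // 3
        std::cout << concatenate("1", "2") << '\n';   // 12
        std::cout << codewise_add("#", "&") << '\n';  // I  (35 + 38 = 73)
        std::cout << codewise_add("1", "2") << '\n';  // c  (49 + 50 = 99)
        return 0;
    }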


      >the fact that
      >you can make it mean pretty much anything in C++ is a technical
      >stupid-pet-trick that results in people thinking it "obviously"
      >means concatenate when applied to character strings


    No, it means they CAN use it for concatenation. You make the
assumption about obviousness. I certainly don't. I know what is
'defined' in the languages I use. Anything else isn't obvious to me at
all. Nor is it supposed to be.


    If string + string isn't defined in the language, I make sure I
figure out what it means in the source code (or perhaps
preprocessor). The only people who should have problems are those who
make assumptions, or are ignorant of what is 'defined' in their
language.


    Have you been having problems?


      >(which, as I've
      >pointed out before, is simply wrong -- "1" + "2" doesn't obviously
      >mean "12", to some people it'd obviously mean "3").


    It could also mean "c".


      >> I refuse to acknowledge any difference in principle between
      >> overloading a + b, add (a, b) or a.add(b) yet earlier you defend
      >> overloading of itself.


      >Go ahead and refuse to acknowledge them, but they exist. E.g.
      >`add(a, b)' doesn't imply conversion of type to most people the way
      >`a + b' does, e.g. if a is an integer and b is a double-precision
      >floating-point value.


    As always, you need to know the language. I'm sure there is SOME
language somewhere in which int + float implies an int result, because
all results are assumed to be integer unless otherwise noted.


    Know thy language and thy overloading.
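

    In C++, for instance, the built-in + applies the usual arithmetic
conversions, while a plain add(int, int) function (a hypothetical one,
below) silently truncates its argument at the call. A minimal sketch:

    #include <iostream>

    int add(int a, int b) { return a + b; }   // no conversion implied by the name

    int main() {
        int    a = 1;
        double b = 2.5;

        std::cout << a + b     << '\n';  // 3.5 : a is converted to double before the add
        std::cout << add(a, b) << '\n';  // 3   : b is truncated to int at the call
        return 0;
    }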


      >In _principle_, `a + b' means "add a and b",


    Opinion, yes?


      >and anyone who
      >understands the basics of math (expressed in Western notation) knows
      >that.


    Ahem. Anyone who has programmed in BASIC knows string + string implies
concatenation.


    Know thy language and thy overloading.


      >And most of _those_ people _would_ be
      >surprised if `a + b' did that. In any case, few people would
      >assume that a would be first converted to b's type, or vice-versa,
      >before the function ever started up. Again, many of those expect
      >that of `a + b'.


    Know thy language and thy overloading.


    (do I sound like a broken record? Good. Maybe you will hear me say this
      often enough to realize that a programmer is supposed to know what's
      going on! That's his job!)


    As an example, should INT / INT return a FLOAT or another INT?


    Some languages would return a FLOAT.


    Some would return an INT.


    Others would refuse to compile it at all, because they don't support
a '/' operator (to go along with a target processor that lacks a DIV
instruction).
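

    A minimal sketch of one concrete answer (C++, where INT / INT stays
an INT unless you convert an operand first):

    #include <iostream>

    int main() {
        int x = 7, y = 2;

        std::cout << x / y                      << '\n';  // 3   : INT / INT truncates
        std::cout << static_cast<double>(x) / y << '\n';  // 3.5 : convert first for a FLOAT
        return 0;
    }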


      >(BTW, I've never been comfortable with the asymmetry of `a.add(b)'
      >and similar. Why does one operand get sent a message instead of
      >the other one, when they have equal weight in the computation and
      >in the meaning of the language expression?


    In this case, it looks like the programmer told the compiler which
operand's function to use.


    a.add() obviously means use a's add() mechanism.


    The reason for this stuff is to -allow- that sort of thing. Using this
stuff is purely a choice to be made by the programmer. If you don't want
the power, don't use it.


      >that it doesn't imply commutativity as `a + b' should to everyone
      >[except C++ programmers ;-].)


    I think a + b should imply that 'a' is the decision maker, not 'b'.


    This conforms with the left-to-right associativity of operators...


    ...an extension of the reasoning in my book.
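

    To make the asymmetry question above concrete, a rough C++ sketch
(the Money type and its names are mine, purely illustrative): a member
function privileges its left operand, while a free operator+ treats
both operands alike, user-defined conversions included.

    #include <iostream>

    struct Money {
        double amount;
        Money(double a) : amount(a) {}   // converting constructor

        // a.add(b) style: the left operand "owns" the operation.
        Money add(const Money& rhs) const { return Money(amount + rhs.amount); }
    };

    // Free-function style: both operands have equal standing.
    Money operator+(const Money& lhs, const Money& rhs) {
        return Money(lhs.amount + rhs.amount);
    }

    int main() {
        Money m(2.50);

        Money a = m + 1.25;   // 1.25 converts to Money on the right
        Money b = 1.25 + m;   // works only because operator+ is a free function;
                              // a member operator+ would reject this ordering
        std::cout << a.amount << ' ' << b.amount << '\n';   // 3.75 3.75
        return 0;
    }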


      >And since most of the programming I've done in my life has been on
      >programs having at least 100,000 lines of code, a couple of which
      >have had a million or so lines, and most of which have been worked
      >on by several programmers at the same time, I'm not really
      >interested in toy languages, or languages designed assuming all
      >sorts of safety mechanisms built in to the run-time environment.


    It seems to me, you should be in favor of languages that have all
sorts of run-time error checking. When working on projects with
multiple programmers involved, every effort should be made to
standardize as much as possible. Language-defined error-checking is
obviously safer than rolling your own and hoping the rest of your team
sticks with the method.


    I think this view of yours is contrary to your view on overloading.


    A standard here, no standard at all there.


    It's inconsistent, but that's all right. You are allowed to be if you
want to. After all, this stuff is just a bunch of opinions.


- Koss
--

