Re: Optimization techniques and undefined behavior

David Brown <>
Fri, 3 May 2019 17:23:49 +0200

          From comp.compilers



On 02/05/2019 21:04, Bart wrote:
> On 02/05/2019 15:51, David Brown wrote:
>> On 01/05/2019 14:53, Bart wrote:
>>> That's just kicking the can further down the road.
>>
>> Yes (especially for the use of a larger signed type).  That's fine.  C
>> lets you easily kick the can a /long/ way down the road.  How often do
>> you need integers that will overflow a 64-bit type?
>
> Most of your post seems to take the premise that 64-bit types have such
> a wide range that you can forget about overflow problems.

I have never suggested "forgetting about overflow problems". I have
suggested that you make the effort to write correct code. If you are
overflowing, you have a bug - either in the implementation of the
function you want, or in its design or specification. One of the ways
you can often be sure that your calculations will not overflow is by
using bigger types - but it is certainly not the only way.

> But the principle is actually the same: two values A and B represented
> within N-bit types could overflow when performing arithmetic. It doesn't
> matter if N is 32 or 64.

Correct. But unless you are writing silly and unrealistic test code, or
working with something like cryptography that requires very large
numbers (in which case you are in a different type of coding
altogether), your integers won't overflow 64-bit types. And certainly,
using two's complement wrapping types will not give you the correct
answer in anything but a /tiny/ fraction of situations where UB
overflowing types fail. The key point is that UB overflow arithmetic
does not fail in appreciable numbers of cases where wrapping overflow
would succeed. (And as has already been established, UB overflow gives
more efficient code and better error checking and debugging.)

> The subject in UB, and whether the possibility of overflow can just be
> ignored by a language, a language that deals with low-level
> machine-sized types.

The answer is yes, it can.

In assembly programming - as low as you get - overflow is almost always
ignored.

> You seem OK with a C compiler assuming that overflow cannot happen so
> that it can generate slightly faster benchmarks.

I am not the slightest bit interested in performance in benchmarks. I
am interested first in /correctness/ of correctly written and valid
code, and then in the efficiency of that code. For /real/ code, on real
targets, doing real work. And yes, I am happy that my compiler can
assume my calculations on signed integers don't overflow. If I need to
have specific behaviour on overflow, I don't use plain signed integers.

> I prefer a language acknowledging that it could happen, and stipulating
> exactly what does happen.

Then you might as well give up programming, because that can't be done.
There are lots of situations where behaviour is undefined, in /all/
programming languages. The lower level and more efficient the language,
the more such situations you get - but none are entirely free of
undefined behaviour. And the more you fight undefined behaviour, the
more limited your coding will be. Accept it, realise that it is part of
the world of programming, and you will get on much better.

Every function in computing (and this is fundamental to computation - it
applies to everything from a Turing machine to a quantum computer,
regardless of programming language) can be described in terms of
starting with a pre-condition, and establishing a post-condition. The
inputs to a function - its parameters, and any accessible global state -
must satisfy the pre-condition. Then the function guarantees to
establish the post-condition. The function does not say what will
happen if the pre-conditions are not fulfilled - calling the function in
that situation is undefined behaviour. "Undefined behaviour" simply
means "the behaviour is not defined" - there are no rules or
instructions to handle the situation.

A programming language might say that the precondition for its signed
integer addition "function" is "true" - that is, it works for any input
values. Another might say that the precondition is that its inputs must
sum to a valid value for the result type. (Yes, it's perfectly okay for
the pre-condition to be very like the operation itself.)

You can prefer that languages have "true" preconditions for some
functions, but they certainly won't have it for all.

>>> If you have two unknown values A and B, and need to multiply, you won't
>>> know if the result will overflow.
>> First off, how often do you actually have unknown values to deal with?
>> Usually you know something at least.
> The example here was reading values from a file. So they are external
> data, and will be unknown.

Only a fool uses unknown data from outside without checking them. Check
that the data makes sense, then use it. Don't use it first then check
for carnage afterwards.

Or if you do want to use the data without much checking (say, for a
quick test code, or for use with files you already know are valid), then
don't worry about undefined behaviour on invalid data, because you don't
care what results you would get for bad files.

>> bytes_per_pixel is not going to be more than, say, 16 - that's for
>> 32-bit per colour, including alpha channel.
> It shouldn't be, but you might be reading it from a file.

The data will be valid before you start using it - because you will have
checked the unknown data.

>> (And one thing we can be sure of here - having two's complement wrapping
>> arithmetic certainly will not help.)
> No; neither is unsigned number wrapping. But having gcc do something
> weird instead, at odds with nearly every other compiler and language, is
> even less help.

You are, as usual, very keen to pick out gcc as though it was something
special here. Compilers have been assuming signed integer overflow
never happens for over 20 years (that's just from my own personal
experience), long before gcc was that smart.

(Yes, I agree that wrapping unsigned arithmetic won't help here. I have
regularly said that most unsigned overflows are also bugs, and that
undefined behaviour there would also make sense. Maybe you "forgot"
that in order to try to make another anti-gcc remark?)
