Re: Optimization techniques


From: Kaz Kylheku <847-115-0292@kylheku.com>
Newsgroups: comp.compilers
Date: Fri, 26 Apr 2019 00:18:15 +0000 (UTC)
Organization: Aioe.org NNTP Server
References: <72d208c9-169f-155c-5e73-9ca74f78e390@gkc.org.uk> 19-04-021
Keywords: optimize, design
Posted-Date: 25 Apr 2019 21:35:07 EDT

On 2019-04-25, David Brown <david.brown@hesbynett.no> wrote:
> On 25/04/2019 17:36, Martin Ward wrote:
>> David Brown <david.brown@hesbynett.no> wrote:
>>> And there are undefined behaviours which could be given definitions, but
>>> doing so is not actually a positive thing.  The prime example is signed
>>> integer overflow.  Since overflowing your integers is almost always an
>>> error in the code anyway, there are no benefits in giving it defined
>>> behaviour.
>>
>> This is completely backwards. If signed overflow were given a defined
>> behaviour (such as the two's complement result), then compilers for
>> CPUs which do not implement two's complement operations
>> would have to generate less efficient code (but does anyone still
>> make such a CPU?).
>
> True - but as you say, such cpus are not realistic.
>
>> Any program which guarantees not to overflow would be unaffected.
>
> Incorrect. Its correctness would be unaffected, but efficiency can be
> quite a bit different.
>
> The compiler knows all sorts of things when signed integer arithmetic
> does not overflow, and can use this for optimisation.
>
> It knows that "x * 2 / 2" is "x". It knows that "x + 1 > x" is true.


The problem is that none of these propositions is safe; they all rest
on the assumption that the program contains no mistake.


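For instance, here is a minimal sketch (assuming 32-bit int and two's
complement wraparound, as with "gcc -fwrapv"; without that option, the
overflow below is undefined) showing inputs for which both propositions
fail:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      int x = INT_MAX;

      /* x * 2 wraps to -2, so x * 2 / 2 yields -1, not x. */
      printf("%d\n", x * 2 / 2);

      /* x + 1 wraps to INT_MIN, so x + 1 > x is false (prints 0). */
      printf("%d\n", x + 1 > x);
      return 0;
  }

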
According to the original design of C, this kind of optimization is
supposed to be the programmer's job.


The compiler should only micro-optimize: deal with code-generation
issues by eliminating wasteful instructions it has introduced itself
via peephole scanning, threading jumps, folding constants, unrolling
loops, inlining, that sort of thing.


Think about it: why is it okay for assembly language instruction
sets to have defined overflow? Why don't we worry that, "Gee, if
the add instruction has defined overflow, then the assembler
can't deduce that 'cmp r1, r2' which follows it is always false, and
cannot remove that instruction, as well as the whole basic block of code
it controls."


Figuring out that cmp r1, r2 is always false for all the inputs that can
ever occur, and blowing away everything that is behind it, is the
programmer's responsibility.


> It knows that "for (i = 0; i <= n; i++)" will always end, after n + 1
> iterations - after checking for n being <= 0 at the start of the loop.


But the loop will not end if n is INT_MAX. It predictably fails to end
if overflow in i++ is well-defined, and it is not required to end if
that overflow is undefined behavior.
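

A minimal sketch (a hypothetical function; imagine it being called with
n equal to INT_MAX):

  #include <limits.h>

  /* When n == INT_MAX, the test i <= n holds for every representable
     value of i, so the loop can never terminate through its test.
     With wraparound, i steps from INT_MAX to INT_MIN and the loop
     runs forever; with overflow undefined, anything is permitted. */
  void count_to(int n)
  {
      for (int i = 0; i <= n; i++) {
          /* ... loop body ... */
      }
  }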


All we usefully know is that, if the loop ends in a well-defined way,
then it must be that n < INT_MAX and, after the loop, i > n.


We know this regardless of whether overflow in i++ is defined or not.


> These are significant. With modern compilers doing all sorts of
> inlining, function cloning, cross-module optimisation (link-time
> optimisation)


Link-time optimizations can be readily shown to violate the "phases of
translation" abstract model given in the C standard; for that reason,
they are not enabled by default.


In the C language, the translation units that are linked to form a
program have already undergone semantic analysis.


So no further optimization is permissible at link time; optimization
constitutes additional semantic analysis. Translation unit boundaries
should be regarded as inviolable optimization barriers.
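

A sketch of the point, using two hypothetical translation units:

  /* one.c: analyzed in isolation; the compiler cannot know what
     get_flag() returns, so the call is an optimization barrier. */
  extern int get_flag(void);

  int f(void)
  {
      if (get_flag())
          return 1;
      return 0;
  }

  /* two.c: a separate translation unit. */
  int get_flag(void) { return 0; }

A link-time optimizer that inlines get_flag() into f() and folds f()
down to "return 0" is, on this view, performing semantic analysis
after translation is supposed to be complete.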


>, and with modern C++ code bases using templates, there is
> a great deal of scope for this kind of manipulation and optimisation.
> People don't write "x * 2 / 2" directly, but after inlining, constant
> propagation, constants, macros, etc., these sorts of things turn up.


That is exactly the most dangerous situation in which to be making
these assumptions.


If * 2 came from one macro, and / 2 from another, maybe it wasn't meant
to be optimized.
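

For instance (hypothetical macros):

  #define SCALE_UP(x)   ((x) * 2)   /* from one header */
  #define SCALE_DOWN(x) ((x) / 2)   /* from another header */

  /* SCALE_DOWN(SCALE_UP(x)) expands to (((x) * 2) / 2), which a
     compiler assuming no signed overflow folds to x, even though
     the intermediate (x) * 2 can overflow. */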


Programmers who write labyrinths of macros and templates and whatnot in
medium-level languages like C and C++ should be punished by poor
performance.


> The other big thing about making overflow undefined is that the compiler
> and tools can help check for mistakes. Compilers can spot cases where
> there are overflows with compile-time constants, and tools like
> sanitizers can add run-time checks to help debug code.


Run-time checks for overflow can be had even when overflow is
well-defined.


Assembly language instruction sets have well-defined overflow *and*
ways to detect it (flags, exceptions and whatnot).
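

For instance, GCC and Clang provide overflow-detecting builtins that
report the carry while still producing the wrapped result, much like an
add instruction setting a flag; a sketch using that extension:

  #include <stdbool.h>

  /* Returns true if a + b overflowed; *sum receives the wrapped
     result either way, whether or not -fwrapv is in effect. */
  bool checked_add(int a, int b, int *sum)
  {
      return __builtin_add_overflow(a, b, sum);
  }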




> So when your
> program has counted up 2147483647 apples, and you add another one to the
> cart, your debugging tools can tell you you have made a mistake. If you
> are using wrapping integers, the tools have to believe you meant that
> adding another apple gives you -2147483648. Your code has a bug, but
> the tools can't help you out.


That is simply false. We can diagnose this situation anyway.
If that results in millions of false positives, then maybe those
developers should rethink their abuse of wrap-around arithmetic.


Tools can provide configurations to deal with floods of false positives.


Diagnostic tools should never be limited to what is defined or undefined
at the language level; they should ideally be customizable.
Well-defined conditions can be wrong in the given program:
e.g. "I'd like to trap when a value greater than 42 is written into the
variable x, even though doing so is well-defined."


Just because something is made well-defined in the language doesn't mean
that it's blessed as a recommended practice; it's just "decriminalized".


The purpose of making it well-defined isn't that programmers "go to
town" wrapping 2147483647 to -2147483648 like it's going out of style.


The purpose is mainly to make the software predictable. When this mistake
happens, it will happen in the same way, with consequences deducible
from the abstract language semantics rather than "what version V of
compiler X does on platform Y".


Reproducibility first, then speed.


> It is a serious mistake to mix up "defined behaviour" and "correct
> behaviour". Only defined behaviour can be correct, but you can't fix
> incorrect code by making it defined behaviour.


Of course! Incorrect programs can be written even in a language in
which no behavior is left undefined at all.


In Common Lisp I have bignum integers, rational numbers and a fine
numeric tower, yet I can still produce a garbage program in which
(sin x) appears where (cos x) ought to have been written, or whatever.


Correctness is with respect to the program's specification.


Undefined behavior can be correct. If the program's specification is
"write a C program that dereferences a null pointer",
then something like "*((char *) 0) = 42" might appear in
that program. The program won't be "correct" if we delete that line.


In C, "undefined behavior" is a large territory that encompasses
documented extensions.


It only means "not required by the ISO C standard". For instance, using
a platform-specific function is "undefined behavior". The platform
supplies a definition of the behavior, so the program is defined by the
platform rather than by ISO C; as far as ISO C is concerned, it is
nevertheless undefined.
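

For instance (assuming a POSIX platform):

  /* write() is not part of ISO C, so ISO C leaves this program's
     behavior undefined; POSIX, however, defines it completely. */
  #include <unistd.h>

  int main(void)
  {
      write(1, "hello\n", 6);
      return 0;
  }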


>> With the current situation, anyone wanting to avoid
>> undefined behaviour (and don't we all?) has to write code like
>> this for any signed operation:
>>
>> signed int sum;
>> if (((si_b > 0) && (si_a > (INT_MAX - si_b))) ||
>>     ((si_b < 0) && (si_a < (INT_MIN - si_b)))) {
>>   /* Handle error */
>> } else {
>>   sum = si_a + si_b;
>> }
>
> No, they don't.
>
> When I want to add two integers, I usually write "a + b". But then, I


I certainly have code like that in the implementation of a higher-level
language that switches to bignum integers when the operands overflow,
and so returns the arithmetically correct result under all conditions,
memory permitting.


The overflow check can be expressed more efficiently if we can rely
on wraparound.
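

A sketch of such a check (a hypothetical add_or_promote; the unsigned
arithmetic is well-defined ISO C, and converting the wrapped result
back to int is implementation-defined but two's complement in practice):

  /* Overflow occurred iff a and b have the same sign and the
     wrapped sum has the opposite sign. */
  int add_or_promote(int a, int b)
  {
      int sum = (int)((unsigned)a + (unsigned)b);

      if (((a ^ sum) & (b ^ sum)) < 0) {
          /* overflow: promote to bignum here (not shown) */
      }
      return sum;
  }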


A lot of people are doing their computing with this sort of system
nowadays, and many of those systems, or portions of their runtimes, are
written in C.


--
TXR Programming Language: http://nongnu.org/txr
Music DIY Mailing List: http://www.kylheku.com/diy
ADA MP-1 Mailing List: http://www.kylheku.com/mp1

