Re: Placement of memory loads


          From comp.compilers

Related articles
Placement of memory loads (YuGr) (2009-11-05)
Re: Placement of memory loads (BGB / cr88192) (2009-11-05)
Re: Placement of memory loads (Kaz Kylheku) (2009-11-06)

From: "BGB / cr88192" <>
Newsgroups: comp.compilers
Date: Thu, 5 Nov 2009 15:13:48 -0700
References: 09-11-012
Keywords: optimize, code
Posted-Date: 06 Nov 2009 11:24:52 EST

"YuGr" <> wrote in message

> I want to figure out the best way to place initial memory loads in
> generated code.

> My current approach is very simple and inefficient. I do a liveness
> analysis and then insert memory loads for all undefined variables in
> the beginning of initial basic block.
> The problem is that this simple approach leads to artificial
> conflicts (e.g. x and y are in conflict) and high register pressure.

> 1) where and how do you usually place loads in generated code?
> 2) how do you approach conditional operators, loops, etc.?
> 3) can you recommend some paper which covers this problem in detail? I
> haven't found any mentions of it in my textbooks (Aho, Cooper).

my strategy is simple enough and seems to be fairly effective:
I start in some initial state (locals are on stack, args are maybe on stack
or in regs);
if I need to synchronize, I may force everything back into memory;
if a load is needed, the variable is loaded, and subsequently
assumed to reside in a given register;
any writes to a variable in a register mark it as dirty;
if I really need a register, something may be picked and "flushed", which
will either discard it (if not dirty), or write it back to memory (if
dirty).
I often synchronize things at labels and prior to function calls.
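as a rough illustration, the register-caching scheme above might look
something like this (a toy C sketch with made-up names, not actual
compiler code; a real backend would track machine registers, not ints):

```c
#include <assert.h>

/* toy sketch: each register slot remembers which variable it caches
   and whether it has been written (dirty) since it was loaded. */

#define NREGS 4
#define NVARS 8

typedef struct {
    int var;    /* index of variable cached here, or -1 if free */
    int dirty;  /* written since load? */
} RegSlot;

static RegSlot regs[NREGS];
static int memory[NVARS];   /* stand-in for the variables' home slots */
static int regfile[NREGS];  /* stand-in for actual register contents */

static void cache_init(void) {
    for (int r = 0; r < NREGS; r++) { regs[r].var = -1; regs[r].dirty = 0; }
}

/* flush one register: write back only if dirty, then mark it free */
static void flush_reg(int r) {
    if (regs[r].var >= 0 && regs[r].dirty)
        memory[regs[r].var] = regfile[r];
    regs[r].var = -1;
    regs[r].dirty = 0;
}

/* synchronize: force everything back to memory (labels, calls) */
static void flush_all(void) {
    for (int r = 0; r < NREGS; r++) flush_reg(r);
}

/* get a register holding 'var', loading it on first use */
static int load_var(int var) {
    for (int r = 0; r < NREGS; r++)
        if (regs[r].var == var) return r;   /* already cached */
    for (int r = 0; r < NREGS; r++)
        if (regs[r].var < 0) {              /* free slot: load it */
            regs[r].var = var;
            regfile[r] = memory[var];
            return r;
        }
    flush_reg(0);                           /* naive eviction */
    regs[0].var = var;
    regfile[0] = memory[var];
    return 0;
}

/* write to a variable through its register; marks it dirty */
static void store_var(int var, int val) {
    int r = load_var(var);
    regfile[r] = val;
    regs[r].dirty = 1;
}
```

the point is that a store does not touch memory at all until a flush is
forced, and a clean value can simply be discarded.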

typically, most aspects of control flow are ignored, and internally the
compiler pretends as if flow proceeds linearly through the function.

in general, the compiler concerns itself almost entirely with the "present
state", and generally ignores upcoming instructions (its behavior is, then,
almost purely reactive to particular "code events"). however, I use a
multi-pass strategy, and in a few places "magic bits" are used to flag
places where a bad decision was previously made, allowing some small
amount of fine-tuning to avoid places where "something bad happened".
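the "magic bits" idea can be sketched roughly like so (a toy C sketch;
all the names and the "register pressure" input are invented for
illustration): a deterministic pass flags the points where it made a bad
choice, and the next identical pass consults those flags to choose
differently.

```c
#include <assert.h>

#define NPOINTS 16

/* the "magic bits": persist across passes, initially all clear */
static unsigned char avoid_spill[NPOINTS];

/* one deterministic pass over the decision points; returns how many
   bad outcomes were hit (and flags them for the next pass).
   pressure[] is a hypothetical per-point register-pressure input. */
static int run_pass(const int *pressure) {
    int bad = 0;
    for (int i = 0; i < NPOINTS; i++) {
        int keep_in_reg = !avoid_spill[i];   /* default decision */
        if (keep_in_reg && pressure[i] > 3) {
            avoid_spill[i] = 1;              /* "don't do that next time" */
            bad++;
        }
    }
    return bad;
}

/* re-run the same pass until no new bad decisions are flagged */
static int compile(const int *pressure) {
    int passes = 1;
    while (run_pass(pressure) > 0)
        passes++;
    return passes;
}
```

since every pass is deterministic, the only thing that changes between
runs is the flag array, so the passes stay in lock-step.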

(this reminds me of the ST: TNG episode where they were stuck in a time
loop and messages were being sent to a "past" Data to fine-tune certain
decisions and avert destruction).

sadly, all this relies on my compiler being deterministic, since
non-determinism would throw the whole process into chaos. multiple passes
are used, and help resolve certain issues (such as register allocation), but
each has to remain essentially in lock-step with the others (simply
dummying out parts which are not relevant in a given pass, such as
generating output, ...).

so, alas, it is all fairly naive, but seems to work well enough most
of the time. granted, my aim is not usually for max possible
performance, mostly just to reduce generating lame code.

it is worth noting that my current compiler does not use SSA, mostly
because thus far I have not been able to successfully bridge the gaps
which exist between my compiler and SSA form (nor can I personally all
that easily imagine how SSA works, which I guess does not much help
matters, ...).

personally, I find a "behavioral" model of all of the parts of the compiler
process easier to imagine, and in general it seems to work fairly well
(different parts have certain "behaviors", and act in response to a stream
of "events" representing the input code, which in my case is an RPN-based
intermediate representation).

it is a whole lot easier to figure out "well, that screwed me over, don't do
that next time" than it is to figure out "how will this decision affect
things in the future?...".
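the event-driven view can be sketched as follows (a minimal toy in C;
the event names and handler are my own invention, standing in for a
real RPN intermediate language): the backend is just a handler that
reacts to each code event using only its present state, here an operand
stack, and never looks ahead.

```c
#include <assert.h>

typedef enum { EV_PUSH, EV_ADD, EV_MUL } EventKind;
typedef struct { EventKind kind; int value; } Event;

#define STACK_MAX 32
static int stack[STACK_MAX];
static int sp;

/* respond to a single code event; no lookahead, only present state */
static void on_event(const Event *ev) {
    switch (ev->kind) {
    case EV_PUSH:                        /* literal: push its value */
        stack[sp++] = ev->value;
        break;
    case EV_ADD: {                       /* binary op: pop two, push one */
        int b = stack[--sp], a = stack[--sp];
        stack[sp++] = a + b;
        break;
    }
    case EV_MUL: {
        int b = stack[--sp], a = stack[--sp];
        stack[sp++] = a * b;
        break;
    }
    }
}

/* feed a whole event stream; result is whatever is left on top */
static int run(const Event *evs, int n) {
    sp = 0;
    for (int i = 0; i < n; i++)
        on_event(&evs[i]);
    return stack[sp - 1];
}
```

a real backend would emit code in each handler instead of evaluating,
but the shape is the same: a response per event, no planning ahead.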
