Re: Online explanation of Dijkstra's Guarded Command Language (Vidar Hokstad)
6 Jun 2004 15:27:36 -0400

          From comp.compilers

Related articles
Online explanation of Dijkstra's Guarded Command Language (2004-05-30)
Re: Online explanation of Dijkstra's Guarded Command Language (SM Ryan) (2004-06-06)
Re: Online explanation of Dijkstra's Guarded Command Language (2004-06-06)
Re: Online explanation of Dijkstra's Guarded Command Language (2004-06-12)
Gated Single Assignment [Was: Online explanation of Dijkstra's Guarded (Tommy Thorn) (2004-06-14)

From: (Vidar Hokstad)
Newsgroups: comp.compilers,comp.lang.oberon
Date: 6 Jun 2004 15:27:36 -0400
References: 04-05-101
Keywords: Oberon, optimize
Posted-Date: 06 Jun 2004 15:27:36 EDT

(Chris Glur) wrote
> Hi,
> Google didn't find me a 'Dijkstra's Guarded Command Language'
> description. Can some one point me to an online reference.
> I'm interested in methods of 'transforming code' , other than lisp.
> I've started analysing M. Brandis' these of 1995 re.
> `guarded single-assignment form' compiler. [ for Oberon]

I think you need to explain what you are looking for a bit better -
it's not very clear how the three things you mention are related to
what you are looking for.

If by "transforming code" you mean code generation, then Brandis'
thesis isn't the best starting point, as it doesn't focus on basic
code generation techniques. I'd recommend looking at a traditional
compiler textbook, or something like LCC (a C compiler with an
associated book describing its design in detail).

If, on the other hand, you're looking for approaches to code
optimisation through transformations on the compiler's internal code
representation, Brandis is a concise overview of a very interesting
approach based on single assignment.

Guarded command language is relatively unrelated - it's the language
in which Dijkstra introduced non-deterministic conditional and loop
constructs, where you can express "execute any one of these
statements whose attached condition is true, but I don't care which",
or, for the loop form, "keep executing statements whose attached
conditions are true, but I don't care in what order".
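As an illustration, here is the classic maximum example in Dijkstra's
notation - when x = y both guards are true, and either guarded
command may be chosen:

    if x >= y -> max := x
    [] y >= x -> max := y
    fi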

Guarded single assignment form is a variation of static single
assignment. SSA is a way of describing a program so that each
variable is assigned exactly once (and, by extension, never modified
after its definition). This massively simplifies a lot of things; for
instance, aliasing stops being a problem.
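A minimal sketch of the renaming involved (the numbered names are
purely illustrative):

    x := 1              x1 := 1
    x := x + 2    ==>   x2 := x1 + 2
    y := x * x          y1 := x2 * x2

Each new assignment to x introduces a fresh name, so every use refers
unambiguously to exactly one definition.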

GCL and GSA are orthogonal concepts.

Conceptually SSA is similar to the model many functional languages
expose to the programmer, though implementation-wise a good compiler
will "fold" the variables back into the same memory locations based
on an analysis of the lifetimes of the various variable instances
(this is covered in Brandis' thesis, adapted to GSA).

SSA is very flexible because statement order is constrained only by
variable references (data dependences) and by calls to "external"
functions that may have side effects, allowing the compiler to
rearrange statements freely within the constraints imposed by those
two issues.
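For instance (an illustrative fragment), the two middle assignments
depend only on a1, not on each other, so the compiler may schedule
them in either order:

    a1 := load(p)
    b1 := a1 + 1
    c1 := a1 * 2        -- independent of b1; may be moved before it
    d1 := b1 + c1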

You may want to take a look at "Single-Pass Generation of Static
Single Assignment Form for Structured Languages" (1994) Marc M.
Brandis, Hanspeter Mössenböck, ACM Transactions on Programming
Languages and Systems. It's available here:

Guarded single assignment is simply a variation on SSA where
statements may be "guarded" by a condition controlling their
execution, or by a "merge" (representing the join of two control
paths). It's been too long since I read Brandis' thesis to remember
whether this presents any particular benefits over other SSA
representations, or whether it is just meant as a convenient way to
implement SSA for Brandis' compiler.
> His explanations, repeatedly branch off into discussions of
> optimisation, disrupting the flow of the basic understanding.

That's because his thesis is _about_ optimisation. The title
"Optimizing Compilers for Structured Programming Languages" should be
a giveaway. Transforming code into guarded single assignment form was
explicitly used as a tool to enable classes of optimisations that are
difficult to do in a classic abstract syntax tree/parse tree. I think
trying to disconnect the two would be counterproductive.

> Perhaps an understanding first of 'Guarded Command Language' would
> help ?

I don't think so - I can't see that the two have much in common,
except that GCL is also founded on a side-effect-free system (or at
least one where side effects are isolated to certain well-defined
areas), thus allowing the compiler more freedom in making decisions
about code flow - in GCL's case, for instance, allowing it to decide
loop execution order non-deterministically in certain cases.

A general understanding of side-effect-free or "pure" functional
programming languages, or of static single assignment, might be
beneficial if what you want to understand is GSA, though.

> This problem of resisting interesting side-tracks, and first doing
> the complete "hello world version", seems to be as difficult as
> giving up smoking, for people trying to explain their OWN system ?

If GSA is what you want to understand, what is your goal in
understanding it? SSA/GSA are most commonly applied specifically as
enabling technologies for optimisation and efficient code generation,
so it would seem to me that understanding how GSA applies to
optimisation, as described by Brandis, is a good way of approaching
it. Alternatively, the paper referenced above on generating SSA from
source is also reasonably accessible.

Vidar Hokstad
