Re: PL/I nostalgia

"robin" <robin51@dodo.com.au>
Sun, 30 Sep 2012 10:45:26 +1000

          From comp.compilers

Related articles
[5 earlier articles]
Re: PL/I nostalgia gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-04-28)
Re: PL/I nostalgia bobduff@shell01.TheWorld.com (Robert A Duff) (2012-04-29)
Re: PL/I nostalgia robin51@dodo.com.au (robin) (2012-09-19)
Re: PL/I nostalgia gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-09-19)
Re: PL/I nostalgia robin51@dodo.com.au (robin) (2012-09-21)
Re: PL/I nostalgia gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-09-21)
Re: PL/I nostalgia robin51@dodo.com.au (robin) (2012-09-30)

From: "robin" <robin51@dodo.com.au>
Newsgroups: comp.compilers
Date: Sun, 30 Sep 2012 10:45:26 +1000
Organization: Compilers Central
References: 12-04-070 12-04-077 12-04-081 12-04-082 12-04-084 12-09-014 12-09-015 12-09-016 12-09-017
Keywords: PL/I, history, performance, comment
Posted-Date: 30 Sep 2012 00:14:23 EDT

From: "glen herrmannsfeldt" <gah@ugcs.caltech.edu>
Sent: Friday, 21 September 2012 5:00 PM


> robin <robin51@dodo.com.au> wrote:
>> From: "glen herrmannsfeldt" <gah@ugcs.caltech.edu>
>> Sent: Wednesday, 19 September 2012 1:56 PM
>
>>> Well, the dynamically allocated variables and save areas for PL/I are
>>> naturally slower than static allocated Fortran IV.
>
>> But not where it counts. By the time some procedure (such as INVERT)
>> is called, the array(s) has(have) been allocated. Allocation is a
>> once-off task, probably not measurable in terms of time.
>
> That comment was before the matrix inversion discussion,
> and is meant more generally.


So was my comment (in general terms). See "some procedure".


> Yes, if you are careful with your allocations, and minimize
> subroutine calls, then it isn't so bad.
>
> Current coding practices encourage more and smaller procedures
> than were usual in the early PL/I days.
>
>> And the FORTRAN IV code was, essentially, rigid, and required
>> re-compilation for larger arrays.
>
> For single task systems, most often the memory wouldn't be used
> for anything else, but, yes, it is convenient to dynamically
> allocate to the appropriate size. It is especially useful
> for multitasking systems.


It is essential for multitasking systems, because unused array space in a
program means that other programs cannot be loaded --
thus slowing down throughput.


It is important for single-task systems too, because unwanted storage
(array space) in a PL/I program could be freed during execution,
allowing other arrays to be allocated in its place.


Re-compiling a FORTRAN program just to make an array size
larger was wasteful.
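

For example, a CONTROLLED array sized at run time (a sketch only; apart
from INVERT, the names here are invented):

   DECLARE A(*,*) FLOAT DECIMAL (6) CONTROLLED;
   DECLARE N FIXED BINARY;

   GET LIST (N);           /* problem size known only at run time  */
   ALLOCATE A(N, N);       /* exactly N x N elements, no more      */
   CALL INVERT (A, N);
   FREE A;                 /* storage handed back for re-use       */

The FORTRAN IV equivalent had to declare the largest size it might ever
see, and be recompiled to go beyond it.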


>>> Also, many PL/I features naturally don't optimize as well as Fortran.
>
>> That may be so, but to have to re-compile the FORTRAN code to deal
>> with larger-sized arrays counted strongly against it. As well, PL/I
>> offered full roll-out of fixed-size array operations. Not all arrays
>> needed to be dynamic. As well as that, PL/I offered such things as
>> double precision complex, string-handling, and error recovery.
>
> IBM Fortran has had COMPLEX*16 about as far back as PL/I, but most
> others didn't. (Though the most common use for COMPLEX, the FFT,
> is most often written using REAL arrays.)
>
> Compilers I know of have the overhead needed for RECURSIVE, even
> when the attribute isn't used. But again, minimize the number
> of procedure calls and it isn't so bad.


You wouldn't notice the time unless the procedure reference were in a loop
being executed thousands of times.


>> Error recovery more than compensated for any difference in speed that
>> may have existed between FORTRAN and PL/I. Having to re-run FORTRAN
>> code because of some error to find out what went wrong outweighed any
>> speed advantage that FORTRAN might have had, because in PL/I, the
>> error information was already there (including values of variables),
>> and without necessarily a program termination. Hence, a re-run of the
>> PL/I code was avoided.
>
> Pretty program dependent, but, yes, PL/I can make it easier.


It made it hugely easier, for turnaround time was around a week
in those days.
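

For anyone who never used it, the usual pattern was an ON-unit established
near the start of the program, something like this sketch:

   ON ERROR SNAP              /* SNAP prints the chain of calls         */
      BEGIN;
         ON ERROR SYSTEM;     /* don't loop if the dump itself fails    */
         PUT SKIP DATA;       /* dump the visible variables, name=value */
      END;

One run then gave the traceback and the variable values at the point of
failure, rather than just a completion code.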


> On the other hand, keeping track of ON units through procedure
> calls is another increase in the time needed for a call.


Again, unless the procedure is in a loop that is executed thousands
of times, you wouldn't notice the time.


>> That was important, not only in terms of
>> machine time, but also in terms of turn-around time, because
>> turn-around time in those days was as much as a week.
>
>> 20 x 20 is more than large enough.
>> It's the size of a typical matrix in a typical job.
>
> Pretty small for many problems, but then most bigger than
> that should be done in ways other than inversion, such
> as LU decomposition.


It was common then to use larger array sizes with matrix inversion.
The 20 x 20 matrix was a reasonable size for the timing test.


> [Even in single task systems, dynamic allocation is useful
> since it means that variables only take up space when a
> routine is active. The Fortran version of that was overlays,
> which were a lot klunkier. -John]


Agreed. One large job (a multiple regression) was able to run in PL/I
under PCP by organising its sections as BEGIN-END blocks:
as each section completed, its storage was freed, making it possible to
enter the following section, which required additional array storage.
Without that, the program couldn't run.
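

In outline the organisation was along these lines (a sketch only; REGRESS,
PASS1 and PASS2 are invented names, not the original job):

   REGRESS: PROCEDURE OPTIONS (MAIN);
      DECLARE (N, M) FIXED BINARY;
      GET LIST (N, M);
      BEGIN;                        /* section 1                       */
         DECLARE X(N, M) FLOAT;     /* automatic: allocated on entry   */
         CALL PASS1 (X);            /* to the block                    */
      END;                          /* X's storage is freed here       */
      BEGIN;                        /* section 2                       */
         DECLARE W(M, M) FLOAT;     /* fits in the space that section  */
         CALL PASS2 (W);            /* 1 has just given back           */
      END;
   END REGRESS;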


And while on the topic of running large programs in small memories:
an array of small integers was easily accommodated:


Instead of using BINARY, I chose FIXED DECIMAL (1).
That used one-quarter of the space of a 32-bit FORTRAN integer word,
and one-half the space of a default binary integer in PL/I.


PL/I on the /360 also permitted 16-bit signed binary integers, and these
could be used for large arrays with a saving of 50% over the FORTRAN integer.
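

The declarations, roughly (the array names and size are invented; the
storage figures are for the /360):

   DECLARE FLAGS (50000) FIXED DECIMAL (1);   /* packed decimal: 1 byte per element  */
   DECLARE CODES (50000) FIXED BINARY (15);   /* halfword:       2 bytes per element */
   /* a FORTRAN INTEGER (or PL/I FIXED BINARY (31)) is a fullword: 4 bytes */
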
[Unless someone has something to add that is more directly
related to compiler design, here endeth the skirmishing. -John]

