Related articles
Decades of compiler technology and what do we get? robert@prino.org (Robert AH Prins) (2012-04-22)
Re: PL/I nostalgia, was Decades of compiler technology and what do we gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-04-23)
Re: PL/I nostalgia robin51@dodo.com.au (robin) (2012-04-25)
Re: PL/I nostalgia gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-04-24)
Re: PL/I nostalgia robin51@dodo.com.au (robin) (2012-04-28)
Re: PL/I nostalgia gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-04-28)
Re: PL/I code robin51@dodo.com.au (robin) (2012-05-05)
Re: PL/I code gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-05-05)
Re: Fortran calls, was PL/I code gah@ugcs.caltech.edu (glen herrmannsfeldt) (2012-05-06)
Re: Archaic hardware (was Fortran calls) robin51@dodo.com.au (robin) (2012-05-09)
From: "robin" <robin51@dodo.com.au>
Newsgroups: comp.compilers
Date: Sat, 5 May 2012 00:45:24 +1000
Organization: Compilers Central
References: 12-04-070 12-04-077 12-04-081 12-04-082 12-04-084 12-04-085
Keywords: PL/I, history
Posted-Date: 05 May 2012 00:58:15 EDT
From: "glen herrmannsfeldt" <gah@ugcs.caltech.edu>
>
> (big snip, then I wrote)
>>> Fortran didn't allow for recursion until 1990, and even then you
>>> had to have the RECURSIVE attribute. Compilers could still generate
>>> non-recursive code without the attribute.
>
>> All that's irrelevant.
>
> There is, at least, more overhead in the procedure entry/exit
> sequence for recursive routines.
More overhead, maybe, but how much more? Registers usually have to be
saved, and a return address has to be preserved somewhere. When the
machine has a stack, those things go on the stack, and there appears
to be no extra overhead. When the machine does not have a hardware
stack, one must be simulated, or space must be made available for
saving those things at each recursive call. In that case, a request
to the OS may be needed for the storage, which adds to the overhead.
But again, how much extra?
> Now, since PL/I also has the
> RECURSIVE attribute the compiler could generate different code
> in the non-recursive case.
And did, in the case of PL/I-F, AFAIK.
BTW, the RECURSIVE attribute was required because of the possibility
of independent compilation, for it may not otherwise have been obvious
to the compiler that a particular procedure was part of a recursive
call chain.
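For readers not familiar with PL/I, a minimal sketch of the attribute
in question (the procedure name is mine):

    /* A procedure that calls itself must carry the RECURSIVE  */
    /* attribute; without it, the compiler is free to generate */
    /* cheaper, non-reentrant entry/exit code.                 */
    FACT: PROCEDURE (N) RETURNS (FIXED BINARY (31)) RECURSIVE;
       DECLARE N FIXED BINARY (31);
       IF N <= 1 THEN RETURN (1);
       RETURN (N * FACT (N - 1));
    END FACT;

Compile an otherwise identical procedure without RECURSIVE, and the
entry/exit sequence can be the cheaper, non-reentrant one.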
> OK, to get an actual compiler question into the discussion,
> are there any compilers that generate non-reentrant code for
> a language that allows recursion when it isn't being used?
IBM PL/I for Windows generates different code for
procedures that are identical except for the attribute RECURSIVE.
Similarly for DR PL/I for DOS.
>> Furthermore, the necessity in FORTRAN of having all storage
>> as static meant that programs employing large arrays or lots of
>> arrays used the processor inefficiently in that the
>> executable programs tied up more memory than was necessary.
>> Compared to PL/I, which provides dynamic arrays, considerably
>> less run-time memory was required.
>
> That is true. For a single task system, like many before OS/360 and
> like OS/360 PCP on smaller system, though, if it fits there isn't much
> cost to the wasted memory.
Correct as far as it goes, but you've missed the point. Such a
program may not run at all on a PCP system, because the sum of its
static storage requirements exceeds available memory.
However, in the same system, a similar PL/I program using dynamic
memory can run, because (1) there is no wasted memory, and (2) with
dynamic memory, not all of the storage is generally required at the
same time, resulting in significantly smaller memory requirements.
(For instance, storage taken by temporary arrays created in one
procedure is given back to the system when the procedure
terminates.) On a system like MVT, using excessive storage prevents
other programs from being loaded and (possibly) executed concurrently.
Where the run-time requirements of a program using static memory are
almost as large as available memory, once an MVT-style system decides
that the priority of the large program has become high enough, it will
stop scheduling small programs and let the running ones terminate,
until enough memory is free to run the large job.
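To illustrate for readers not familiar with PL/I (a sketch; the names
are mine), storage for an automatic array exists only while its
procedure is active:

    SMOOTH: PROCEDURE (A, N);
       DECLARE A (*) FLOAT BINARY;
       DECLARE N FIXED BINARY (31);
       DECLARE TEMP (N) FLOAT BINARY;  /* automatic array: storage is    */
       DECLARE I FIXED BINARY (31);    /* acquired at block entry, sized */
                                       /* by N, and freed at block exit  */
       DO I = 1 TO N;
          TEMP (I) = A (I);
       END;
       DO I = 2 TO N - 1;
          A (I) = (TEMP (I-1) + TEMP (I) + TEMP (I+1)) / 3;
       END;
    END SMOOTH;

When SMOOTH returns, the storage for TEMP becomes available again; a
FORTRAN program of the era would have held that storage statically for
the life of the program.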
>> [In case it is thought from the above that PL/I provides only
>> dynamic arrays, may I point out that PL/I provides dynamic
>> and static arrays, as well as controlled arrays.]
>
> I probably lost the thread that the comment was supposed to apply to.
> In a called procedure, it doesn't matter how the original array was
> allocated, it is referenced through the dope-vector (call by
> descriptor).
Correct. My remark was just by way of explanation for readers not
especially familiar with PL/I.
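By way of further explanation, a sketch of the three storage classes
(the names are invented):

    DEMO: PROCEDURE (N);
       DECLARE N FIXED BINARY (31);
       DECLARE TAB (100) FLOAT BINARY STATIC;    /* static: one copy, for   */
                                                 /* the life of the program */
       DECLARE WORK (N) FLOAT BINARY;            /* automatic: allocated at */
                                                 /* block entry, freed at   */
                                                 /* block exit              */
       DECLARE HEAP (*) FLOAT BINARY CONTROLLED; /* controlled: allocated   */
                                                 /* and freed explicitly;   */
                                                 /* repeated ALLOCATEs stack */
       ALLOCATE HEAP (2 * N);
       TAB, WORK, HEAP = 0;
       FREE HEAP;
    END DEMO;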
> In the case of a local array, the compiler can make some optimizations
> that it can't make for a procedure argument.
Like what? I know of no such restrictions in PL/I-F.
Nor in Windows PL/I, for that matter.
> Fortran 90 added assumed shape arrays, where the called array gets the
> shape, most likely through a descriptor similar to that from PL/I. In
> traversing an array through a descriptor, it isn't so easy to convert
> to a single loop.
In Fortran (90 or later), the descriptor holds, or can hold, the stride value.
Thus, a called procedure can access non-contiguous elements in a single loop.
> Through Fortran 77, the array passing method now called assumed size
> was usually used. A common way to use assumed size in Fortran was for
> the called routine to declare the argument as rank one and do all the
> index calculations internally.
But it was also common to use "adjustable" arrays,
and to deal with a matrix argument as a matrix dummy argument.
> Now, as Robin mentioned PL/I often made it easier to write correct
> programs. Processing a matrix through a rank one array is more
> difficult to get right.
Indeed.
>>> Simple PL/I expressions can require temporary arrays.
>>> (Usually with a warning message.)
>
>>> That didn't happen to Fortran until 1990, when machines were much
>>> faster.
>
>> That's relative, and in any case irrelevant.
>> Machines were much faster in 1966 compared to 1956.
>
> There is still much discussion on comp.lang.fortran on the speed
> difference between array expressions and explicit DO loops.
>
>>>> One other point is that the size of machines had increased
>>>> significantly in that early decade, permitting more language
>>>> features to be incorporated in PL/I.
>
>>> But the window was fairly small. IBM had originally expected to
>>> replace Fortran (and maybe COBOL) with PL/I. The compiler arrived
>>> late, ran slow, and generated much slower code.
>
>> You're wrong on both counts. The hardware was late. All the
>> compilers were late. The code from IBM's PL/I was just as good as
>> from IBM's Fortran compilers on the machine that we had. I still have
>> the results somewhere of test runs in FORTRAN and PL/I. It was the
>> link editor that took longer with PL/I than with FORTRAN.
>
> Certainly the link editor was always slow. If one wrote the same
> program in PL/I, the result might be close to that from Fortran.
>
> If one started using many of the PL/I features, though, it was very
> easy to get a much slower program. One might, for example, use FIXED
> DECIMAL variables that require conversion to binary when used as array
> subscripts. That is, at least, one problem that OS/360 Fortran
> programmers never had.
Decimal variables were not designed to be used as subscripts. They
are typically used to hold data for general computation, mostly, but
not exclusively, in commercial work. One application I recall was a
large array holding values smaller than 10. Decimal variables were
just right for that application, because FIXED DECIMAL(1) requires
only one byte of storage. Compared to FORTRAN, which used 4 bytes for
an integer, the large array in PL/I used one-quarter of the storage of
its FORTRAN counterpart. The same observation applies to arrays whose
values are -1, 0, and 1.
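That is, a declaration like this one (the name and size are mine):

    DECLARE FLAGS (100000) FIXED DECIMAL (1);  /* packed decimal:      */
                                               /* one byte per element */

takes 100,000 bytes, where the FORTRAN counterpart, INTEGER
FLAGS(100000), takes 400,000 bytes for values that never exceed a
single digit.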
>> I should have been more explicit. Code generation from IBM's
>> FORTRAN compilers on the machine that we had was reasonably
>> efficient too.
>
>> What is just as important is what happens when a program runs.
>> FORTRAN programs often halted (division by zero, floating-point
>> overflow) without any indication of where or why they halted.
>
> OS/360 Fortran has a nice traceback feature. I don't know when it was
> added, though. On the other hand, in the fairly common branch to a
> non-existing address, both Fortran and PL/I left you lost.
Branches to non-existent addresses were not at all "fairly common"
in PL/I.
In PL/I, for any significant error, not only did you get a traceback
in English, you also got the line number of the error location
and the line numbers of the statements that called the procedure.
I think I also mentioned in a previous post in this thread
that it was possible to print the values of PL/I variables
along with the traceback.
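The usual idiom is an ON-unit (a sketch; SNAP requests the traceback,
and PUT DATA with no list prints the variables known in the block, in
the form NAME= VALUE;):

    ON ERROR SNAP
       BEGIN;
          ON ERROR SYSTEM;  /* if the ON-unit itself fails, revert to  */
                            /* the default action instead of recursing */
          PUT DATA;         /* print variables by name, with values    */
       END;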
>> One published account (D. J. Kewley, "A Comparison between
>> Pascal, FORTRAN, and PL/I", ACJ, Feb. 1981, pp 27-8)
>> claimed that Pascal was faster than PL/I,
>> and the author produced timing figures to "prove" it.
>> The author failed to specify the PL/I option "REORDER" that would
>> permit optimisation to take place. Furthermore, he compared
>> one Pascal program performing complex arithmetic, with a PL/I
>> program doing the same. Not supporting complex arithmetic,
>> the Pascal program simulated those operations. When the PL/I
>> version was rewritten using complex arithmetic,
>> and with the REORDER option on the PROCEDURE statement,
>> a 47% increase in speed was obtained. (see my refutation, ACJ,
>> Aug. 1981, p. 107)
>
> (snip)
>
>> FORTRAN programmers tended to mimic FORTRAN style in
>> their PL/I code that was inappropriate in PL/I.
>> That caused PL/I programs to run slower than they should have.
>
> (snip)
>>> In some cases you can do that, but it isn't so easy in a subroutine,
>>> when the array might not be contiguous.
>
>> IIRC, in IBM's F-compiler, elements in arrays were contiguous,
>> even in subroutines. [Even were that not so, a single loop suffices.]
>
> PL/I allows array cross sections that can be non-contiguous.
The elements in a cross-section are contiguous for rows,
but not for columns (in a matrix). However, the elements are
separated from each other by a constant amount, so a single loop
suffices (as you would expect, because only one subscript varies).
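For example (a sketch; the names are mine):

    DEMO: PROCEDURE;
       DECLARE A (100, 100) FLOAT BINARY;
       A = 1;
       CALL TOTAL (A (5, *));   /* a row cross-section: contiguous        */
       CALL TOTAL (A (*, 5));   /* a column cross-section: non-contiguous,*/
                                /* but with constant element spacing,     */
                                /* recorded in the descriptor             */
    TOTAL: PROCEDURE (V);
       DECLARE V (*) FLOAT BINARY;
       DECLARE I FIXED BINARY (31);
       DECLARE S FLOAT BINARY;
       S = 0;
       DO I = LBOUND (V, 1) TO HBOUND (V, 1);  /* a single loop suffices: */
          S = S + V (I);                       /* only one subscript      */
       END;                                    /* varies                  */
       PUT SKIP LIST (S);
    END TOTAL;
    END DEMO;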
> I believe some Fortran compilers now generate a test for a contiguous
> array and then different code to process the two cases. As far as I
> know, no PL/I compilers do that. (Then again, I haven't looked that
> carefully.)
>>> Also, since array operations were added to Fortran, they are one of
>>> the biggest causes for complaints about slow programs.
>
> (snip)
>
>>> One reason is that people sometimes write array expressions that
>>> require much more computation than they would written as loops.
>
>> That's their choice.
>
> That is true. Still, people seem to like doing it.
>
>>> PL/I only supports call-by-descriptor for arrays and strings.
>>> That is good, but has more overhead.
>
>> Than what? Not having descriptors?
>> Descriptors are necessary to deal with dynamic arrays.
>> Without descriptors, it would be back to pre-Fortran 90,
>> where only the address of the first element of an array was
>> typically passed.
>
> The generated code to process assumed size arrays is often faster than
> assumed shape. In many cases it is a worthwhile tradeoff on today's
> machines.
I think that you will find that there's little if any difference.
> (snip)
>
>>>> [Considering what a rush job PL/I was, it's a remarkably good
>>>> language. -John]
>
>>> I didn't know about that until recently when I bought "History
>>> of Programming Languages", edited by Richard L. Wexelblat, which
>>> covers the timeline for the PL/I language specification. (Writing
>>> the actual compiler was a separate timeline.)
>
>>> As well as I can tell, it was October 1963 that the decision to
>>> design a new language, instead of going to Fortran VI, was final,
>>> and the group formed to write the specification. They were told
>>> that it needed to be done by December 1963. In December, it was
>>> extended to January 1964, and then slipped to February.
>
>> That those early dates slipped is irrelevant, as the S/360 wasn't
>> even announced until 1964. In any case, a slippage by a few months
>> was unimportant.
>
> From that description, they had four months. With the earlier
> deadline, they would have had to have most of it done in two months.
> For a language as complicated as PL/I, that isn't much time at all.
Two months was probably expecting a bit much, but recall that
I said that the decisions about what to include and what to omit
would already have been made. I suspect that the work remaining at
that time (October) would have been to settle on an appropriate syntax.
> (snip)
>
>> [The code from PL/I F was comparable to Fortran G, but much worse than
>> Fortran H. The PL/I optimizing compiler's code was better, but still
>> not as good as Fortran H and its descendants. -John]
>
> As long as you don't use too many fancy PL/I features, that should be
> true. Of course you didn't have those features in Fortran IV. There is
> a tendency to try out new features of a new language, though.
> [I got some astonishingly awful code when I tried to use an array of
> 12 bit fields.
BIT strings usually need to be ALIGNED for best performance.
[Didn't help -John]
> As I recall, the code converted the fields to decimal
> and back, for reasons I could not begin to guess.
I can't imagine why that would have been done. The usual conversion
for BIT strings is to BINARY. In point of fact, any conversion from
BIT to arithmetic is to BINARY first, and then to any other type, if
required.
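For what it's worth, the declarations involved (a sketch; the names
are mine):

    DECLARE PACKED (1000) BIT (12);          /* UNALIGNED by default: the  */
                                             /* 12-bit fields are packed   */
                                             /* densely and are costly to  */
                                             /* address                    */
    DECLARE PADDED (1000) BIT (12) ALIGNED;  /* each element starts on a   */
                                             /* byte boundary (16 bits     */
                                             /* each), cheaper to extract  */
    DECLARE K FIXED BINARY (15);
    K = PADDED (1);    /* BIT to arithmetic converts via BINARY */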
> Also, looking at the
> manuals in Bitsavers, they were still adding features and
> optimizations to PL/I F at least as late as 1968. -John]
As IBM has been doing with their current Windows PL/I compiler.