Re: Have we reached the asymptotic plateau of innovation in programming language design?

Rock Brentwood <federation2005@netzero.com>
Sat, 17 Mar 2012 12:31:13 -0700 (PDT)

          From comp.compilers


From: Rock Brentwood <federation2005@netzero.com>
Newsgroups: comp.compilers
Date: Sat, 17 Mar 2012 12:31:13 -0700 (PDT)
Organization: Compilers Central
References: 12-03-012
Keywords: syntax, design
Posted-Date: 18 Mar 2012 03:33:32 EDT

On Mar 7, 5:52 am, Rui Maciel <rui.mac...@gmail.com> wrote:
> Quotes:
> - But the truth of the matter is that ever since I finished my Ph.D. in the
> late 90s, and especially since I joined the ranks of Academia, I have been
> having a hard time convincing myself that research in PLs is a worthy
> endeavor.
> So, what are your views on this subject?
>
> Rui Maciel
> [Personally, I'd say there's been precious little new in programming
> languages since Simula gave us OOP in the late 1960s. In your responses,
> please remember this is comp.compilers, not comp.semicolon-placement.flame.
> -John]


The answer is simple: whenever and wherever you hear such a question
posed, no matter what the context or situation, no matter what the
issue, it is *always* a clear-cut red flag that you are on the cusp of
a major paradigm shift and that the older paradigm (and the older
generation along with it) has simply run its course. Equally clear is
that during such times, when you start to see things as having been
exhausted, you are also dating yourself and identifying which side of
the paradigm boundary you reside on.


The best way to get a glimpse or a proper understanding of just what
the key issues underlying and driving the paradigm shift are is to
look specifically for all the issues that have been swept under the
rug (frequently through a layer of defense mechanisms that bury the
issue under "established" conventions, divert the issue at hand, or
try to pooh-pooh it as somehow "insignificant"), or for the issues
that have been left up in the attic as too awkward to bring down into
the living room -- furniture that nobody has really been able to make
fit.


In the present case, one issue is easy to see, and it has driven some
recent changes in the latest standards for programming languages (even
C): the near-complete absence of a consensus, academically derived,
well-established formalism (along the same lines as the Algol
standard) for language-level concurrency. Notwithstanding APL,
language-level concurrency is a facility that is nearly absent from
the core of "modern" languages -- only being provided (for instance)
as a layer of bureaucracy on top of a language lacking it at its core
(thus compounding the problem), as in C++.


There are two major paradigms in programming. One may be likened to
the composer who writes a score for a song or melody, or to a camera
operator who records a video. It is present and dominant at the
desktop or application level (regardless of the type of OS hosting it,
be it multitasking or not). This is what all Algol-derived languages
are built around. The differentiation between Procedural, Object-Based
(to use Lippman's term) and Object-Oriented does not change any of
this and is little more than a surface or cosmetic difference within
the first of the two paradigms.


Which gets to the second paradigm: programming that may be likened to
digital cinematography, or to the full orchestration of a film score
or symphony. This is the one dominant in embedded applications -- but
only when they are done right, as opposed to how they are frequently
done, in the "giant control loop, with everything else decimated into
bits and pieces governed by explicit finite state machines" style that
results when the songwriter type tries to take on the task of being an
orchestral composer.


Programs in this second paradigm are easy to write in machine-level
languages and in explicitly parallel languages like VHDL, but the
paradigm is inaccessible from the so-called "high-level languages"
except via the above-mentioned extra layer of bureaucracy: a "threads
library" or some such deal. That latter approach is also how POSIX 4
(now POSIX 1a) approached the matter, making the whole paradigm into
an API, as if it were somehow an afterthought or addendum.
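
To make that "layer of bureaucracy" concrete, here is a minimal POSIX
threads sketch (assuming a POSIX system; link with -lpthread). All the
concurrency lives in a library call that takes a function pointer; the
language core knows nothing about it.

    /* Concurrency as a bolted-on API: pthread_create() is just a
       library call; the language itself has no notion of the thread. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)    /* the "thread" is only a function */
    {
        printf("worker says: %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* Opaque to the compiler: just another function call. */
        if (pthread_create(&tid, NULL, worker, "hello from the library") != 0)
            return 1;
        pthread_join(tid, NULL);
        return 0;
    }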


A concrete example of where this distinction proves critical: take a
look at the Windows API. Never mind the fact that it has a couple
thousand system calls or more; the real issue here is that the OS is
explicitly message-passing. What you REALLY want is a program that
runs like this:
    function f() {
        ... do stuff ... wait for A ...
        while (B) {
            ... do stuff ... wait for B ... do stuff ...
        }
    }

    main routine:
        ... do stuff ... f() ... do stuff ...


where the waits are places where the program is awaiting events or
messages. There is no support for this at the *language level* in the
core of any common language. Nor is it something you can just put in
as an add-on, because of the subtle issues involving variable scopes
and lifetimes. That's why, for instance, instead of merely setting up
some kind of "threads" library, C now also has a "thread local" kind
of variable and a new keyword to reflect this.
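
The C11 additions alluded to look roughly like this (a sketch;
<threads.h> is an optional part of C11, so not every implementation
ships it):

    /* C11: the _Thread_local storage-class keyword plus the optional
       <threads.h> library.  Each thread gets its own copy of "counter". */
    #include <stdio.h>
    #include <threads.h>

    static _Thread_local int counter = 0;   /* per-thread, not shared */

    static int body(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 3; i++)
            counter++;                  /* touches this thread's copy only */
        printf("my counter: %d\n", counter);
        return 0;
    }

    int main(void)
    {
        thrd_t t;
        thrd_create(&t, body, NULL);
        body(NULL);                     /* main thread has its own counter */
        thrd_join(t, NULL);
        return 0;
    }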


What Windows does is effectively force you to gut this structure and
turn it inside-out, so that the boundaries of the routine at the
application level become: (exit) ... wait for A ... ((re-)enter). The
re-entry is then kicked off by a "callback function" (the Windows
equivalent of what in embedded systems is referred to as an interrupt
handler or other event handler). The paradigm puts the handlers out in
front, thereby effectively forcing you into the "single songwriter
trying to thread an orchestra into a giant control loop" mode.


Windows supports threads, but only as an afterthought (the threads
part of the API). It's written in C++, which does not have concurrency
in its core.


That means routine f() above would need an entry point for each wait
-- including the one INSIDE the control flow structure. That means, in
turn, that the control flow structure gets decimated and an explicit
finite state machine crops up.
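
To make the decimation visible, here is a schematic rendering of what
f() above degenerates into (the names -- on_event, EV_A, EV_B -- are
hypothetical placeholders for illustration, not actual Windows calls):

    /* Schematic only: the waits have become re-entry points of a
       callback, so the straight-line control flow of f() now lives in
       the 'st' variable as an explicit finite state machine. */
    #include <stdio.h>

    enum event { EV_A, EV_B };
    enum state { WAITING_FOR_A, WAITING_FOR_B, DONE };

    static enum state st = WAITING_FOR_A; /* control flow, reified as data */
    static int remaining = 2;             /* stands in for the condition B */

    static void do_stuff(const char *what) { printf("do stuff: %s\n", what); }

    /* The "callback": the framework calls this for every event/message. */
    void on_event(enum event ev)
    {
        switch (st) {
        case WAITING_FOR_A:
            if (ev == EV_A) {             /* was: "... wait for A ..." */
                do_stuff("after A");
                st = remaining > 0 ? WAITING_FOR_B : DONE;
            }
            break;
        case WAITING_FOR_B:
            if (ev == EV_B) {             /* was: the wait INSIDE the loop */
                do_stuff("after B");
                st = --remaining > 0 ? WAITING_FOR_B : DONE;
            }
            break;
        case DONE:
            break;
        }
    }

    int main(void)                        /* stand-in for the message loop */
    {
        on_event(EV_A);
        on_event(EV_B);
        on_event(EV_B);
        return 0;
    }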


I've seriously considered just throwing out the old paradigm entirely
and redoing a language from scratch, with native-level concurrency
built into its very core -- just to have a better and simpler way to
harness the full power of a system like the Windows API.


This is not just Windows. To give you another example: I was briefly
involved, early on, with a multi-user chat system that used the
socket-level API. The client source distributed to me and others was
2000 lines, programmed in the above-mentioned single-songwriter
paradigm. I created a server for it using the same paradigm (before I
became acquainted and experienced with programming as an
orchestrator), and the server was also about 2000 lines.


A few years later, I literally decimated the client, turning it into
a 99-line program by redoing it as a concurrent program. With UNIX,
the only good way to do that is to fork() the concurrent processes and
use a combination of signal() and pipe() to get *both* the wait and
the message passing, *without* having to tear up the control flow
structures and turn everything into a giant control loop. So the
program was just two small routines working in parallel: one for
handling the user, the other for handling the server.
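
A minimal sketch of that technique (assuming a POSIX system; this is
an illustration of the approach, not the actual 99-line client, and
the signal() handling is omitted for brevity): fork() yields two
straight-line routines running in parallel, and a blocking read on a
pipe() supplies the wait without tearing up the control flow.

    /* Two processes, each with linear control flow; the blocking read
       IS the wait, and the pipe carries the messages. */
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {           /* child: the "handle the server" half */
            char buf[64];
            ssize_t n;
            close(fd[1]);
            while ((n = read(fd[0], buf, sizeof buf)) > 0)   /* wait here */
                (void)write(STDOUT_FILENO, buf, (size_t)n);
            _exit(0);
        }

        /* parent: the "handle the user" half */
        close(fd[0]);
        const char *msgs[] = { "hello\n", "from the other process\n" };
        for (size_t i = 0; i < sizeof msgs / sizeof msgs[0]; i++)
            (void)write(fd[1], msgs[i], strlen(msgs[i]));
        close(fd[1]);                /* EOF tells the child to finish */
        wait(NULL);
        return 0;
    }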


I shouldn't have had to jump through hoops just to compensate for the
lack of native-level concurrency. With the Windows API, both the
programming *and* the understanding and explanation (as well as the
conception) of significant portions of the API undergo a similar level
of decimation when rendered in the frame of mind of an orchestrator,
rather than in that of a single-score songwriter. The latter is the
frame of mind that the MSDN library itself is written in and for (and
obviously by).
[I've been saying we've run out of language ideas for 20 years.
Pretty slow cusp, if you ask me. And if there's a paradigm
shift, to what? - John]

