From: Martin Ward <martin@gkc.org.uk>
Newsgroups: comp.compilers
Date: Fri, 6 Apr 2018 16:09:33 +0100
Organization: Compilers Central
References: <49854345-f940-e82a-5c35-35078c4189d5@gkc.org.uk> 18-03-103 18-03-042 18-03-047 18-03-075 18-03-079 18-03-101
Keywords: history, design
Posted-Date: 06 Apr 2018 11:54:27 EDT
On 30/03/18 15:20, Anton Ertl wrote:
>> The theory is that "the attitude of the Algol 60 designers towards
>> language design is what led to these innovations appearing".
>> Clearly, this "attitude" was present before, during and after
>> the development of Algol 60.
> Ok, so this theory cannot even be verified or falsified for the
> features that came before Algol 60.
John asked us to speculate what might have happened differently,
not produce empirically verifiable theories :-)
I came across this quote the other day, which suggests that the change
in attitude had already started by 1979:
"The call-by-name mechanism is so expensive that modern languages
have mostly dropped it in favour of the semantically different,
but operationally more efficient, call-by-reference"
--"Understanding and Writing Compilers", Richard Bornat, Macmillan 1979.
Note that the justification offered for dropping call-by-name is purely
one of efficiency: the semantic difference is accepted as the price of
a cheaper implementation.
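(To make the cost concrete, here is a small sketch of my own in C, not
taken from Bornat's book: call-by-name has to pass the argument
expression as a "thunk" which is re-evaluated at every use, whereas
call-by-reference just passes an address. The names sum_by_name,
sum_by_ref and a_of_i are invented for the illustration.)

    #include <stdio.h>

    /* Environment for the thunk representing the expression a[i]. */
    struct env { const double *a; int *i; };

    /* The thunk: re-evaluates a[i] using the *current* value of i. */
    static double a_of_i(void *p)
    {
        struct env *e = p;
        return e->a[*e->i];
    }

    /* Call-by-name style: 'term' is a thunk that must be called again
       on every use, which is what makes the mechanism expensive. */
    static double sum_by_name(int *i, int lo, int hi,
                              double (*term)(void *), void *env)
    {
        double s = 0.0;
        for (*i = lo; *i <= hi; (*i)++)
            s += term(env);
        return s;
    }

    /* Call-by-reference style: the callee just receives an address. */
    static double sum_by_ref(const double *a, int lo, int hi)
    {
        double s = 0.0;
        for (int j = lo; j <= hi; j++)
            s += a[j];
        return s;
    }

    int main(void)
    {
        double a[] = { 0.0, 1.0, 2.0, 3.0 };
        int i;
        struct env e = { a, &i };
        printf("by name: %g\n", sum_by_name(&i, 1, 3, a_of_i, &e));  /* 6 */
        printf("by ref:  %g\n", sum_by_ref(a, 1, 3));                /* 6 */
        return 0;
    }

Every use of the by-name parameter is an indirect call, which is exactly
the expense Bornat is talking about.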
>> It should be easy to falsify the theory: where are the new language
>> features that have been invented in the last, say, twenty years?
>
> I mentioned some things in the grandfather posting. Do you want me to
> repeat them?
You mentioned generics and lambdas, both of which you acknowledge
have been around for a long time: lambdas have been around
since the original Lisp!
You also mentioned combining static type checking with explicit
memory management: again two features which have been around
for decades. One might ask: why on earth do we still need explicit memory
management in this day and age? Because it is more efficient
to implement? Doesn't that prove what I have been saying all along?
Re "empirical software engineering", I apologise that I mis-remembered
the term used and sent you on a wild goose chase. :-(
"Search based software engineering" might be a more accurate term.
Two examples follow (with a rough sketch of their common
generate-and-test loop afterwards):
"Observation Based" or "Language Independent" program slicing
involves deleting random chunks of code and then compiling
and re-testing the result (if possible) to see if it is
a "valid" slice (where a "valid" slice is defined as
"one which passes the test suite"):
http://coinse.kaist.ac.kr/publications/pdfs/Binkley2014ud.pdf
"Automated Software Transplantation" involves copying random blocks
of code from one program to another and testing if the resulting
program still "works" (ie passes its test suite) and also
contains some functionality from the other program
(eg it can now handle a new file format).
http://crest.cs.ucl.ac.uk/autotransplantation/downloads/autotransplantation.pdf
https://motherboard.vice.com/en_us/article/bmj9g4/muscalpel-is-an-algorithmic-code-transplantation
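(Here is a deliberately simplified sketch of my own, not taken from the
papers above, of the generate-and-test loop behind observation-based
slicing: delete a random chunk of lines, rebuild, re-run the test suite,
and keep the deletion only if the candidate still passes. Transplantation
uses essentially the same loop, but splices code in from a donor program
and adds a test for the new functionality. The file names original.c and
candidate.c and the script run_tests.sh are hypothetical placeholders.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define MAX_LINES 10000
    #define MAX_CHUNK 10

    static char *line[MAX_LINES];
    static int   keep[MAX_LINES];
    static int   nlines;

    /* Write the currently kept lines out as the candidate slice. */
    static void write_candidate(void)
    {
        FILE *out = fopen("candidate.c", "w");
        if (!out) return;
        for (int i = 0; i < nlines; i++)
            if (keep[i])
                fputs(line[i], out);
        fclose(out);
    }

    /* A candidate is a "valid" slice if it compiles and passes the tests. */
    static int candidate_is_valid(void)
    {
        write_candidate();
        if (system("cc -o candidate candidate.c 2>/dev/null") != 0)
            return 0;                          /* does not even compile */
        return system("./run_tests.sh ./candidate") == 0;
    }

    int main(void)
    {
        FILE *in = fopen("original.c", "r");   /* hypothetical input program */
        char buf[4096];
        while (in && nlines < MAX_LINES && fgets(buf, sizeof buf, in)) {
            line[nlines] = strdup(buf);
            keep[nlines++] = 1;
        }
        if (in) fclose(in);
        if (nlines == 0) return 1;

        srand((unsigned)time(NULL));
        for (int attempt = 0; attempt < 1000; attempt++) {
            /* Tentatively delete a random chunk of lines ... */
            int start = rand() % nlines;
            int len   = 1 + rand() % MAX_CHUNK;
            int saved[MAX_CHUNK];
            for (int k = 0; k < len && start + k < nlines; k++) {
                saved[k] = keep[start + k];
                keep[start + k] = 0;
            }
            /* ... and undo it if it breaks the build or the test suite. */
            if (!candidate_is_valid())
                for (int k = 0; k < len && start + k < nlines; k++)
                    keep[start + k] = saved[k];
        }
        write_candidate();                     /* candidate.c is the final slice */
        return 0;
    }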
> It may not be the direction
> you favour, but it would be something new, wouldn't it?
It is like introducing alchemy into the chemistry department:
let's forget all the theory and all the maths and just smush
random chemicals together and see what happens!
Yes, it's something new, and no, it's not a direction I favour.
>> Where are the powerful new languages which make Haskell
>> look like Dartmouth BASIC?
>
> You mean a language that is even harder to learn and even less
> practical than Haskell?-)
Modern popular languages are neither powerful nor easy to learn.
The latest C standard is over 700 pages (even the Haskell Language
Report is only 329 pages!). In the pursuit of efficiency over
everything, C leaves so many operations "undefined" (there are
around 199 examples of undefined behaviour) that simply
writing a piece of code to carry out two's complement arithmetic
(in a fully standards-compliant manner) is a challenge that will
tax the most experienced programmers: and the result probably compiles
into very inefficient object code even on a processor which has
native two's complement arithmetic operations!
https://wiki.sei.cmu.edu/confluence/display/c/INT32-C.+Ensure+that+operations+on+signed+integers+do+not+result+in+overflow
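(As an illustration, here is a small sketch of my own: the precondition
test is the standard INT32-C pattern from the page above, the rest is my
framing. Even adding two signed ints without undefined behaviour takes a
comparison-laden check before the add, and getting genuine wrap-around,
two's complement behaviour means detouring through unsigned arithmetic,
where the conversion back to int is merely implementation-defined rather
than undefined.)

    #include <limits.h>
    #include <stdio.h>

    /* Signed addition that reports overflow instead of invoking UB. */
    int checked_add(int a, int b, int *result)
    {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return 0;                 /* would overflow: refuse to add */
        *result = a + b;
        return 1;
    }

    /* Wrapping addition: unsigned arithmetic is defined modulo 2^N, so the
       detour through unsigned int never triggers UB.  Converting the result
       back to int is implementation-defined (or may raise an
       implementation-defined signal) in C99/C11, not undefined, and on a
       typical two's complement machine it gives the expected wrap. */
    int wrap_add(int a, int b)
    {
        return (int)((unsigned int)a + (unsigned int)b);
    }

    int main(void)
    {
        int r;
        if (checked_add(INT_MAX, 1, &r))
            printf("sum = %d\n", r);
        else
            printf("overflow detected\n");
        printf("wrapped = %d\n", wrap_add(INT_MAX, 1));
        return 0;
    }

The checked version costs several compares and branches around what the
hardware could otherwise do in a single add instruction.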
For C++ there is the SafeInt class, which tries to do this;
but examples of undefined behaviour were detected in that library,
which caused it to give incorrect results on some compilers:
http://www.cs.utah.edu/~regehr/papers/overflow12.pdf
One effect of the increases in memory size and CPU speed over
the decades is that it allows programmers to write much larger
programs and implement solutions to much more complex problems.
This makes mastering complexity far more important.
A language which is an order of magnitude more powerful
would mean that programmers need to write and comprehend
an order of magnitude less code: and each individual programmer
or small team could work on problems which are an order of
magnitude more complex. If this language takes an order of magnitude
longer to learn and fully understand, then it is still worth it!
Unfortunately, this flies in the face of modern business practice
where "programmers" are a cheap resource to be hired and put to work
generating lines of code.
--
Martin
Dr Martin Ward | Email: martin@gkc.org.uk | http://www.gkc.org.uk
G.K.Chesterton site: http://www.gkc.org.uk/gkc | Erdos number: 4