From: BGB <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Fri, 09 Mar 2012 21:14:36 -0700
Organization: albasani.net
References: 12-03-012 12-03-017
Keywords: history, design
Posted-Date: 09 Mar 2012 23:30:07 EST
On 3/8/2012 6:02 PM, Ian Lance Taylor wrote:
> Rui Maciel <rui.maciel@gmail.com> writes:
>
>> - And here's the first itchy point: there appears to be no correlation
>> between the success of a programming language and its emergence in the form
>> of someone's doctoral or post-doctoral work. This bothers me a lot, as an
>> academic. It appears that deep thoughts, consistency, rigor and all other
>> things we value as scientists aren't that important for mass adoption of
>> programming languages.
>
> As a non-academic, I agree. None of those things matter very much to me
> when it comes to actually getting stuff done. They are not bad things
> to have, but they are not the things that matter.
much agreed.
I have also had some arguments revolving around "originality" and
"minimalism", both of which are goals I didn't find particularly
important, but some people take them fairly seriously.
like, what really is the problem if the language has a
conventional-looking syntax, and a potentially "not as simple or elegant
as it could be" implementation? most developers probably don't care too
much, otherwise C++ and C# probably wouldn't be nearly as heavily used
as they are currently, and one can infer that both are probably at least
doing *something* right.
much the same goes for things like "user-defined syntax":
how is the typical programmer (or nearly anyone, for that matter) going
to benefit from having a PEG integrated into the language's syntax/parser?
I tend to think this idea is a misfeature.
a better idea IMO (for allowing DSLs, ...) is to allow an independent
parser with an independent (albeit presumably fixed) syntax. this could
mean a PEG-based parser or whatever, which is deliberately invoked to
parse/evaluate the new syntax.
the above could be aided via the use of block-string syntax:
myParserEval("""
lots of code and stuff...
""");
or:
myParserEval(<[[
lots of code and stuff...
]]>);
I also don't feel this is a particularly severe limitation.
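to make the idea concrete, here is a rough sketch in Python (standing in for the scripting language; the parser_eval name and the tiny arithmetic grammar are made up for illustration, not anyone's actual API):

```python
# Sketch of the "explicitly invoked DSL parser" idea: the host program hands
# an embedded block string to a separate parser, rather than the language's
# own grammar being user-extensible. The grammar here is a toy:
#   expr <- term ('+' term)* ; term <- atom ('*' atom)* ; atom <- NUM / '(' expr ')'
import re

def parser_eval(src):
    """Parse and evaluate a tiny arithmetic DSL passed in as a block string."""
    tokens = re.findall(r"\d+|[+*()]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok=None):
        nonlocal pos
        t = tokens[pos]
        if tok is not None and t != tok:
            raise SyntaxError("expected %r, got %r" % (tok, t))
        pos += 1
        return t

    def expr():
        v = term()
        while peek() == "+":
            eat("+")
            v += term()
        return v

    def term():
        v = atom()
        while peek() == "*":
            eat("*")
            v *= atom()
        return v

    def atom():
        if peek() == "(":
            eat("(")
            v = expr()
            eat(")")
            return v
        return int(eat())

    return expr()

# the DSL lives inside a block string; the parser is invoked explicitly:
result = parser_eval("""
    1 + 2 * (3 + 4)
""")
print(result)   # 15
```

the point being: the new syntax is fully walled off inside the string, so the host language's parser never has to know about it.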
>> - So one pertinent question is: given that not much seems to have emerged
>> since 1979 (that's 30+ years!), is there still anything to innovate in
>> programming languages? Or have we reached the asymptotic plateau of
>> innovation in this area?
>
> As others have mentioned, there may be some good ideas to come in the
> area of safer and more efficient parallel programming. I like the CSP
> model, but perhaps there is something better. I personally think the
> model of threads, mutexes and condition variables has so far proven too
> difficult to use correctly for most programmers. That goes double for
> the atomic operations and barrier model. And functional and dataflow
> programming languages do not appear to have gotten much adoption in
> practice.
in my case, I am actually using a mix of threading and message passing.
the main way this differs from CSP is mostly that message
sending/receiving is typically asynchronous and non-blocking (and
accomplished via explicit message-channel objects).
mutexes are also available, in addition to the "synchronized(obj) { ...
}" syntax used by Java.
however, as-is, this area hasn't gotten as much use or development in my
case, as my use-cases tend to be better served by an event-driven
programming style than by explicit concurrency features.
> I also think there is more room for thought about programming in the
> large. Many software shops these days are huge, producing programs that
> are far too large for anybody to keep in their heads. As a side-effect,
> many programmers spent a lot of time performing maintenance of various
> sorts. What can languages do to help? Refactoring is just the most
> obvious example in this space, and even there it is clear that some
> languages support refactoring far better than others. Other areas
> related to language are speed of development, dependency management,
> ease of debugging, modularity, ease of performance analysis, no doubt
> many more.
potentially.
in my case, I haven't really been designing things in terms of
particular objectives, but more in terms of what things seem interesting
to me, are an annoyance, or could be helpful with the task at hand.
so, I made what is mostly a scripting language, and ended up bolting on
features intended for scale / writing "industrial strength" code, much
of which has amounted mostly to syntax sugar (and/or some amount of
micro-optimization).
for a while, I have been stuck in an "inter-JIT period", as my old JITs
basically fell into being non-functional and I was stuck mostly with an
interpreter (albeit I did migrate from a "big-switch" interpreter to
threaded code a while ago). most of this was due to "bit rot", and this
stuff not really being a huge priority: my VM internals have changed
significantly since the last JIT still worked, and my prior JITs were
very hackish and inflexible, one getting hopelessly out of date, and the
other being seemingly nearly impossible to debug.
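as a rough illustration of the big-switch vs. threaded-code distinction (sketched in Python; a real threaded-code interpreter would more likely use function-pointer arrays or computed gotos in C, and this bytecode format is entirely made up):

```python
# Two interpreters for the same toy stack bytecode: one re-decodes opcodes
# on every step ("big switch"), one translates the opcode stream once into
# a list of handler functions and then just runs down that list (a loose
# analogue of threaded code).
PUSH, ADD, MUL, HALT = range(4)

def run_switch(code):
    """Big-switch style: dispatch on the raw opcode every iteration."""
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == HALT:
            return stack.pop()

def run_threaded(code):
    """Threaded style: decode once into (handler, arg) pairs up front, so
    the inner loop never examines an opcode again."""
    def op_push(st, arg):
        st.append(arg)
    def op_add(st, arg):
        b, a = st.pop(), st.pop(); st.append(a + b)
    def op_mul(st, arg):
        b, a = st.pop(), st.pop(); st.append(a * b)

    thread, pc = [], 0            # one-time translation pass
    while code[pc] != HALT:
        op = code[pc]; pc += 1
        if op == PUSH:
            thread.append((op_push, code[pc])); pc += 1
        elif op == ADD:
            thread.append((op_add, None))
        elif op == MUL:
            thread.append((op_mul, None))

    stack = []
    for handler, arg in thread:
        handler(stack, arg)
    return stack.pop()

prog = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]   # (2 + 3) * 4
print(run_switch(prog), run_threaded(prog))          # 20 20
```

the win is that the decode cost is paid once per instruction rather than once per execution, which matters a lot for hot loops.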
recently (as in, mostly last weekend) I got around to mostly
implementing a new JIT, but it is still a bit far from being usable or
complete (large chunks of the ISA are stubs, many cases are not
implemented, ...).
but, performance hasn't been a huge killer, and in many cases native
code was still being generated (even if the bulk of the execution/ISA is
still being handled by the interpreter).
(ironically, the natively generated code tends to use a larger slice of
the total execution time than the interpreter proper, at least according
to the profiler...).
however, it doesn't sound all that impressive to say "yeah, the
language runs in an interpreter and is around 100x slower than native C
code..." (along with "yeah, most of my 3D engine is also written in
plain C, with the scripting language mostly used for misc stuff, like
scripting and eval and similar...").
much more impressive-sounding would be "it has all of these nifty
dynamic features and operates nearly as fast as C", even if, at this
point, that claim would be a bit unrealistic.
side note:
the new JIT uses function-at-a-time compilation, with a modified ABI
(similar to cdecl on x86 and the Win64 ABI on x86-64, but using EBX and
RBX for passing the VM context, with things like "this" and non-local
scope being accessed indirectly via this context), as well as some
amount of link-time code generation. technically, a given function only
holds its own locals and arguments, and depends on the VM context for
nearly everything else (the currently generated code is also a bit
naive, but I am resisting the urge to micro-optimize it, at least
until the thing is fully implemented and working).
but, "scripting" and "interfacing with C" remain as its driving
use-cases, and "looking sort of like C" as a secondary use-case.