Related articles
Is There Still a Need for "Turbo" Compilers? jlforrest@berkeley.edu (Jon Forrest) (2008-03-17)
Re: Is There Still a Need for "Turbo" Compilers? DrDiettrich1@aol.com (Hans-Peter Diettrich) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? nmh@t3x.org (Nils M Holm) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? marcov@stack.nl (Marco van de Voort) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? haberg_20080313@math.su.se (Hans Aberg) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? jacob@nospam.org (jacob navia) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? dot@dotat.at (Tony Finch) (2008-03-18)
Re: Is There Still a Need for "Turbo" Compilers? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-03-18)
[2 later articles]
From: Hans-Peter Diettrich <DrDiettrich1@aol.com>
Newsgroups: comp.compilers
Date: Tue, 18 Mar 2008 06:44:50 +0100
Organization: Compilers Central
References: 08-03-067
Keywords: performance, history, comment
Posted-Date: 18 Mar 2008 09:06:37 EDT
Jon Forrest wrote:
> Those of us who have been around a while still remember the miracle
> of Borland's "Turbo" languages. They were so much faster than
> anything else available at the time that they made the compile/link
> step take a negligible amount of time. Given how slow I/O was in
> those days, this was a very welcome development.
Turbo C was still much slower than Turbo Pascal, due to the "nature"
of the languages.
> Turbo languages sacrifice code optimization for quick build time,
> and are more suited for development and debug stages than final code
> production. They also avoid I/O by keeping the output of compiler
> stages in memory.
A RAD environment will still try to keep all (compiled) information in
memory; it's a matter of how tightly the compiler is integrated into
the IDE. And it's a matter of the compiler options, of course, where a
syntax check or debug build need not be optimized, in contrast to a
release build.
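To make that concrete with gcc (the file name below is only a
placeholder, and the flags are just the usual ones, nothing
IDE-specific):

    gcc -O0 -g -c module.c   # quick, unoptimized compile for the edit/debug cycle
    gcc -O2 -c module.c      # optimized release compile, noticeably slower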
> However, these days there aren't any "Turbo" language
> implementations that I'm aware of. Is this because modern hardware
> is so fast that it isn't worth developing compilers and linkers
> optimized for speed? By using proper command line arguments to gcc,
> can you get quasi-Turbo performance compared to using arguments that
> result in highly-optimized code?
In the new CodeGear BDS it's not the compiler that causes the long
turnaround times, it's the framework with all its plugins
(ErrorInsight...). A very different situation from stand-alone
compilers...
> John Ousterhout, the inventor of Tcl/Tk, is the founder of a company
> that produces software that optimizes parallelizing of the commands
> in makefiles, which is one way to speed up the building of large
> software packages. But, this doesn't do anything to the compilers
> themselves.
Right, compilers nowadays are about as fast as possible, but instead
of introducing parallelism into the compiler itself, a modern
(multi-core...) CPU can also run multiple compiler instances at a
time. The bottleneck then becomes disk I/O, which (particularly for C
header files) can be "widened" by an MRU file cache in the OS, as
implemented at least in Windows.
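For example, GNU make can already start several compiler processes
concurrently, e.g.

    make -j4   # run up to four compile jobs in parallel

assuming the makefile's dependencies allow it.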
In 1993 I "tested" the Win3.1 multitasking capabilities, running up to
7 programs at the same time, for reading data from a tape, converting
the raw data, and writing the results to an MO drive. I played the
operator, inserting the next tape or MO cartridge, starting the
conversion programs for the new input, and reorganizing the
directories and filenames prior to writing them to the MO drive. While
all the drives were busy most of the time, I still had enough time
left for the bookkeeping and for a game of patience on the side, and
I'm pretty sure that this wouldn't be any different nowadays :-)
> But, how fast could a compiler be given today's vast amount of virtual
> memory and multiple-core CPUs?
As John stated, nowadays JIT compilers are as powerful as the old
compilers of the Turbo era were. Here the disk I/O is minimized by an
appropriate precompilation, which (hopefully) eliminates the need to
access many disk-based header files during JIT compilation. The JIT
compilers can also run in parallel with the application itself, making
better use of multi-core CPUs.
Just my 0.02$ ;-)
DoDi
[The main reason that the Turbo compilers were fast is that they buffered
most of the file I/O in RAM, including tokenized versions of header files.
Not sure how that would work on today's VM systems where the dividing line
between RAM and disk is so blurry. Doesn't GCC already start by mapping in
the whole source file? -John]