Language design: signal/stream-oriented dialect
From: firstname.lastname@example.org
Date: 12 Jan 2003 17:42:52 -0500
Hi, and thanks for the many tips posted here,
What can I read to learn how to think properly about language design
from a signal processing point of view?
I'd like to extend or create a music/dsp high level language. Problem
is how to get the constructs right for the concepts involved. DSP
programming seems different from other domains, since the basic data
is (audio) signal streams, rather than single variables, arbitrary
data collections (structs), or objects that react to events (even if
most signal sources could be O-O, their output isn't).
See the digression below for more specifics on what it *is* about.
Now, I'm not happy with any of the present scripts/lingos and would
like to fiddle with some new features - like configurable signal
feedback, built-in resampling (iterators parameterized on
interpolation method), user-specified signal types like fft-bins, more
rates (frame-rate of graphics or fft's), etc etc.
Anyway, the question is, are there any examples of language constructs
that I could look at, or is it just a matter of defining the right set
of C++ interfaces? How do I learn to think about the signal/stream
management from the language design point of view? (Just knowing some
DSP and C/C++ doesn't really seem to help here...)
--------- more on music dsp lingos ----------
Several dsp languages are based on the ancient (60's) "generator"
concept, generators being somewhat like C++ functors with internal state,
which produce new data on each call - the C rand() function is
a typical case. These sound generators are designed to be called
at regular/synchronized rates, to generate audio samples.
In this (Music-N) family of languages the generators are written in C
and precompiled; "the language" from user perspective mostly works
as a simple script to allow patching them together.
Initialization and production/patching are specified in the same
user-code line, and the basic "data types" concern the update RATE
of signals rather than data size.
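For concreteness, here is a minimal sketch of such a generator as a
C++ functor with internal state (the names are mine, illustrative
only, not taken from any actual Music-N implementation):

```cpp
#include <cmath>

// A "generator" in the Music-N sense: a functor carrying its own
// state that yields a new sample on each call, much like rand().
class Phasor {
    double phase_ = 0.0;
    double inc_;                           // phase increment per sample
public:
    Phasor(double freq, double srate) : inc_(freq / srate) {}
    double operator()() {                  // one call per sample tick
        double out = phase_;
        phase_ += inc_;
        if (phase_ >= 1.0) phase_ -= 1.0;  // wrap back into [0, 1)
        return out;
    }
};

// The user-visible script layer then mostly just patches such
// precompiled generators together:
class SineOsc {
    Phasor ph_;
public:
    SineOsc(double freq, double srate) : ph_(freq, srate) {}
    double operator()() { return std::sin(6.283185307179586 * ph_()); }
};
```

Calling the functor at a regular, synchronized rate produces the
audio stream sample by sample.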
Maybe some language zoologist would like the weirdness factor of these;
I'd name SAOL as the most recent case - it's part of the MPEG-4 standard -
to get right at the rationale for this kind of language design.
Three variable/stream types in SAOL are declared thus:
ivar init_time_variable; // computed only at musical-event init time
ksig control_rate_variable; // lower-rate control signal
asig audio_rate_variable; // audio sample rate for final output
- standard math operations, the generators, etc., will operate
at the highest rate among the variables used in an expression;
and so on.
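That promotion rule can be modeled very compactly (my own sketch of
the idea, not SAOL's actual spec machinery): treat the three rates as
an ordered enum and take the maximum over an expression's operands:

```cpp
#include <algorithm>

// Illustrative model of SAOL-style rate promotion:
// ivar (init) < ksig (control) < asig (audio), and an expression is
// evaluated at the fastest rate among its operands.
enum Rate { I_RATE = 0, K_RATE = 1, A_RATE = 2 };

Rate combine(Rate a, Rate b) { return std::max(a, b); }
```

So adding a ksig to an asig yields an audio-rate expression, while
combining two init-time values stays at init time.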