XML Parsers (Push and Pull) firstname.lastname@example.org (2002-01-18)
Date: 28 Feb 2002 00:14:22 -0500
Posted-Date: 28 Feb 2002 00:14:22 EST
A small extension can be made to the notion of co-routines as an idea
distinct from notions of push or pull.
In compilation technology there are designed failure points. The
concept is that input is very typically not perfect; we must
anticipate that, and respond with an error message and/or a halt of
the compilation.
The critical issue is how much gets halted when a failure is
detected. For simplicity, say you have a compilation sequence
represented by a command line, such as
compile xml1 xml2 xml3
If we are in the midst of xml2, having succeeded in interpreting (and
even deploying a response to) xml1, should the xml2 problem cause us
to refuse to go on to xml3, or even to gather up and halt any forward
implications from xml1?
If these xml_ compilation units are all distinct files (pages) then
maybe they are independent of one another. On the other hand if the
compilation is invoked with a notion that they are all together either
in sequence or in parallel but related, then a generalized
all-fall-down rule would apply.
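The two policies can be sketched in a few lines of Python. This is a
minimal illustration with hypothetical names (compile_unit and the
"xml2 is the bad input" rule are inventions for the example, not part
of any real tool):

```python
def compile_unit(name):
    """Pretend to compile one unit; 'xml2' is the deliberately bad input."""
    if name == "xml2":
        raise ValueError(f"parse error in {name}")
    return f"{name}: ok"

def compile_all_fall_down(units):
    """Related units: the first failure halts everything that follows."""
    results = []
    for u in units:
        try:
            results.append(compile_unit(u))
        except ValueError as e:
            results.append(f"halted at {u}: {e}")
            break  # refuse to go on to the remaining units
    return results

def compile_isolated(units):
    """Independent units: record the error and keep going."""
    results = []
    for u in units:
        try:
            results.append(compile_unit(u))
        except ValueError as e:
            results.append(f"{u}: failed ({e})")
    return results
```

Run against the command line above, compile_all_fall_down never
reaches xml3, while compile_isolated still compiles it.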
All-fall-down is fairly easy to deploy in push or pull system
design. Selective isolation of the error is hard in both push and
pull. Co-routine system design can provide some additional flexibility
in isolating the implication of error processing.
The essential concept is that a control signal is distinct from the
data content (compilation unit content).
This kind of distinction can be applied to the external manifestation
of separate compilation units, and _potentially_ to the internal
sub-units (xml text or not).
In OS terminology the scanners and parsers become daemons (although
that is just conceptual). When a compilation fails, the daemons do
_not_ fall down. By extension the instantiation of a processor of a
compilation unit can stay up even if an individual element fails.
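A Python generator makes a convenient stand-in for such a daemon; this
is only a sketch of the "stay up across failures" idea, with made-up
unit names (any unit containing "bad" is treated as the failing
element):

```python
def parser_daemon():
    """A generator-based co-routine that stays 'up' across failed units.
    An exception raised while handling one unit is caught inside the
    loop, so the co-routine survives to handle the next unit."""
    results = []
    while True:
        unit = yield results
        try:
            if "bad" in unit:
                raise ValueError("element failed")
            results = [f"parsed {unit}"]
        except ValueError as e:
            results = [f"error in {unit}: {e}"]  # report, but do not fall down

daemon = parser_daemon()
next(daemon)  # prime the co-routine to the first yield
```

After a failing send() the daemon is still alive and parses the next
unit normally, which is exactly the isolation a push- or pull-driven
call stack makes awkward.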
So, generally, separating the control flow from the data flow
(parsable text as data flow) in design efforts can afford programming
solution opportunities that are characteristically different from push
or pull design possibilities.
So, ... the idea would be that the scanner, the parser, and the
responders (intermediate code emitters, final code emitters, screen
presenters, database handlers) are all at the same level with no
notion at all of who is on top. They _send_messages_ to one
another. Messages traverse at least two pathways: a data path and a
control path.
Message handling implies a distinct function to manage the traffic, but
it is not necessary to have either a network or an object repository
to do that. The main ideas are to isolate control flow from data flow,
and to separate the large scale notion of compiler-execution stability
from the usually less significant notion of error processing.
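The two-pathway idea can be modeled without a network or an object
repository; here is a single-threaded sketch using two plain queues
(the names scanner, parser, data_path, and control_path are
illustrative, not from any real system):

```python
import queue

# The two pathways, modeled as separate queues so control signals
# never mix with parsable text.
data_path = queue.Queue()
control_path = queue.Queue()

def send(msg, kind):
    """Route a message onto the control path or the data path."""
    (control_path if kind == "control" else data_path).put(msg)

def scanner(text):
    """Emit tokens on the data path, then a 'done' signal on the control path."""
    for token in text.split():
        send(token, "data")
    send(("scanner", "done"), "control")

def parser():
    """Drain tokens from the data path; only control messages end the run."""
    parsed = []
    done = False
    while not done:
        while not data_path.empty():
            parsed.append(data_path.get())
        if not control_path.empty():
            _sender, signal = control_path.get()
            done = (signal == "done")
    return parsed
```

Because shutdown arrives as a control message rather than as an
in-band sentinel token, an error signal could travel the same control
path without disturbing the data already queued.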