Related articles
Data structures for efficient overload resolution stephenhorne100@aol.com (Steve Horne) (2008-03-11)
Re: Data structures for efficient overload resolution barry.j.kelly@gmail.com (Barry Kelly) (2008-03-14)
Re: Data structures for efficient overload resolution stephenhorne100@aol.com (Steve Horne) (2008-03-15)
Re: Data structures for efficient overload resolution barry.j.kelly@gmail.com (Barry Kelly) (2008-03-17)
Re: Data structures for efficient overload resolution cdodd@acm.org (Chris Dodd) (2008-03-17)
Re: Data structures for efficient overload resolution stephenhorne100@aol.com (Steve Horne) (2008-03-18)
Re: Data structures for efficient overload resolution stephenhorne100@aol.com (Steve Horne) (2008-03-18)
Re: Data structures for efficient overload resolution gneuner2@comcast.net (George Neuner) (2008-03-18)
Re: Data structures for efficient overload resolution stephenhorne100@aol.com (Steve Horne) (2008-03-22)
From: Steve Horne <stephenhorne100@aol.com>
Newsgroups: comp.compilers
Date: Sat, 15 Mar 2008 13:36:19 -0700 (PDT)
Organization: Compilers Central
References: 08-03-049 08-03-057
Keywords: types
Posted-Date: 17 Mar 2008 00:07:00 EDT
On Mar 14, 2:42 pm, Barry Kelly <barry.j.ke...@gmail.com> wrote:
> There may be more argument types than there are
> parameter types (e.g. the 'nil' or 'null' type, in such languages that
> support it, usually cannot be a parameter type).
Dealing with this is one of the reasons I wrote my utility rather than
just using treecc. It's not too much of a hassle. I support three
cases. The default is that null parameters trigger a standard error
handler (normally an exception throw). If the parameter is flagged as
optional, null parameters are valid and handled either using a
specific null-handling implementation or by an implementation flagged
as able to handle both null and non-null values.
Strictly, the last two cases are not separate - even a null-value-only
handling implementation may cover only a subset of cases (e.g. due to
other parameters), and the resolver only requires that every case be
unambiguous - not that all implementations handle nulls the same way.
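To make that concrete, here's a rough C++ sketch of the per-parameter
screening as I think of it. The names (ParamSpec, ImplNullability and
so on) are made up for illustration - they aren't taken from the real
generated code:

    #include <stdexcept>

    struct Node { virtual ~Node() {} };

    struct ParamSpec { bool optional; };   // are null arguments allowed at all?

    enum ImplNullability { NON_NULL_ONLY, NULL_ONLY, HANDLES_BOTH };

    // Can this implementation accept the argument supplied for one parameter?
    bool candidate_accepts(const Node* arg, const ParamSpec& spec,
                           ImplNullability impl)
    {
        if (arg == 0)
        {
            if (!spec.optional)
                throw std::invalid_argument("null parameter");  // standard error handler
            return impl != NON_NULL_ONLY;   // a null-only or both-flavours handler will do
        }
        return impl != NULL_ONLY;           // non-null needs a non-null-capable handler
    }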
Before continuing, I should say that the basic idea that prompted me
to post earlier is one that I'm dropping as just far too silly to
pursue even out of curiosity. The configurable input-handling idea can
certainly be considered a multiple-dispatch problem, but I don't need
to change my 'class hierarchies' at run time. I might have multiple
viewports, but for input translation I only care about the type of
viewport, for instance, where all the types will have been decided at
compile time. The input mappings can be changed at run-time, but only
during special configuration-changing operations. That means that
multiple dispatch (as opposed to compile-time overload resolution) is
an even closer match - when the configuration is applied, I can
precalculate a decision tree just as I do for my existing utility.
Even so, I'm still interested in more efficient overload resolution
strategies, so...
> Overload resolution is then reduced to creating and maintaining a list
> of the "max" candidates, i.e. candidates which compare greater than any
> previously seen candidate and equal to one another. If at the end of the
> search, there are multiple candidates, then the result is ambiguous; no
> candidates, not compatible; a single candidate, job done.
>
> If the overload type ordering is sane, then this process should be
> linear in the number of candidates (and arguments). By sane, I mean that
> any candidate which compares greater than any previous candidate should
> also compare greater than any candidate equal to the previous candidate,
> preventing a potential quadratic mini-explosion.
I'm a little confused here.
When my naive approach resolves one particular call, it checks every
single candidate. It looks linear at a glance, but it's not. An
apparent ambiguity can arise early in the search only to be resolved
later. For example, given a hierarchy with parent class p and child
class c, consider the following sequence of candidates...
1 (p, c)
2 (c, p)
3 (c, c)
When resolving for the call (c, c), candidates 1 and 2 will both
appear valid with neither being more specific than the other.
Therefore, when assessing candidate 3, it must be compared with both
of those earlier candidates. Only then can they be
eliminated. Comparisons must be done both with the call being resolved
and with those previously identified candidates that have yet to be
eliminated. Since the number of previously identified candidates can
grow linearly with the number of candidates checked so far, the
overall worst-case complexity is quadratic.
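In code terms, the naive loop looks roughly like this (a sketch only -
'applicable' and 'more_specific' stand in for the real tests, and
TypeId for whatever identifies a run-time type):

    #include <cstddef>
    #include <vector>

    typedef int TypeId;
    struct Candidate { std::vector<TypeId> params; };

    bool applicable(const Candidate&, const std::vector<TypeId>& call);  // assumed
    bool more_specific(const Candidate&, const Candidate&);              // assumed

    std::vector<const Candidate*>
    resolve(const std::vector<Candidate>& candidates,
            const std::vector<TypeId>& call)
    {
        std::vector<const Candidate*> best;   // applicable, not yet eliminated
        for (std::size_t i = 0; i < candidates.size(); ++i)
        {
            const Candidate& c = candidates[i];
            if (!applicable(c, call)) continue;
            bool dominated = false;
            // Each new candidate is compared with every retained one; since
            // 'best' can grow linearly, the worst case is quadratic.
            for (std::vector<const Candidate*>::iterator it = best.begin();
                 it != best.end(); )
            {
                if (more_specific(**it, c)) { dominated = true; break; }
                if (more_specific(c, **it)) it = best.erase(it);
                else ++it;
            }
            if (!dominated) best.push_back(&c);
        }
        return best;   // empty = no match, one = resolved, more = ambiguous
    }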
The list of previously identified not-yet-eliminated candidates looks
like your max-candidates list. Is that right? I'm confused because I
see quadratic time in something you're describing as linear, unless
I do something insane. Perhaps this means that I've got something
insane locked into my mindset that I can't see past?
In my utility, the explosion issue is not this quadratic thing; it
arises because I'm evaluating every case (every possible call) prior
to generating a decision table. The number of cases grows exponentially
with the number of parameters, of course, and polynomially with the
number of possible run-time types for each parameter. And even that's
just the detonator - the real bang happens when the rule induction
tries to make sense of all those cases in order to build the decision
tree.
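Just to put a number on that, the case count is the product of the
number of possible run-time types of each parameter - e.g. 4
parameters with 10 types each is already 10,000 cases. In code
(illustration only, not from the utility):

    #include <cstddef>
    #include <vector>

    // Number of cases = product of the possible run-time types of each parameter.
    std::size_t case_count(const std::vector<std::size_t>& types_per_param)
    {
        std::size_t n = 1;
        for (std::size_t i = 0; i < types_per_param.size(); ++i)
            n *= types_per_param[i];
        return n;   // e.g. 4 parameters x 10 types each -> 10000
    }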
I'm now certain that my visions of adapting an ordered-key data
structure to the job were based on confusing some half-remembered
distinct meanings of the term 'partially ordered', and the idea of
combining that with R-trees is just nonsense. I'm still thinking
through some ideas based on organising the set of candidates in
advance to avoid having to check them all for each call resolved, but
I think I'll make sure the ideas are coherent before discussing
them ;-)