From: Matthias Blume <find@my.address.elsewhere>
Newsgroups: comp.compilers
Date: Fri, 04 Apr 2008 00:21:38 -0500
Organization: private
References: 08-03-105 08-03-109 08-04-003 08-04-009
Keywords: code, optimize
Posted-Date: 04 Apr 2008 12:12:13 EDT
Chris F Clark <cfc@shell01.TheWorld.com> writes:
> Thus, dynamic programming is often a backtracking approach: you pick a
> solution for each of the problems in some order (often a greedy
> algorithm is used to come up with the initial solution) and compute
> the final score. Then, you revisit the items in the opposite order
> from which you originally picked them and try a different value for
> each one, until you have the best result. It is useful to think of
> the choices as a tree, where each choice made constrains subsequent
> choices (i.e. when you put one block in a region of memory, you are
> then constrained not to put another block in that same memory), and
> you are visiting the tree of all possible choices in an organized
> fashion. Now, as I recall (and I haven't done any significant dynamic
> programming recently), there is usually a method to prune unfruitful
> subtree explorations, i.e. a way to determine whether changing some
> value cannot possibly lead to a better solution than the one that has
> already been calculated, but that may or may not be a required
> feature of the method.
I think you are confusing dynamic programming with branch-and-bound
methods. (What you are describing sounds more like branch-and-bound.)
Dynamic programming works for problems where there is extensive
overlap between sub-problems, i.e., where the number of unique
sub-problems is much smaller than the size of the naive search tree.
(In other words, a naive search would visit many sub-problems over and
over again.) One can turn such a naive search into an efficient one
by caching in a table the results of sub-problems that have already
been solved (memoization). A more efficient way of doing the same
thing is to fill this table bottom-up -- and that is what dynamic
programming does.
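A minimal sketch in C (Fibonacci, chosen only because it is the
smallest example with heavily overlapping sub-problems):

    #include <stdio.h>

    /* Naive search: re-solves the same sub-problems over and over,
       so the call tree grows exponentially with n. */
    long long fib_naive(int n)
    {
        if (n < 2) return n;
        return fib_naive(n - 1) + fib_naive(n - 2);
    }

    /* Bottom-up dynamic programming: each of the n+1 unique
       sub-problems is solved exactly once, in O(n) total time. */
    long long fib_dp(int n)
    {
        long long table[64];                  /* enough for n <= 63 */
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++)
            table[i] = table[i - 1] + table[i - 2];
        return table[n];
    }

    int main(void)
    {
        printf("%lld\n", fib_dp(45));  /* instant; fib_naive(45) is not */
        return 0;
    }

Note that no backtracking is involved: once a table entry is filled,
it is never revisited.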
> It is worth noting that, because of the required backtracking,
> dynamic programming solutions usually grow exponentially with input
> problem size.
No, they don't. Dynamic programming algorithms run in time roughly
O(p * s), where p is the number of unique sub-problems and s is the
"size" of each sub-problem, i.e., the time required to solve it given
the solutions to its own sub-problems. For many typical problems,
dynamic programming yields polynomial-time algorithms.
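For a concrete instance, take the classic edit-distance computation on
strings of lengths n and m: there are p = (n+1)*(m+1) unique
sub-problems, each solvable in s = O(1) time from three neighboring
table entries, so the whole thing runs in O(n*m) -- polynomial, not
exponential.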
Branch-and-bound, on the other hand, usually does result in algorithms
with exponential worst-case running time.
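For contrast, here is a minimal branch-and-bound sketch (my own toy
0/1 knapsack, with a deliberately crude bound: "pretend we can still
take every remaining item"):

    #include <stdio.h>

    static int n = 4;
    static int weight[] = { 2, 3, 4, 5 };
    static int value[]  = { 3, 4, 5, 6 };
    static int capacity = 8;
    static int best = 0;

    /* Explore the tree of include/exclude choices for item i.
       w, v: weight and value packed so far; rest: total value of
       the items not yet decided (an optimistic upper bound). */
    static void branch(int i, int w, int v, int rest)
    {
        if (v > best) best = v;         /* every node is a feasible packing */
        if (i == n) return;
        if (v + rest <= best) return;   /* bound: prune hopeless subtrees */
        if (w + weight[i] <= capacity)  /* branch 1: include item i */
            branch(i + 1, w + weight[i], v + value[i], rest - value[i]);
        branch(i + 1, w, v, rest - value[i]);  /* branch 2: exclude it */
    }

    int main(void)
    {
        int rest = 0;
        for (int i = 0; i < n; i++)
            rest += value[i];
        branch(0, 0, 0, rest);
        printf("best value = %d\n", best);  /* 10: items of weight 3 and 5 */
        return 0;
    }

When the bound never manages to prune anything, the search degenerates
into visiting all 2^n include/exclude combinations -- hence the
exponential worst case.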
Matthias