
Related articles:
[3 earlier articles]
Re: Prediction of local code modifications max@gustavus.edu (Max Hailperin) (2008-03-28)
Re: Prediction of local code modifications gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-03-29)
Re: Prediction of local code modifications preston.briggs@gmail.com (preston.briggs@gmail.com) (2008-03-29)
Re: Prediction of local code modifications plfriko@yahoo.de (Tim Frink) (2008-04-01)
Re: Prediction of local code modifications preston.briggs@gmail.com (preston.briggs@gmail.com) (2008-04-01)
Re: Prediction of local code modifications max@gustavus.edu (Max Hailperin) (2008-04-02)
Re: Prediction of local code modifications cfc@shell01.TheWorld.com (Chris F Clark) (2008-04-02)
Re: Prediction of local code modifications gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-04-03)
Re: Prediction of local code modifications max@gustavus.edu (Max Hailperin) (2008-04-03)
Re: Prediction of local code modifications plfriko@yahoo.de (Tim Frink) (2008-04-03)
Re: Prediction of local code modifications find@my.address.elsewhere (Matthias Blume) (2008-04-04)
Re: Prediction of local code modifications gneuner2@comcast.net (George Neuner) (2008-04-04)
Re: Prediction of local code modifications cfc@shell01.TheWorld.com (Chris F Clark) (2008-04-04)
[4 later articles]

From: Chris F Clark <cfc@shell01.TheWorld.com>
Newsgroups: comp.compilers
Date: Wed, 02 Apr 2008 11:24:17 -0400
Organization: The World Public Access UNIX, Brookline, MA
References: 08-03-105 08-03-109 08-04-003
Keywords: code, optimize
Posted-Date: 02 Apr 2008 22:54:48 EDT

Tim Frink <plfriko@yahoo.de> writes:

> I'm not sure if dynamic programming is an approach that I can apply to
> my problem. If I understand the idea of dynamic programming
> correctly, it exploits the idea of "overlapping subproblems" and
> "memoization", i.e. it is assumed that the problem can be divided into
> independent subproblems which can be solved separately, and then their
> optimal solutions can be used to construct the global optimal solution.
>
> For my problem with the alignment I could divide the code into smaller
> subproblems where I could try to find an optimal local
> solution. However, these subproblems are not independent. When I move
> some code locally in one place (let's say that's the region of the
> first subproblem), then this might possibly also influence some
> following code in another region that I consider as a further
> subproblem. Thus, calculating separate optimal local solutions and
> then combining them will not work for me.

Actually, that is exactly the kind of problem dynamic programming is
designed to solve. The only criterion for dynamic programming is that
once you have made the choices for all the other subproblems that lead
to the optimal solution, you can then pick the value for the last
subproblem that leads to the optimal solution (and so on, working
backwards). Dynamic programming does not require the choices to be
independent; in fact, it is designed to solve the case where the
choices are interdependent. If the choices are independent, simpler
solutions can be used.
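To make the interdependence point concrete, here is a minimal memoized
sketch of a block-layout problem like the one Tim describes. All the
names, block sizes, and costs are invented for illustration (they are
not from Tim's actual problem); the key idea is that the only way
earlier padding choices influence later blocks is through the running
offset, so that offset is the entire subproblem state:

```python
from functools import lru_cache

# Illustrative data (all of these numbers are assumptions, not from the post):
BLOCK_SIZES = [3, 5, 2, 3]   # code blocks to lay out, in order
ALIGN = 4                    # alignment quantum, e.g. a fetch-line size
PAD_COST = 1                 # cost per byte of padding inserted
CROSS_COST = 10              # cost when a short block straddles a boundary

@lru_cache(maxsize=None)
def best(i, offset):
    """Minimal cost of laying out blocks i.. given the current offset
    modulo ALIGN.  The offset is the only way earlier choices influence
    later ones, so (i, offset) is the entire DP state."""
    if i == len(BLOCK_SIZES):
        return 0
    size = BLOCK_SIZES[i]
    result = None
    for pad in range(ALIGN):              # choice: padding before block i
        start = (offset + pad) % ALIGN
        # Only blocks that fit in one alignment unit can avoid crossing,
        # so the penalty applies only to them.
        crosses = size <= ALIGN and start + size > ALIGN
        cost = (pad * PAD_COST
                + (CROSS_COST if crosses else 0)
                + best(i + 1, (start + size) % ALIGN))
        if result is None or cost < result:
            result = cost
    return result

print(best(0, 0))  # → 2
```

Even though every padding choice affects all the blocks that follow, the
memoized table stays small because later blocks only care about the
offset, not about which choices produced it.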

Thus, dynamic programming is often a backtracking approach: you pick a
solution for each of the subproblems in some order (often a greedy
algorithm is used to come up with the initial solution) and compute
the final score. Then, you revisit the items in the opposite order
from the one in which you originally picked them and try a different
value for each one, until you have the best result. It is useful to
think of the choices as a tree, where each choice made constrains the
subsequent choices (i.e. when you put one block in a region of memory,
you are then constrained not to put another block in that same
memory), and you are visiting the tree of all possible choices in an
organized fashion. Now, as I recall (and I haven't done any
significant dynamic programming recently), there is usually a method
to prune unfruitful subtree explorations, i.e. a way to determine that
changing some choice cannot possibly lead to a better solution than
the one that has already been calculated, but that may or may not be a
required feature of the method.
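That pruned tree walk can be sketched as follows. Everything here is a
hypothetical illustration (the function names and the toy cost are mine,
not from the thread): one decision is made per tree level, and any branch
whose partial cost already matches or exceeds the best complete solution
found so far is abandoned. The pruning is sound as long as adding more
decisions can only increase the cost:

```python
def branch_and_bound(choices_per_decision, cost_of):
    """choices_per_decision: one list of options per decision.
    cost_of: maps a (possibly partial) list of choices to a cost that
    can only grow as more choices are appended."""
    best = [float("inf"), None]          # best cost and assignment so far

    def explore(assignment):
        partial = cost_of(assignment)
        if partial >= best[0]:
            return                        # prune: this subtree cannot win
        if len(assignment) == len(choices_per_decision):
            best[0], best[1] = partial, list(assignment)
            return
        for option in choices_per_decision[len(assignment)]:
            assignment.append(option)     # descend one tree level
            explore(assignment)
            assignment.pop()              # backtrack

    explore([])
    return best[0], best[1]

# Toy interdependent cost: each choice's value, plus a penalty whenever
# two adjacent decisions pick the same value (so choices interact).
def cost(a):
    return sum(a) + sum(5 for x, y in zip(a, a[1:]) if x == y)

print(branch_and_bound([[0, 1, 2]] * 3, cost))  # → (1, [0, 1, 0])
```

The monotone-cost requirement is exactly what makes the early `return`
safe: a partial assignment that is already as expensive as the incumbent
can never lead to something cheaper.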

It is worth noting that, because of the required backtracking, dynamic
programming solutions usually grow exponentially with input problem
size. That is, if your problem adds one more binary decision to the
previous problem, the new problem takes roughly twice as long to
solve, because you have doubled the size of the tree you have to
explore to find the correct solution.

Hope this helps,

-Chris

******************************************************************************
Chris Clark                  Internet: christopher.f.clark@compiler-resources.com
Compiler Resources, Inc.     or:       compres@world.std.com
23 Bailey Rd                 Web Site: http://world.std.com/~compres
Berlin, MA 01503             voice:    (508) 435-5016
USA                          fax:      (978) 838-0263 (24 hours)
