
Related articles
Superscalars and instruction scheduling pertti.kellomaki@tut.fi (Pertti Kellomaki) (2008-11-13)
Re: Superscalars and instruction scheduling rnsanchez@wait4.org (Ricardo Nabinger Sanchez) (2008-11-17)
Re: Superscalars and instruction scheduling SidTouati@inria.fr (Sid Touati) (2008-11-18)
Re: Superscalars and instruction scheduling mayan@bestweb.net (Mayan Moudgill) (2009-02-28)
Re: Superscalars and instruction scheduling SidTouati@inria.fr (Touati Sid) (2009-03-04)

From: Touati Sid <SidTouati@inria.fr>
Newsgroups: comp.compilers
Date: Wed, 04 Mar 2009 16:40:34 +0100
Organization: Universite de Versailles Saint-Quentin-en-Yvelines
References: 08-11-053 08-11-077 08-11-084 09-03-008
Keywords: architecture, performance
Posted-Date: 05 Mar 2009 06:03:59 EST

Mayan Moudgill wrote:

> Not exactly true: see "Validation of Turandot, a fast processor model
> for microarchitecture exploration", IPCCC 1999.
>
> The Turandot simulator was within 5% for the Power-4
>

Unfortunately, focusing only on the error ratio is not sufficient to
guarantee that a simulator is accurate. This is a common mistake in
architecture simulation. Indeed, few people care about the accuracy of
a simulator, because architectures change quickly. Even when accuracy
is needed, few people ask for a statistical guarantee. People seem
satisfied if they read that the error ratio of a simulator is "low"
enough, but nobody can guarantee that this error ratio is reproducible
in practice.

For instance, saying that a simulator has an error ratio within 5%, 10%
or 1% of a real architecture is meaningless, because:

- This error ratio depends on the chosen benchmarks and on the chosen
data input. We can choose benchmarks and data that minimise the error
ratio, or we can design a simulator for a chosen benchmark in the first
place. We have no way to select a representative set of programs and
data inputs, simply because we do not know how to characterise
programs. If you focus on public benchmarks like SPEC, you will very
likely end up with experimental data that do not reflect real-world
programs.

- The measured error ratio depends on the experimental methodology and
on the experimental environment. Many hidden factors alter the real
performance of a program, and so far we do not know how to capture or
exactly measure these hidden influencing factors.

- If the errors are not well distributed around a zero mean, then the
simulator has a bias, and the reported error ratio is meaningless.

- Sometimes the error ratio cannot be measured at all, because no real
machine exists yet.
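The bias point above can be illustrated with a small sketch. The error
figures below are purely hypothetical (not taken from any real
simulator): two simulators with the same mean absolute error, where one
scatters around zero and the other systematically overestimates.

```python
# Hypothetical per-benchmark relative errors (simulated vs. measured
# cycle counts, as fractions). Both simulators have a 4% mean absolute
# error, but simulator B always overestimates. Illustrative numbers only.
sim_a = [0.04, -0.04, 0.04, -0.04]   # unbiased: errors scatter around zero
sim_b = [0.04, 0.04, 0.04, 0.04]     # biased: overestimates on every run

def mean_abs(errs):
    """Mean absolute error: the figure usually reported in papers."""
    return sum(abs(e) for e in errs) / len(errs)

def mean_signed(errs):
    """Mean signed error: far from zero means a systematic bias."""
    return sum(errs) / len(errs)

for name, errs in (("A", sim_a), ("B", sim_b)):
    print(f"simulator {name}: mean |error| = {mean_abs(errs):.0%}, "
          f"mean signed error = {mean_signed(errs):+.0%}")
```

Both simulators would be reported as "within 4%", yet only the signed
mean reveals that simulator B is biased and its predictions cannot be
trusted to average out.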

Raj Jain's book (The Art of Computer Systems Performance Analysis)
explains some of these points clearly.

Anyway, I trust a simulator of a Turing machine, or a simulator of a
sequential, non-pipelined, non-cached processor. Beyond that, I never
trust the precision of a superscalar simulator, nor its reported error
ratio. Still, this does not prevent us from using them, because we have
no choice.
