Re: any performance profiling tools ??

From: Jason Patterson <jason@reflections.com.au>
Newsgroups: comp.compilers,comp.arch,comp.benchmarks
Date: 3 Oct 1997 12:19:57 -0400
Organization: Digital Reflections
References: 97-09-084 97-09-119 97-09-126 97-10-010
Keywords: performance, architecture

Steve Atkins wrote:
> Nothing at all to do with SPARC though, which I think is where
> this started. Anyone from Sun out there?


Apologies for joining this thread so late...


Why not use Sun's Shade simulator? It's pretty fast, not much slower
than instrumented code, because it uses dynamic compilation (JIT).


Shade is a simulator, so the executables being run don't need to be
modified/instrumented, which can be important: adding instrumentation
code with an ATOM/QPT-style tool affects things like I-cache results.


To quote http://www.sun.com/microelectronics/shade/ ...


    Shade is a framework for developing instruction tracing and simulation
    tools. Shade simulates the execution of an application and provides a
    programming interface that allows the user to collect arbitrary data
    while the application runs. The interface makes it easy to write tools
    that collect and process address trace information and other related
    data. The Shade kit also contains pre-written tools to perform common
    tasks.


In practice you simply write data collection code appropriate for your
requirements, using Shade's library to do the bulk of the work and all
the JIT stuff, then run real SPARC executables on your new simulator to
gather your results. A cache hierarchy simulator is one of the pre-
written tools bundled with Shade (and is easy to write yourself if you
need something slightly different).
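

To give a feel for the kind of data collection code involved, here's a
minimal sketch of a direct-mapped cache model in C. It's deliberately
Shade-independent: the stdin address stream and the cache parameters
are just assumptions standing in for Shade's trace buffer interface,
not the bundled tool itself.

    #include <stdio.h>
    #include <stdint.h>

    #define LINE_SIZE 32      /* bytes per cache line (assumed)      */
    #define NUM_LINES 1024    /* 32 KB direct-mapped cache (assumed) */

    static uint64_t tags[NUM_LINES];
    static int      valid[NUM_LINES];
    static unsigned long hits, misses;

    /* Feed one referenced address from the trace into the cache model. */
    static void simulate_ref(uint64_t addr)
    {
        uint64_t line = addr / LINE_SIZE;
        unsigned idx  = line % NUM_LINES;

        if (valid[idx] && tags[idx] == line) {
            hits++;
        } else {
            misses++;
            valid[idx] = 1;
            tags[idx]  = line;
        }
    }

    int main(void)
    {
        /* Hypothetical trace source: one hex address per line on stdin.
           A real Shade analyzer would instead walk the trace records the
           Shade library hands back while the application runs. */
        unsigned long long addr;
        while (scanf("%llx", &addr) == 1)
            simulate_ref((uint64_t)addr);

        if (hits + misses > 0)
            printf("refs: %lu  misses: %lu  miss rate: %.2f%%\n",
                   hits + misses, misses,
                   100.0 * misses / (hits + misses));
        return 0;
    }

The interesting part is only ever the few lines in simulate_ref();
everything around it (decoding instructions, generating the address
trace, keeping the application running at speed) is what Shade's
library takes off your hands.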


Of course, what you *really* want is intelligent access to the CPU's
performance counters (assuming it has them). This would give you
essentially full execution speed and would require very little effort
on your part.
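

The interface to the counters is entirely platform-specific, so purely
as an illustration of the technique (this is not the SPARC/Solaris
interface), here's what counting retired instructions around a piece
of code looks like through Linux's perf_event_open(2) system call:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    /* perf_event_open has no libc wrapper; invoke it via syscall(2). */
    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type           = PERF_TYPE_HARDWARE;
        attr.size           = sizeof(attr);
        attr.config         = PERF_COUNT_HW_INSTRUCTIONS; /* retired insns */
        attr.disabled       = 1;
        attr.exclude_kernel = 1;

        int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
        if (fd == -1) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        /* ... the code you actually want to measure goes here ... */
        volatile long sum = 0;
        for (long i = 0; i < 1000000; i++)
            sum += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long count;
        read(fd, &count, sizeof(count));
        printf("instructions retired: %lld\n", count);
        close(fd);
        return 0;
    }

The point is that the counted code runs at full hardware speed; the
only cost is a couple of system calls around the region of interest.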


Apparently this is how Digital's Continuous Profiling Infrastructure
works, and since it's done within the kernel you avoid the side effects
of adding instrumentation code to the applications. This looks ideal
for your type of usage, although Shade can do a lot more (e.g. simulate
future machine designs).
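

For the flavour of it: profilers of that style take periodic samples of
where the program is executing, driven by counter-overflow interrupts
handled in the kernel. Here's a deliberately crude user-level sketch of
the same sampling idea, with SIGPROF/setitimer as an assumed stand-in
for the counter interrupts (the real thing records the interrupted PC
and does all of this in the kernel):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t samples;

    /* Stand-in for the counter-overflow interrupt handler: a real
       kernel-level profiler would bucket the interrupted PC here
       rather than just counting samples. */
    static void on_sigprof(int sig)
    {
        (void)sig;
        samples++;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigprof;
        sa.sa_flags   = SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGPROF, &sa, NULL);

        /* Sample every 10 ms of CPU time used by this process. */
        struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
        setitimer(ITIMER_PROF, &it, NULL);

        /* Some workload to be profiled. */
        volatile double x = 0.0;
        for (long i = 0; i < 50000000; i++)
            x += i * 0.5;

        printf("collected %ld samples\n", (long)samples);
        return 0;
    }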




JASON PATTERSON
  jason@reflections.com.au http://www.reflections.com.au/~jason
--

