From: | Ivan Godard <ivan@ootbcomp.com> |
Newsgroups: | comp.compilers |
Date: | Fri, 20 Jun 2014 11:52:54 -0700 |
Organization: | A noiseless patient Spider |
References: | 14-06-003 14-06-004 14-06-008 14-06-011 |
Keywords: | architecture |
Posted-Date: | 20 Jun 2014 21:17:31 EDT |
On 6/18/2014 1:44 AM, George Neuner wrote:
> On Mon, 16 Jun 2014 12:11:32 +0100, "Derek M. Jones"
> <derek@_NOSPAM_knosof.co.uk> wrote:
>
>> A surprising percentage of power is consumed when a signal changes from
>> 0 to 1 or from 1 to 0. So the idea is to arrange instruction order to
>> minimise the number of transitions at each bit position in an
>> instruction sequence.
>>
>> [CMOS only uses power when it switches, so I'd think approximately all
>> of the power would be consumed when a signal changes. The idea of
>> Gray coded instruction streams is weirdly appealing. -John]
>
> I'm mostly a software person, so this may be way off base.
> However ...
>
> For Gray coding instructions to be effective, most instructions would
> have to offer multiple choices for their encoding. Certainly for the
> most commonly used instructions and probably there would need to be at
> least several choices of encoding.
>
> ISTM then that the decoder becomes n-ways wider at each decision step,
> and somewhat deeper [though maybe not twice] due to requiring
> additional combining/filtering steps. So fetch to execute latency
> would suffer and materially affect [already problematic] branching
> performance.
True if you are parsing instruction-by-instruction, but there are ways
around that; see millcomputing.com/docs/encoding. In a wide-issue
machine (where the instruction contains several operations to issue in
parallel, like a VLIW, or any of several other less common schemes), it
is possible to group the individual operations by natural encoding size.
The problem then reduces to isolating the blocks, and the cost of that
is amortised over the contained operations rather than being repeated
for each. This is not frequency-based Gray coding, but it nevertheless
produces a much more compact encoding than the usual while at the same
time being cheap to parse; experiment (unpublished) shows the encoding
to be within a bit of Gray code.
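A toy sketch (Python; the 8-bit operation words are invented for
illustration, not any real ISA) of the transition-minimising idea from
earlier in the thread: count the 0<->1 flips between consecutive
instruction words, then greedily order independent operations so each
successive word differs from its predecessor in as few bit positions as
possible.

```python
def bit_transitions(stream):
    """Total number of 0<->1 flips summed over consecutive word pairs."""
    return sum(bin(a ^ b).count("1") for a, b in zip(stream, stream[1:]))

def greedy_order(ops):
    """Reorder independent ops: always pick the remaining word with the
    smallest Hamming distance to the last one emitted."""
    remaining = list(ops)
    out = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda w: bin(out[-1] ^ w).count("1"))
        remaining.remove(nxt)
        out.append(nxt)
    return out

# four hypothetical 8-bit operation encodings
ops = [0b1010_0001, 0b0101_1110, 0b1010_0011, 0b0101_1100]
print(bit_transitions(ops))               # original order: 23 flips
print(bit_transitions(greedy_order(ops)))  # reordered: 9 flips
```

Of course this only helps where the scheduler is free to permute the
operations; dependences constrain the real problem considerably.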
> Moreover, I would think that to make this work best you'd also need to
> use fixed sized instructions externally so that there is no additional
> pre-decode penalties for locating instruction boundaries, aligning
> bits, needing extra fetches for instructions that span fetch lines,
> etc. You want to be able to just shift and drop each instruction
> directly into the decoder. But fixed size instructions would put
> additional fetch pressures on the memory system.
The citation above addresses these issues too.
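A rough sketch (Python; the header layout is invented for illustration
and is not Mill's actual format) of the amortised-boundary idea: when a
bundle header states how many operations of each fixed width follow,
the decoder derives every operation boundary from the header alone, in
one pass, rather than parsing operation by operation.

```python
WIDTHS = (8, 16, 32)  # hypothetical operation sizes in bits, smallest first

def parse_bundle(header, payload_bits):
    """header: per-width operation counts, one per entry of WIDTHS.
    Returns (bit_offset, width) for each operation in the bundle.
    All boundaries fall out of a single pass over the counts, so the
    isolation cost is paid once per bundle, not once per operation."""
    offsets, pos = [], 0
    for width, count in zip(WIDTHS, header):
        for _ in range(count):
            offsets.append((pos, width))
            pos += width
    assert pos <= payload_bits, "bundle overruns its payload"
    return offsets

# a bundle holding two 8-bit ops, one 16-bit op, and one 32-bit op
print(parse_bundle((2, 1, 1), 64))
# → [(0, 8), (8, 8), (16, 16), (32, 32)]
```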