Thread:
  Unum numbers, Hans-Peter Diettrich <DrDiettrich1@netscape.net> (2018-04-12)
  Re: Unum numbers, Joshua Cranmer 🐧 <Pidgeot18@verizon.invalid> (2018-04-12)
  Re: Unum numbers, Hans-Peter Diettrich <DrDiettrich1@netscape.net> (2018-04-18)
Date: Thu, 12 Apr 2018 23:13:22 -0400
Organization: A noiseless patient Spider
Posted-Date: 13 Apr 2018 13:15:08 EDT
On 4/12/2018 4:29 AM, Hans-Peter Diettrich wrote:
> A friend just pointed me to the Universal Number formats, which may
> have been the subject of the IEEE 754 thread. Before I post my own
> thoughts here, I'd like to hear more about the practice of Unum
> numbers. In theory, the Unum formats try to increase precision
> (around 1.0) and range (towards 0 and Inf). What are these attempts
> worth in real life?
There are a few different unum proposals floating around. The original
proposals were variable-width, which tends to be a spectacularly bad
idea. The argument for variable width, as I understand it, is that
memory traffic is where you pay the biggest energy cost; but in
practice a variable-width format forces you either to pad values for
easy address calculation, to store indexes for random access, or to
accept high-latency sequential traversals, any of which kills your
ability to use data-parallel architectures like GPUs. I understand
that the newest proposal (the "posit" format) returns to a fixed width.
Beyond that, the only thing I really know about unums is that their
inventor claims that they, together with interval arithmetic,
eliminate the need for numerical analysts; that claim is contested by
one of the leads of the IEEE 754 standard, and I'm not really
qualified to comment on the debate.
Beware of bugs in the above code; I have only proved it correct, not
tried it. -- Donald E. Knuth