Newsgroups: comp.compilers
From: henry@zoo.toronto.edu (Henry Spencer)
Keywords: design
Organization: U of Toronto Zoology
References: <9012111512.AA14873@iecc.cambridge.ma.us>
Date: Wed, 12 Dec 90 22:14:25 GMT
>>How does one build a hash function with minimal table size and minimal
>>collisions for a known set of hash strings?
>
>The problem of designing a hash algorithm that maps N identifiers into
>unique slots in an N entry table is known as perfect hashing...
Note that he didn't ask for zero wasted space or zero collisions, just for
both to be minimal. Interpreting "minimal" not too strictly, this doesn't
require perfect hashing. I don't have any specific algorithms to suggest for
imperfect hashing, however -- it's never seemed difficult to get a table
with only one or two collisions and not much extra space.
Actually, I have never figured out why people seem so obsessed with perfect
hashing, since the hash functions typically seem noticeably expensive, and
it has always seemed to me that a fast hashing function plus fast resolution
of occasional collisions would be easier and just as good.
--
Henry Spencer at U of Toronto Zoology, henry@zoo.toronto.edu utzoo!henry