updated hash functions for postgresql v1

updated hash functions for postgresql v1

Kenneth Marshall-3
Dear PostgreSQL Developers,

This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
It implements the 2006 version of the hash function by Bob Jenkins. Its
features include a better and faster hash function. I have included the
versions supporting big-endian and little-endian machines that will be
selected based on the machine configuration. Currently, hash_any() is
just a stub that calls hashlittle() and hashbig(). In order to allow the hash
index to support large indexes (>10^9 entries), the hash function needs
to be able to provide 64-bit hashes.

The functions hashbig2/hashlittle2 produce 2 32-bit hashes that can be
used as a 64-bit hash value. I would like some feedback as to how best
to include 64-bit hashes within our current 32-bit hash infrastructure.
The hash merge can simply use one of the two 32-bit pieces to provide
the current 32-bit hash values needed. Then they could be pulled directly
from the hash index and not need to be recalculated at run time. What
would be the best way to implement this in a way that will work on
machines without support for 64-bit integers?
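One way to frame that question: a 64-bit hash can be carried as an explicit pair of 32-bit words, so no native 64-bit integer type is required. A minimal sketch in C (the type and function names are hypothetical, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical two-word hash value, usable even on platforms without a
 * native 64-bit integer type.  Names are illustrative only. */
typedef struct
{
    uint32_t lo;    /* e.g. one of hashlittle2's two output words */
    uint32_t hi;    /* the other output word */
} Hash64;

/* The legacy 32-bit hash is simply one of the two words. */
static uint32_t
hash64_to_hash32(Hash64 h)
{
    return h.lo;
}

static int
hash64_eq(Hash64 a, Hash64 b)
{
    return a.lo == b.lo && a.hi == b.hi;
}
```

An index entry could store both words while the existing 32-bit code paths keep reading just one of them, along the lines suggested above.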

The current patch passes all the regression tests, but has a few warnings
for the different variations of the new hash function. Until the design
has crystallized, I am not going to worry about them, and I want testers to
have access to the different functions. I am doing the initial patches
to the hash index code based on a 32-bit hash, but I would like to add the
64-bit hash support pretty early in the development cycle in order to
allow for better testing. Any thoughts would be welcome.

Regards,
Ken


---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org

Attachment: new_hashfunc.patch (34K)

Re: updated hash functions for postgresql v1

Simon Riggs
On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> Its features include a better and faster hash function.

Looks very promising. Do you have any performance test results to show
it really is faster, when compiled into Postgres? Better probably needs
some definition also; in what way are the hash functions better?
 
--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



Re: updated hash functions for postgresql v1

Kenneth Marshall-3
On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:

> On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
> > Its features include a better and faster hash function.
>
> Looks very promising. Do you have any performance test results to show
> it really is faster, when compiled into Postgres? Better probably needs
> some definition also; in what way are the hash functions better?
The new hash function is roughly twice as fast as the old function in
terms of straight CPU time. It uses the same design as the current
hash but provides code paths for aligned and unaligned access, as well
as separate mixing functions for different blocks in the hash run
instead of one general-purpose block. I think the speed will
not be an obvious win with smaller items, but will be very important
when hashing larger items (up to 32KB).

Better in this case means that the new hash mixes more thoroughly,
which results in fewer collisions and a more even bucket distribution.
There is also a 64-bit variant which is still faster, since it can
take advantage of the 64-bit processor instruction set.
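For reference, the per-block mixing step of Bob Jenkins' 2006 lookup3 that this describes looks roughly like the following (rotation constants reproduced from memory of the published lookup3.c, so treat the exact values as illustrative rather than authoritative):

```c
#include <assert.h>
#include <stdint.h>

/* Rotate left: the core primitive of lookup3's mixing. */
#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

/* Mix three 32-bit state words; applied once per 12-byte input block.
 * Constants as recalled from the published lookup3.c. */
static void
mix3(uint32_t *a, uint32_t *b, uint32_t *c)
{
    *a -= *c;  *a ^= rot(*c, 4);   *c += *b;
    *b -= *a;  *b ^= rot(*a, 6);   *a += *c;
    *c -= *b;  *c ^= rot(*b, 8);   *b += *a;
    *a -= *c;  *a ^= rot(*c, 16);  *c += *b;
    *b -= *a;  *b ^= rot(*a, 19);  *a += *c;
    *c -= *b;  *c ^= rot(*b, 4);   *b += *a;
}
```

The "separate mixing functions" mentioned above refers to lookup3 also having a distinct final() step for the last (possibly partial) block, rather than reusing one general-purpose round everywhere.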

Ken


Re: updated hash functions for postgresql v1

Simon Riggs
On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:

> On Sun, Oct 28, 2007 at 05:27:38PM +0000, Simon Riggs wrote:
> > [snip]
> The new hash function is roughly twice as fast as the old function in
> terms of straight CPU time. It uses the same design as the current
> hash but provides code paths for aligned and unaligned access as well
> as separate mixing functions for different blocks in the hash run
> instead of having one general purpose block. I think the speed will
> not be an obvious win with smaller items, but will be very important
> when hashing larger items (up to 32kb).
>
> Better in this case means that the new hash mixes more thoroughly
> which results in less collisions and more even bucket distribution.
> There is also a 64-bit varient which is still faster since it can
> take advantage of the 64-bit processor instruction set.

Ken, I was really looking for some tests that show both of the above
were true. We've had some trouble proving the claims of other algorithms
before, so I'm less inclined to take those things at face value.

I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
Others may have different concerns.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



Re: updated hash functions for postgresql v1

Luke Lonergan

We just applied this and saw a 5 percent speedup on a hash aggregation query with four columns in a 'group by' clause, run against a single TPC-H table (lineitem).

CK - can you post the query?

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Simon Riggs [[hidden email]]
Sent:   Sunday, October 28, 2007 04:11 PM Eastern Standard Time
To:     Kenneth Marshall
Cc:     [hidden email]; [hidden email]; [hidden email]
Subject:        Re: [PATCHES] updated hash functions for postgresql v1

[snip]


Re: updated hash functions for postgresql v1

CK Tan
Hi, this query on TPCH 1G data gets about 5% improvement.

select count (*) from (select l_orderkey, l_partkey, l_comment,
count(l_tax) from lineitem group by 1, 2, 3) tmpt;

Regards,
-cktan


On Oct 28, 2007, at 1:17 PM, Luke Lonergan wrote:

[snip]



Re: updated hash functions for postgresql v1

Simon Riggs
On Sun, 2007-10-28 at 13:19 -0700, CK Tan wrote:
> Hi, this query on TPCH 1G data gets about 5% improvement.

> select count (*) from (select l_orderkey, l_partkey, l_comment,
> count(l_tax) from lineitem group by 1, 2, 3) tmpt;

> On Oct 28, 2007, at 1:17 PM, Luke Lonergan wrote:
>
> > [snip]

Is this on Postgres or Greenplum?


That looks like quite a wide set of columns.

Sounds good though. Can we get any more measurements in?

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



Re: updated hash functions for postgresql v1

Luke Lonergan

That's on Greenplum latest.

We used this query to expose CPU heavy aggregation.

The 1GB overall TPC-H size is chosen to fit into the RAM of a typical workstation/laptop with 2GB of RAM.  That ensures the time is spent in the CPU processing of the hashagg, which is what we'd like to measure here.

The PG performance will be different, but the measurement approach should be the same IMO.  The only suggestion to make it easier is to use a 250MB scale factor, as we use four cores against 1GB.  The principle is the same.

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Simon Riggs [[hidden email]]
Sent:   Sunday, October 28, 2007 04:48 PM Eastern Standard Time
To:     CK.Tan
Cc:     Luke Lonergan; Kenneth Marshall; [hidden email]; [hidden email]; [hidden email]
Subject:        Re: [PATCHES] updated hash functions for postgresql v1

[snip]


Re: updated hash functions for postgresql v1

Kenneth Marshall-3
On Sun, Oct 28, 2007 at 08:06:58PM +0000, Simon Riggs wrote:

> On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> > [snip]
>
> Ken, I was really looking for some tests that show both of the above
> were true. We've had some trouble proving the claims of other algorithms
> before, so I'm less inclined to take those things at face value.
>
> I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
> Others may have different concerns.
>

Simon,

I agree that we should not take claims without testing them ourselves.
My main motivation for posting the patch was to get feedback on how to
add support for 64-bit hashes that will work with all of our supported
platforms. I am trying to avoid the "work on a feature in isolation...
and submit a giant patch with many problems" problem. I intend to do
more extensive testing, but I am trying to reach a basic implementation
level before I start the testing. I am pretty good with theory, but my
coding skills are out of practice, so generating the tests now would take
me longer, without any clear benefit to the hash index implementation.
I am willing to test further, but I would like to have my testing benefit
the hash index implementation and not just the effectiveness and efficiency
of the hashing algorithm.

Regards,
Ken


Re: updated hash functions for postgresql v1

bob_jenkins
On Oct 28, 11:05 am, [hidden email] (Kenneth Marshall) wrote:

> [snip]

I don't make use of 64-bit arithmetic when producing the 64-bit result
in hashlittle2().  Wish I did.  The routine internally produces three
32-bit results a, b, c; the returned 64-bit result is roughly c | (b << 32).
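That combination can be sketched in C as follows (stand-in values below, not real lookup3 output):

```c
#include <assert.h>
#include <stdint.h>

/* Combine lookup3's two 32-bit output words b and c into one 64-bit value,
 * roughly c | (b << 32) as described above.  The cast on b is essential:
 * without it, the shift by 32 would overflow a 32-bit operand. */
static uint64_t
combine64(uint32_t c, uint32_t b)
{
    return (uint64_t)c | ((uint64_t)b << 32);
}
```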



hashlittle(), hashbig(), hashword() and endianness

Alex Vinokur
On Oct 27, 10:15 pm, [hidden email] (Kenneth Marshall) wrote:
> Dear PostgreSQL Developers,
>
> This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
> It implements the 2006 version of the hash function by Bob Jenkins. Its
> features include a better and faster hash function. I have included the
> versions supporting big-endian and little-endian machines that will be
> selected based on the machine configuration.
[snip]

I have a question concerning Bob Jenkins' functions
hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
hashbig(uint8_t*, size_t) in lookup3.c.

Let k1 be a key: uint8_t *k1, with strlen(k1) % sizeof(uint32_t) == 0.

1. hashlittle(k1) produces the same value on Little-Endian and
Big-Endian machines.
   Let hashlittle(k1) be == L1.

2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
machines.
   Let hashbig(k1) be == B1.

  L1 != B1


3. hashword((uint32_t*)k1) produces
    * L1 on a Little-Endian machine and
    * B1 on a Big-Endian machine.

---------------------
The question is: is it possible to change hashword() to get
    * L1 on a Little-Endian machine and
    * B1 on a Big-Endian machine?
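The behaviour described in points 1-3 comes from how the input bytes are assembled into 32-bit words. A self-contained sketch (this is not lookup3 itself, just the byte-assembly idea that makes hashlittle() endianness-independent):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read 4 bytes as a little-endian 32-bit word: same result on any host,
 * which is why a hash built on this is endianness-independent. */
static uint32_t
read_le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Read 4 bytes through the host's native word layout: the value seen
 * depends on the host byte order, as hashword() does when handed
 * uint32_t input directly. */
static uint32_t
read_native32(const uint8_t *p)
{
    uint32_t w;
    memcpy(&w, p, sizeof w);
    return w;
}
```

On a little-endian host the two reads agree, so hashword() and hashlittle() match; on a big-endian host read_native32() sees the bytes in the other order, and the hashes diverge.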

Thanks.

Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn


Re: hashlittle(), hashbig(), hashword() and endianness

Alex Vinokur
On Nov 15, 10:40 am, Alex Vinokur <[hidden email]>
wrote:
[snip]

> [snip]
===================================
> ---------------------
> The question is: is it possible to change hashword() to get
>     * L1 on Little-Endian machine and
>     * B1 on Big-Endian machine
>    ?

Sorry, it should be as follows:

Is it possible to create two new hash functions on the basis of
hashword():
   i)  hashword_little() that produces L1 on Little-Endian and
Big-Endian machines;
   ii) hashword_big()    that produces B1 on Little-Endian and
Big-Endian machines?

====================================

Thanks.

Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn



Re: hashlittle(), hashbig(), hashword() and endianness

Heikki Linnakangas-2
Alex Vinokur wrote:

> [snip]
> Is it possible to create two new hash functions on basis of
> hashword():
>    i)  hashword_little() that produces L1 on Little-Endian and
> Big-Endian machines;
>    ii) hashword_big()    that produces B1 on Little-Endian and
> Big-Endian machines
>    ?

Why?

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com


Re: hashlittle(), hashbig(), hashword() and endianness

Alex Vinokur
On Nov 15, 1:23 pm, [hidden email] (Heikki Linnakangas)
wrote:

> [snip]
> Why?
>
[snip]

Suppose:
uint8_t chBuf[SIZE32 * 4];  // ((size_t)&chBuf[0] & 3) == 0

Function
hashlittle(chBuf, SIZE32 * 4, 0)
produces the same hashValue (let this value be L1) on little-endian
and big-endian machines. So, hashlittle() is endianness-independent.

On the other hand, the function
hashword((uint32_t *)chBuf, SIZE32, 0)
produces hashValue == L1 on a little-endian machine and hashValue != L1
on a big-endian machine. So, hashword() is endianness-dependent.

I would like to use both hashlittle() and hashword() (or
hashword_little()) on little-endian and big-endian machines and get
identical hashValues.


Alex Vinokur
     email: alex DOT vinokur AT gmail DOT com
     http://mathforum.org/library/view/10978.html
     http://sourceforge.net/users/alexvn


Re: hashlittle(), hashbig(), hashword() and endianness

Kenneth Marshall-3
On Fri, Nov 16, 2007 at 01:19:13AM -0800, Alex Vinokur wrote:

> [snip]
Alex,

As I suspected, you want a hash function that is independent of the
machine endianness. You will need to design, develop, and test such
a function yourself. As you start to look at how overflow, rotations, and
shifts are handled at the boundaries, you may find it difficult to
get a fast hash function with those properties. Good luck.

Regards,
Ken


Re: hashlittle(), hashbig(), hashword() and endianness

Marko Kreen-3
On 11/16/07, Alex Vinokur <[hidden email]> wrote:
> I would like to use both hashlittle() and hashword() (or
> hashword_little) on little-endian and big-endian machine and to get
> identical hashValues.

What's wrong with hashlittle()?  It uses the same optimized
reading on LE platforms that hashword() does.  Or you could wrap
the read values with some int2le() macro that is a no-op on LE CPUs,
although I suspect the performance won't be better than using
hashlittle() directly.
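A sketch of the suggested wrapper (int2le() is Marko's hypothetical name, not an existing API; runtime detection is used here only to keep the sketch self-contained, where a real build would use a compile-time test such as WORDS_BIGENDIAN):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical int2le(): byte-swap on big-endian hosts, no-op on
 * little-endian hosts, so that the value always has little-endian
 * byte order when stored to memory. */
static uint32_t
int2le(uint32_t w)
{
    const union { uint32_t u; uint8_t b[4]; } probe = { 1u };

    if (probe.b[0] == 1)            /* little-endian host: nothing to do */
        return w;
    return (w >> 24)                /* big-endian host: swap all 4 bytes */
         | ((w >> 8) & 0x0000FF00u)
         | ((w << 8) & 0x00FF0000u)
         | (w << 24);
}
```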

--
marko


Re: updated hash functions for postgresql v1

Tom Lane-2
Kenneth Marshall <[hidden email]> writes:
> Dear PostgreSQL Developers,
> This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.

It's pretty obvious that this patch hasn't even been tested on a
big-endian machine:

> + #ifndef WORS_BIGENDIAN

However, why do we need two code paths anyway?  I don't think there's
any requirement for the hash values to come out the same on little-
and big-endian machines.  In common cases the byte-array data being
presented to the hash function would be different to start with, so
you could hardly expect identical hash results even if you had separate
code paths.

I don't find anything very compelling about 64-bit hashing, either.
We couldn't move to that without breaking API for hash functions
of user-defined types.  Given all the other problems with hash
indexes, the issue of whether it's useful to have more than 2^32
hash buckets seems very far off indeed.

                        regards, tom lane

--
Sent via pgsql-patches mailing list ([hidden email])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: updated hash functions for postgresql v1

Kenneth Marshall-3
On Sun, Mar 16, 2008 at 10:53:02PM -0400, Tom Lane wrote:

> Kenneth Marshall <[hidden email]> writes:
> > Dear PostgreSQL Developers,
> > This patch is a "diff -c" against the hashfunc.c from postgresql-8.3beta1.
>
> It's pretty obvious that this patch hasn't even been tested on a
> big-endian machine:
>
> > + #ifndef WORS_BIGENDIAN
>
> However, why do we need two code paths anyway?  I don't think there's
> any requirement for the hash values to come out the same on little-
> and big-endian machines.  In common cases the byte-array data being
> presented to the hash function would be different to start with, so
> you could hardly expect identical hash results even if you had separate
> code paths.
>
> I don't find anything very compelling about 64-bit hashing, either.
> We couldn't move to that without breaking API for hash functions
> of user-defined types.  Given all the other problems with hash
> indexes, the issue of whether it's useful to have more than 2^32
> hash buckets seems very far off indeed.
>
> regards, tom lane
>

Yes, there is that typo, but the patch has, in fact, been tested on both
big- and little-endian machines. It was a simple update replacing the
current hash function used by PostgreSQL with the new version from
Bob Jenkins. The test for the endian-ness of the system allows the
code paths to be optimized for the particular CPU. The 64-bit
hashing was included for use during my work on the hash index.
Part of that work will entail testing the performance of various
permutations of previously submitted suggestions.

Regards,
Ken Marshall


Re: updated hash functions for postgresql v1

Tom Lane-2
In reply to this post by Simon Riggs
Simon Riggs <[hidden email]> writes:

> On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
>> The new hash function is roughly twice as fast as the old function in
>> terms of straight CPU time. It uses the same design as the current
>> hash but provides code paths for aligned and unaligned access as well
>> as separate mixing functions for different blocks in the hash run
>> instead of having one general purpose block. I think the speed will
>> not be an obvious win with smaller items, but will be very important
>> when hashing larger items (up to 32kb).
>>
>> Better in this case means that the new hash mixes more thoroughly,
>> which results in fewer collisions and more even bucket distribution.
>> There is also a 64-bit variant which is still faster since it can
>> take advantage of the 64-bit processor instruction set.

> Ken, I was really looking for some tests that show both of the above
> were true. We've had some trouble proving the claims of other algorithms
> before, so I'm less inclined to take those things at face value.

I spent some time today looking at this code more closely and running
some simple speed tests.  It is faster than what we have, although 2X
is the upper limit of the speedups I saw on four different machines.
There are several things going on in comparison to our existing
hash_any:

* If the source data is word-aligned, the new code fetches it a word at
a time instead of a byte at a time; that is

        a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24));
        b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24));
        c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24));

becomes

        a += k[0];
        b += k[1];
        c += k[2];

where k is now a pointer to uint32 instead of uchar.  This accounts for
most of the speed improvement.  However, the results now vary between
big-endian and little-endian machines.  That's fine for PG's purposes.
But it means that we need two sets of code for the unaligned-input code
path, since it clearly won't do for the same bytestring to get two
different hashes depending on whether it happens to be presented aligned
or not.  The presented patch actually offers *four* code paths, so that
you can compute either little-endian-ish or big-endian-ish hashes on
either type of machine.  That's nothing but bloat for our purposes, and
should be reduced to the minimum.
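For illustration, the two fetch styles above can be sketched as follows (the function names are mine, not from the patch). On a little-endian machine both produce the same word, which is exactly why the aligned path changes the hash output on big-endian machines:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Byte-at-a-time assembly, as in the existing hash_any(): the result is
 * the little-endian interpretation of the bytes on every host. */
static uint32_t fetch_bytewise(const unsigned char *k)
{
    return (uint32_t) k[0] |
           ((uint32_t) k[1] << 8) |
           ((uint32_t) k[2] << 16) |
           ((uint32_t) k[3] << 24);
}

/* Whole-word fetch, as in the new aligned path: a single load, but the
 * result follows the host's native byte order.  (memcpy keeps the sketch
 * portable; the real code dereferences an aligned uint32 pointer.) */
static uint32_t fetch_word(const unsigned char *k)
{
    uint32_t w;

    memcpy(&w, k, sizeof(w));
    return w;
}
```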

* Given a word-aligned source pointer and a length that isn't a multiple
of 4, the new code fetches the last partial word as a full word fetch
and masks it off, as per the code comment:

     * "k[2]&0xffffff" actually reads beyond the end of the string, but
     * then masks off the part it's not allowed to read.  Because the
     * string is aligned, the masked-off tail is in the same word as the
     * rest of the string.  Every machine with memory protection I've seen
     * does it on word boundaries, so is OK with this.  But VALGRIND will
     * still catch it and complain.  The masking trick does make the hash
     * noticably faster for short strings (like English words).

This I think is well beyond the bounds of sanity, especially since we
have no configure support for setting #ifdef VALGRIND.  I'd lose the
"non valgrind clean" paths (which again are contributing to the patch's
impression of bloat/redundancy).
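The valgrind-clean alternative assembles the tail byte by byte instead of over-reading a full word and masking. A sketch of that tail handling, using the same little-endian-ish byte order as above (names illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Valgrind-clean tail: touch only the bytes that actually exist.  The
 * masking trick would instead read the whole last word and AND off the
 * bytes beyond the end of the string. */
static uint32_t tail_bytewise(const unsigned char *k, size_t len)
{
    uint32_t w = 0;

    switch (len & 3)
    {
        case 3:
            w += (uint32_t) k[2] << 16;
            /* fall through */
        case 2:
            w += (uint32_t) k[1] << 8;
            /* fall through */
        case 1:
            w += k[0];
    }
    return w;
}
```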

* Independently of the above changes, the actual hash calculation
(the mix() and final() macros) has been changed.  Ken claims that
this made the hash "better", but I'm deeply suspicious of that.
The comments in the code make it look like Jenkins actually sacrificed
hash quality in order to get a little more speed.  I don't think we
should adopt those changes unless some actual evidence is presented
that the hash is better and not merely faster.
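For reference, the mix() and final() macros in question look like the following (reproduced here from memory of Jenkins' 2006 lookup3.c; verify against the actual patch before drawing conclusions about hash quality):

```c
#include <assert.h>
#include <stdint.h>

#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

/* mix(): stir three words of internal state together; each line mixes
 * one word using a rotate, an xor/subtract, and an add. */
#define mix(a, b, c) \
{ \
    a -= c;  a ^= rot(c, 4);  c += b; \
    b -= a;  b ^= rot(a, 6);  a += c; \
    c -= b;  c ^= rot(b, 8);  b += a; \
    a -= c;  a ^= rot(c, 16); c += b; \
    b -= a;  b ^= rot(a, 19); a += c; \
    c -= b;  c ^= rot(b, 4);  b += a; \
}

/* final(): last-round avalanche; only c (and b, in the two-word
 * variants) is reported back as the hash value. */
#define final(a, b, c) \
{ \
    c ^= b; c -= rot(b, 14); \
    a ^= c; a -= rot(c, 11); \
    b ^= a; b -= rot(a, 25); \
    c ^= b; c -= rot(b, 16); \
    a ^= c; a -= rot(c, 4);  \
    b ^= a; b -= rot(a, 14); \
    c ^= b; c -= rot(b, 24); \
}
```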


In short: I think we should adopt the changes to use aligned word
fetches where possible, but not adopt the mix/final changes unless
more evidence is presented.

Lastly, the patch adds yet more code to provide the option of computing
a 64-bit hash rather than 32.  (AFAICS, the claim that this part is
optimized for 64-bit machines is mere fantasy.  It's simply Yet Another
duplicate of the identical code, but it gives you back two of its three
words of internal state at the end, instead of only one.)  As I said
before, this is just bloat for us.  I've got zero interest in pursuing
64-bit hashing when we still don't have a hash index implementation that
anyone would consider using in anger.  Let's see if we can make the cake
edible before worrying about putting a better grade of icing on it.

                        regards, tom lane


Re: updated hash functions for postgresql v1

Kenneth Marshall-3
On Sat, Apr 05, 2008 at 03:40:35PM -0400, Tom Lane wrote:

> Simon Riggs <[hidden email]> writes:
> > On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
> >> The new hash function is roughly twice as fast as the old function in
> >> terms of straight CPU time. It uses the same design as the current
> >> hash but provides code paths for aligned and unaligned access as well
> >> as separate mixing functions for different blocks in the hash run
> >> instead of having one general purpose block. I think the speed will
> >> not be an obvious win with smaller items, but will be very important
> >> when hashing larger items (up to 32kb).
> >>
> >> Better in this case means that the new hash mixes more thoroughly,
> >> which results in fewer collisions and more even bucket distribution.
> >> There is also a 64-bit variant which is still faster since it can
> >> take advantage of the 64-bit processor instruction set.
>
> > Ken, I was really looking for some tests that show both of the above
> > were true. We've had some trouble proving the claims of other algorithms
> > before, so I'm less inclined to take those things at face value.
>
> I spent some time today looking at this code more closely and running
> some simple speed tests.  It is faster than what we have, although 2X
> is the upper limit of the speedups I saw on four different machines.
> There are several things going on in comparison to our existing
> hash_any:
>
> * If the source data is word-aligned, the new code fetches it a word at
> a time instead of a byte at a time; that is
>
>         a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24));
>         b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24));
>         c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24));
>
> becomes
>
>         a += k[0];
>         b += k[1];
>         c += k[2];
>
> where k is now a pointer to uint32 instead of uchar.  This accounts for
> most of the speed improvement.  However, the results now vary between
> big-endian and little-endian machines.  That's fine for PG's purposes.
> But it means that we need two sets of code for the unaligned-input code
> path, since it clearly won't do for the same bytestring to get two
> different hashes depending on whether it happens to be presented aligned
> or not.  The presented patch actually offers *four* code paths, so that
> you can compute either little-endian-ish or big-endian-ish hashes on
> either type of machine.  That's nothing but bloat for our purposes, and
> should be reduced to the minimum.
>

I agree that a good portion of the speed-up is due to the full-word
processing. The original code from Bob Jenkins had all of these code
paths, and I just dropped them in with a minimum of changes.

> * Given a word-aligned source pointer and a length that isn't a multiple
> of 4, the new code fetches the last partial word as a full word fetch
> and masks it off, as per the code comment:
>
>      * "k[2]&0xffffff" actually reads beyond the end of the string, but
>      * then masks off the part it's not allowed to read.  Because the
>      * string is aligned, the masked-off tail is in the same word as the
>      * rest of the string.  Every machine with memory protection I've seen
>      * does it on word boundaries, so is OK with this.  But VALGRIND will
>      * still catch it and complain.  The masking trick does make the hash
>      * noticably faster for short strings (like English words).
>
> This I think is well beyond the bounds of sanity, especially since we
> have no configure support for setting #ifdef VALGRIND.  I'd lose the
> "non valgrind clean" paths (which again are contributing to the patch's
> impression of bloat/redundancy).
>

Okay, I will strip the VALGRIND paths. I did not see a real need for them
either.

> * Independently of the above changes, the actual hash calculation
> (the mix() and final() macros) has been changed.  Ken claims that
> this made the hash "better", but I'm deeply suspicious of that.
> The comments in the code make it look like Jenkins actually sacrificed
> hash quality in order to get a little more speed.  I don't think we
> should adopt those changes unless some actual evidence is presented
> that the hash is better and not merely faster.
>

I was repeating the claims made by the function's author after his own
testing. His analysis and tests were reasonable, but I do agree that
we need some testing of our own. I will start pulling together some
test cases like those discussed earlier with Simon.

>
> In short: I think we should adopt the changes to use aligned word
> fetches where possible, but not adopt the mix/final changes unless
> more evidence is presented.
>
Okay, I agree and will work on producing evidence either way.

> Lastly, the patch adds yet more code to provide the option of computing
> a 64-bit hash rather than 32.  (AFAICS, the claim that this part is
> optimized for 64-bit machines is mere fantasy.  It's simply Yet Another
> duplicate of the identical code, but it gives you back two of its three
> words of internal state at the end, instead of only one.)  As I said
> before, this is just bloat for us.  I've got zero interest in pursuing
> 64-bit hashing when we still don't have a hash index implementation that
> anyone would consider using in anger.  Let's see if we can make the cake
> edible before worrying about putting a better grade of icing on it.
>
You are correct; my 64-bit claim was due to misinterpreting some comments
by the author. He personally sent a correction to the mailing list.

Regards,
Ken Marshall
