[POC] Faster processing at Gather node


[POC] Faster processing at Gather node

rafia.sabih
Hello everybody,

While analysing the performance of TPC-H queries for the newly developed parallel operators (parallel index scan, parallel bitmap heap scan, etc.), we noticed that the time taken by the Gather node is significant. On investigation we found that, with the current method, each tuple is copied to the shared queue individually and the receiver is notified each time; since this copying happens in the shared queue, it incurs considerable locking and latching overhead.

So, in this POC patch I tried copying all the tuples into a local queue first, thus avoiding the locks and latches. Once the local queue is filled to capacity, its tuples are transferred to the shared queue, and once all the tuples have been transferred, the receiver is notified.
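
To make the shape of the change concrete, here is a minimal sketch of the batching logic. The names (LocalQueue, LOCAL_QUEUE_SIZE, local_queue_put/local_queue_flush) are illustrative rather than the identifiers used in the attached patch, and the matching receiver-side logic to split a batch back into individual tuples is omitted:

/*
 * Sketch only: accumulate serialized tuples in a backend-local buffer
 * and touch the shared queue (with its locking and latch traffic)
 * only when the buffer fills up.  Assumes the receiver knows how to
 * split a batched message back into tuples, and that any single
 * tuple fits in the buffer (an oversized tuple would have to be sent
 * directly).
 */
#include "postgres.h"
#include "storage/shm_mq.h"

#define LOCAL_QUEUE_SIZE	(64 * 1024)	/* illustrative flush threshold */

typedef struct LocalQueue
{
	Size	used;					/* bytes currently buffered */
	char	data[LOCAL_QUEUE_SIZE];
} LocalQueue;

/* Flush everything buffered so far into the shared queue. */
static void
local_queue_flush(LocalQueue *lq, shm_mq_handle *mqh)
{
	if (lq->used > 0)
	{
		/* One locked shm_mq_send per batch instead of one per tuple. */
		(void) shm_mq_send(mqh, lq->used, lq->data, false);
		lq->used = 0;
	}
}

/* Buffer one serialized tuple, flushing first if it would not fit. */
static void
local_queue_put(LocalQueue *lq, shm_mq_handle *mqh,
				const char *tuple, Size len)
{
	if (lq->used + len > LOCAL_QUEUE_SIZE)
		local_queue_flush(lq, mqh);
	memcpy(lq->data + lq->used, tuple, len);
	lq->used += len;
}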

With this patch I could see a significant performance improvement for simple queries:

head:
explain  analyse select * from t where i < 30000000;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=29676454 width=19) (actual time=1.379..35871.235 rows=29999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.125..1415.521 rows=461538 loops=65)
         Filter: (i < 30000000)
         Rows Removed by Filter: 1076923
 Planning time: 0.180 ms
 Execution time: 38503.478 ms
(8 rows)

patch:
 explain  analyse select * from t where i < 30000000;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=29676454 width=19) (actual time=0.980..24499.427 rows=29999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.088..968.406 rows=461538 loops=65)
         Filter: (i < 30000000)
         Rows Removed by Filter: 1076923
 Planning time: 0.158 ms
 Execution time: 27331.849 ms
(8 rows)

head:
 explain  analyse select * from t where i < 40000000;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=39501511 width=19) (actual time=0.890..38438.753 rows=39999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.074..1235.180 rows=615385 loops=65)
         Filter: (i < 40000000)
         Rows Removed by Filter: 923077
 Planning time: 0.113 ms
 Execution time: 41609.855 ms
(8 rows)

patch:
explain  analyse select * from t where i < 40000000;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=39501511 width=19) (actual time=1.085..31806.671 rows=39999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.083..954.342 rows=615385 loops=65)
         Filter: (i < 40000000)
         Rows Removed by Filter: 923077
 Planning time: 0.151 ms
 Execution time: 35341.429 ms
(8 rows)

head:
explain  analyse select * from t where i < 45000000;
                                                           QUERY PLAN                                                           
--------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.563..49156.252 rows=44999999 loops=1)
   Workers Planned: 32
   Workers Launched: 32
   ->  Parallel Seq Scan on t  (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.069..1905.436 rows=1363636 loops=33)
         Filter: (i < 45000000)
         Rows Removed by Filter: 1666667
 Planning time: 0.106 ms
 Execution time: 52722.476 ms
(8 rows)

patch:
 explain  analyse select * from t where i < 45000000;
                                                           QUERY PLAN                                                           
--------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.545..37501.200 rows=44999999 loops=1)
   Workers Planned: 32
   Workers Launched: 32
   ->  Parallel Seq Scan on t  (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.068..2165.430 rows=1363636 loops=33)
         Filter: (i < 45000000)
         Rows Removed by Filter: 1666667
 Planning time: 0.087 ms
 Execution time: 41458.969 ms
(8 rows)

The performance improvement is greatest when the selectivity is around 20-30%, a range for which the planner currently does not select parallelism at all.

I am testing the performance impact of this on TPC-H queries; in the meantime, I would appreciate feedback on the design.

--
Regards,
Rafia Sabih



Attachment: faster_gather.patch (18K)

Re: [POC] Faster processing at Gather node

Robert Haas
On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
<[hidden email]> wrote:

> While analysing the performance of TPC-H queries for the newly
> developed parallel operators (parallel index scan, parallel bitmap
> heap scan, etc.), we noticed that the time taken by the Gather node
> is significant.  On investigation we found that, with the current
> method, each tuple is copied to the shared queue individually and
> the receiver is notified each time; since this copying happens in
> the shared queue, it incurs considerable locking and latching
> overhead.
>
> So, in this POC patch I tried copying all the tuples into a local
> queue first, thus avoiding the locks and latches.  Once the local
> queue is filled to capacity, its tuples are transferred to the
> shared queue, and once all the tuples have been transferred, the
> receiver is notified.

What if, instead of doing this, we switched the shm_mq stuff to use atomics?
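
Roughly, that could look like the following sketch (made-up struct and helper names, not actual shm_mq code), assuming the spinlock that currently guards the queue's read/write byte counters were replaced with atomics. With a single sender and a single receiver, each side only ever advances its own counter, so plain atomic reads and writes plus a barrier would do:

/*
 * Illustrative sketch, not actual shm_mq code: replace the spinlock
 * protecting the queue's byte counters with atomics so that sender
 * and receiver never block each other on counter updates.  The
 * counters are assumed to have been set up with pg_atomic_init_u64.
 */
#include "postgres.h"
#include "port/atomics.h"

typedef struct lockless_mq
{
	pg_atomic_uint64	bytes_written;	/* advanced only by the sender */
	pg_atomic_uint64	bytes_read;		/* advanced only by the receiver */
	Size				ring_size;
	char				ring[FLEXIBLE_ARRAY_MEMBER];
} lockless_mq;

/* Sender side: how much space is free in the ring right now? */
static Size
mq_free_space(lockless_mq *mq)
{
	uint64		written = pg_atomic_read_u64(&mq->bytes_written);
	uint64		read = pg_atomic_read_u64(&mq->bytes_read);

	return mq->ring_size - (Size) (written - read);
}

/* Sender side: publish nbytes of freshly written ring data. */
static void
mq_advance_write(lockless_mq *mq, Size nbytes)
{
	/* Make the ring writes visible before the counter moves. */
	pg_write_barrier();
	pg_atomic_write_u64(&mq->bytes_written,
						pg_atomic_read_u64(&mq->bytes_written) + nbytes);
}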

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [POC] Faster processing at Gather node

akapila
On Fri, May 19, 2017 at 5:58 PM, Robert Haas <[hidden email]> wrote:

> On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
> <[hidden email]> wrote:
>> While analysing the performance of TPC-H queries for the newly
>> developed parallel operators (parallel index scan, parallel bitmap
>> heap scan, etc.), we noticed that the time taken by the Gather node
>> is significant.  On investigation we found that, with the current
>> method, each tuple is copied to the shared queue individually and
>> the receiver is notified each time; since this copying happens in
>> the shared queue, it incurs considerable locking and latching
>> overhead.
>>
>> So, in this POC patch I tried copying all the tuples into a local
>> queue first, thus avoiding the locks and latches.  Once the local
>> queue is filled to capacity, its tuples are transferred to the
>> shared queue, and once all the tuples have been transferred, the
>> receiver is notified.
>
> What if, instead of doing this, we switched the shm_mq stuff to use atomics?
>

That is one of the very first things we tried, but it didn't show us
any improvement, probably because sending tuples one by one over
shm_mq is not cheap.  We also independently tried reducing the
frequency of SetLatch (used to notify the receiver), but that didn't
improve the results either.  Now, one thing that could be tried is to
use atomics in shm_mq and reduce the notification frequency at the
same time, but I am not sure whether that can give us better results
than this idea.  A couple of other ideas have also been tried to
improve the speed of Gather, such as avoiding the extra copy of the
tuple that we need to make before sending it
(tqueueReceiveSlot->ExecMaterializeSlot) and increasing the tuple
queue length, but none of those has shown any noticeable improvement.
I am aware of all this because Dilip and I were involved off-list in
brainstorming ideas with Rafia to improve the speed of Gather.  It
might have been better to show the results of the ideas that didn't
work out, but I guess Rafia hasn't shared those on the intuition that
nobody would be interested in hearing what didn't work.
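
For reference, the reduced-SetLatch idea was roughly of the following
shape (a sketch with illustrative names and an arbitrary threshold,
not the exact code we measured): wake the receiver only once enough
unread bytes have accumulated, or when the sender has no more tuples.

/*
 * Sketch only: instead of SetLatch after every tuple, signal the
 * receiver once NOTIFY_THRESHOLD bytes have been queued, or when the
 * sender is done.  Names and threshold are illustrative.
 */
#include "postgres.h"
#include "storage/latch.h"

#define NOTIFY_THRESHOLD	(8 * 1024)	/* bytes between wakeups */

static Size pending_bytes = 0;	/* written but not yet signalled */

static void
maybe_notify_receiver(Latch *receiver_latch, Size just_written,
					  bool sender_done)
{
	pending_bytes += just_written;

	if (pending_bytes >= NOTIFY_THRESHOLD || sender_done)
	{
		SetLatch(receiver_latch);	/* wake the Gather node's backend */
		pending_bytes = 0;
	}
}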

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

