Re: [HACKERS] PseudoPartitioning and aggregates

Re: [HACKERS] PseudoPartitioning and aggregates

Tom Lane
Sokolov Yura <[hidden email]> writes:
> I think, postgres can perform aggregates on each table in the union first,
> and then merge the results.

I looked into this because it did not make a lot of sense to me.  The
aggregates you are testing with (count, sum, max) are all perfectly
linear in the number of input values, so it absolutely should not save
any time to divide up and then recombine the input, especially not if
the recombination takes some actual work (like a join).
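
Just to make the comparison concrete, the two queries have roughly these
shapes (a sketch reconstructed from the plans; the table and column names are
taken from the EXPLAIN output, and the recombination expressions in the second
form are a guess at the original test case, not something quoted from it):

-- (a) aggregate over the union of the tables:
SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
FROM (SELECT grp, amount FROM t1
      UNION ALL
      SELECT grp, amount FROM t2) AS union_table
GROUP BY grp;

-- (b) "pseudo-partitioned" form: aggregate each table separately,
--     then merge the per-table results with a full join:
SELECT grp,
       coalesce(a.c, 0) + coalesce(b.c, 0) AS c,
       coalesce(a.s, 0) + coalesce(b.s, 0) AS s,
       greatest(a.m, b.m)                  AS m   -- greatest() ignores the NULL
                                                  -- from a side lacking the group
FROM (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
      FROM t1 GROUP BY grp) a
FULL JOIN
     (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
      FROM t2 GROUP BY grp) b USING (grp);

Since count, sum, and max look at each input row exactly once in either form,
(b) does all the work that (a) does plus the extra sorting and joining needed
to recombine the partial results.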

What I found was that the reason for the difference was the overhead of
SubqueryScan and Append nodes:

> HashAggregate  (cost=23830.52..23832.02 rows=200 width=8) (actual time=22547.272..22586.130 rows=5243 loops=1)
>   ->  Subquery Scan t_union  (cost=0.00..18587.64 rows=524288 width=8) (actual time=0.204..17863.444 rows=524288 loops=1)
>         ->  Append  (cost=0.00..13344.76 rows=524288 width=12) (actual time=0.193..12990.177 rows=524288 loops=1)
>               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6684.88 rows=262144 width=12) (actual time=0.186..4488.981 rows=262144 loops=1)
>                     ->  Seq Scan on t  (cost=0.00..4063.44 rows=262144 width=12) (actual time=0.163..1915.213 rows=262144 loops=1)
>               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6659.88 rows=262144 width=12) (actual time=0.225..4558.788 rows=262144 loops=1)
>                     ->  Seq Scan on t1  (cost=0.00..4038.44 rows=262144 width=12) (actual time=0.208..1798.410 rows=262144 loops=1)
> Total runtime: 22634.454 ms
> (well, the actual wall-clock time is 2375 ms; Postgres 8.0.1, Slackware 10.0)

EXPLAIN ANALYZE overstates the penalty because its per-plan-node
instrumentation overhead is pretty high, but nonetheless it's
clear that the actually useful work (the two seqscans and the
HashAggregate) is only accounting for a portion of the runtime.
The reason your join query wins is that only a much smaller number of
tuples have to pass through multiple levels of plan nodes.

In the above example the Subquery Scan nodes aren't really doing
anything useful at all: they have neither any selection (filter
conditions) nor any projection (the output columns are the same
as the input, though this is not shown by EXPLAIN).  They are put there
by the planner because there are cases where they *are* needed,
e.g., to do type conversion when UNION'ing unlike column types.
But we could try harder to optimize them out.
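
For instance, something along these lines (a made-up illustration, not from
the original report) does need a real projection above one of the scans,
because the integer column has to be coerced to the union's common type:

-- hypothetical tables: int_amounts.val is integer, numeric_amounts.val is numeric
SELECT val FROM int_amounts
UNION ALL
SELECT val FROM numeric_amounts;
-- the integer branch's output must be cast to numeric, so its subquery
-- level carries useful work and cannot simply be discarded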

I have committed some changes in CVS tip to get rid of useless
Subquery Scan nodes.  Your example now looks like

save=# explain analyze SELECT grp,count(*) AS c,sum(amount) AS s,max(amount) AS m FROM union_table GROUP BY grp;
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=24311.52..24313.02 rows=200 width=8) (actual time=15201.186..15261.184 rows=5243 loops=1)
   ->  Append  (cost=0.00..13825.76 rows=524288 width=12) (actual time=0.236..7519.033 rows=524288 loops=1)
         ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.205..2071.102 rows=262144 loops=1)
         ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.095..1743.434 rows=262144 loops=1)
 Total runtime: 15292.082 ms
(5 rows)

The Subquery Scans also disappear from your more complex query,
but since they weren't processing nearly as many tuples, there's
not much improvement there:

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Merge Full Join  (cost=14207.24..14964.22 rows=33081 width=40) (actual time=11339.442..11457.087 rows=5243 loops=1)
   Merge Cond: ("outer".grp = "inner".grp)
   ->  Sort  (cost=7100.32..7106.65 rows=2532 width=20) (actual time=5672.726..5682.937 rows=2622 loops=1)
         Sort Key: t1.grp
         ->  HashAggregate  (cost=6912.88..6931.87 rows=2532 width=8) (actual time=5591.574..5624.523 rows=2622 loops=1)
               ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.212..1707.122 rows=262144 loops=1)
   ->  Sort  (cost=7106.91..7113.45 rows=2613 width=20) (actual time=5666.598..5676.735 rows=2622 loops=1)
         Sort Key: t2.grp
         ->  HashAggregate  (cost=6912.88..6932.48 rows=2613 width=8) (actual time=5584.098..5616.810 rows=2622 loops=1)
               ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.139..1707.351 rows=262144 loops=1)
 Total runtime: 11501.805 ms
(11 rows)

The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
but when comparing actual runtimes for the two queries, they are
now very nearly the same.

                        regards, tom lane

Re: [HACKERS] PseudoPartitioning and aggregates

Greg Stark

Tom Lane <[hidden email]> writes:

> The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
> but when comparing actual runtimes for the two queries, they are
> now very nearly the same.

How hard would it be to have Postgres actually remove the gettimeofday
overhead from the EXPLAIN ANALYZE output?

It seems like it ought to be able to time a couple hundred gettimeofday calls
and get a perfectly usable figure. The actual amount of overhead per call
should be very consistent and it should be easy to keep track of how many
gettimeofday calls were needed.

For queries that don't do much I/O, especially on loaded machines, there could
still be a problem in that the added syscalls would cause most Unix schedulers
to behave differently. But at least it would be basically right.

--
greg


Re: [HACKERS] PseudoPartitioning and aggregates

Tom Lane
Greg Stark <[hidden email]> writes:
> How hard would it be to have Postgres actually remove the gettimeofday
> overhead from the EXPLAIN ANALYZE output?

Personally, I dislike measurement tools that lie to you under the flag
of producing more-easily-interpreted results.

As an example of why this would be a bad idea, the total time would no
longer be closely related to the actual elapsed time (as measured by
psql's \timing for instance) so you would be entirely unable to tell
whether there was some significant factor not being measured.
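
For instance, a quick way to see whether the two figures line up (a sketch,
using the same query as above):

\timing on
SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM union_table GROUP BY grp;
-- psql prints "Time: ... ms", the real wall-clock time
EXPLAIN ANALYZE SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM union_table GROUP BY grp;
-- the plan's "Total runtime: ... ms" includes the per-node timing overhead;
-- if that overhead were silently subtracted, the two numbers could no longer
-- be compared to spot time that isn't being measured at all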

                        regards, tom lane

Re: [HACKERS] PseudoPartitioning and aggregates

Greg Stark
Tom Lane <[hidden email]> writes:

> Greg Stark <[hidden email]> writes:
>
> > How hard would it be to have Postgres actually remove the gettimeofday
> > overhead from the EXPLAIN ANALYZE output?
>
> Personally, I dislike measurement tools that lie to you under the flag
> of producing more-easily-interpreted results.

This is pretty standard practice for profilers in other contexts.

> As an example of why this would be a bad idea, the total time would no
> longer be closely related to the actual elapsed time (as measured by
> psql's \timing for instance) so you would be entirely unable to tell
> whether there was some significant factor not being measured.

Well, that would be easily remedied by printing, after the plan, the total
overhead that was subtracted from all the nodes.

--
greg

