planner weirdness: a join uses nestloop with checking condition when there are two subplan-or-hashed subqueries

Alexey Bashtanov
Hello,

Planner seems to make a weird choice of join algorithm -- O(M*N)
nestloop -- under certain circumstances.
I'm not exactly sure what the triggering criteria are, but I have a
self-contained example, albeit a large one.

If you unpack the attached pb-dump.sql.bz2 and run pb-test.sql,
you'll get the plan explained.
Despite an index being present on "qux", the join is executed as a plain
seqscan-over-seqscan nestloop; see pb-plan.txt.

Why? The cardinalities look reasonably well predicted, and with those
estimates a hash join or an index scan would
obviously be faster: the planner thinks we are joining 149 and 175728 rows.
The distribution of "qux"."foo_id" is not too skewed: the average number
of rows per "foo_id" in "qux" is about 9.
Slight variations in the data or the query make it use the index.
With "set enable_nestloop to off; set enable_mergejoin to off;" the plan
generated is better: it has a smaller cost for the final join, though the
costs for the outer relation increase, probably due to the "never
executed" path.

Playing with join_collapse_limit/from_collapse_limit does not make any difference.

I can observe this on both master and v10.11.

I haven't investigated any further yet, so for now I'm just asking
whether this is known behavior.
If not, I'll try to find out what's going on.

Best, Alex

Attachments:
pb-plan.txt (6K)
pb-test.sql (2K)
pb-dump.sql.bz2 (9M)

Re: planner weirdness: a join uses nestloop with checking condition when there are two subplan-or-hashed subqueries

Tom Lane
Alexey Bashtanov <[hidden email]> writes:
> Planner seems to make a weird choice of join algorithm -- O(M*N)
> nestloop -- under certain circumstances.

In the particular case you've got here, the estimated cost of the
"foo" scan is so large that it swamps everything else:

 Nested Loop Left Join  (cost=0.00..53543550.45 rows=1135 width=8) (actual time=714.969..1407.137 rows=1051 loops=1)
   ...
   ->  Seq Scan on foo  (cost=0.00..53045460.77 rows=149 width=4) (actual time=711.364..714.312 rows=21 loops=1)
   ...
   ->  Materialize  (cost=0.00..4100.92 rows=175728 width=8) (actual time=0.003..19.522 rows=175728 loops=21)
         ->  Seq Scan on qux  (cost=0.00..2535.28 rows=175728 width=8) (actual time=0.017..19.754 rows=175728 loops=1)

Yeah, an indexscan on qux would probably be a little cheaper,
but it could not make as much as 1% difference to the total cost,
because the "foo" scan is accounting for more than 99% already.
I don't recall the exact rules, but the planner stops worrying
about cost differences less than 1%, in order to save planning
time by cutting the number of cases considered.  So it's probably
just discarding the indexed-join alternative as not being
meaningfully better.
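The fuzzy comparison described here can be illustrated with a toy sketch. This is not PostgreSQL's actual code (the planner's fuzzy comparison lives in its path-comparison logic); the function name, the exact 1.01 factor, and the assumption that the indexed alternative would save roughly the Materialize cost are all illustrative:

```python
# Toy sketch of fuzzy path-cost comparison: two paths whose total costs
# differ by less than a small fuzz factor are treated as equally cheap,
# so the planner can discard one of them to cut the search space.

FUZZ_FACTOR = 1.01  # illustrative: treat costs within 1% as equal

def fuzzily_cheaper(cost_a: float, cost_b: float) -> bool:
    """True only if cost_a beats cost_b by more than the fuzz factor."""
    return cost_a * FUZZ_FACTOR < cost_b

# Costs from the plan above: the foo scan dominates the join total, so
# even shaving off roughly the Materialize cost cannot clear the 1% bar.
seqscan_join_total = 53543550.45
indexed_join_total = seqscan_join_total - 4100.92  # assumed saving

print(fuzzily_cheaper(indexed_join_total, seqscan_join_total))  # → False
```

With the "foo" scan contributing over 99% of the total, the two join alternatives are within the fuzz band of each other, so discarding the indexed one is consistent with this rule.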

What seems actually out of line with reality is the cost estimate
for the "foo" scan, which probably is down to the fact that
cost_qual_eval_walker is not very bright about what to do with
the AlternativeSubPlan constructs.  It looks like it's assuming
that the non-hashed alternatives will be chosen, which they aren't
(if they were, this estimate might not be so far out of line).
But we can't just switch it to make the other assumption, because
that would skew the results for other cases.
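To see why the choice of assumption matters so much, here is a back-of-the-envelope sketch with entirely invented numbers: a plain correlated subplan is re-executed once per outer row, while the hashed alternative builds its hash table once and then probes it cheaply, so charging the plain alternative's cost when the hashed one actually runs can overstate the cost by orders of magnitude.

```python
# Invented numbers illustrating the two AlternativeSubPlan strategies.

def plain_subplan_cost(outer_rows: int, per_exec_cost: float) -> float:
    # the subquery is re-executed for every outer row
    return outer_rows * per_exec_cost

def hashed_subplan_cost(outer_rows: int, build_cost: float,
                        probe_cost: float) -> float:
    # the hash table is built once, then probed once per outer row
    return build_cost + outer_rows * probe_cost

outer_rows = 100000                                      # invented
plain = plain_subplan_cost(outer_rows, 500.0)            # 50 million
hashed = hashed_subplan_cost(outer_rows, 2500.0, 0.01)   # 3500
print(plain, hashed)
```

If the walker charges the "plain" figure but the executor actually picks the hashed variant, the estimate is inflated by a factor of thousands, which is the shape of the discrepancy in the "foo" scan above.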

                        regards, tom lane



Re: planner weirdness: a join uses nestloop with checking condition when there are two subplan-or-hashed subqueries

Alexey Bashtanov
Hi Tom,

On 25/02/2020 20:34, Tom Lane wrote:
> cost_qual_eval_walker is not very bright about what to do with
> the AlternativeSubPlan constructs.  It looks like it's assuming
> that the non-hashed alternatives will be chosen, which they aren't
> (if they were, this estimate might not be so far out of line).
> But we can't just switch it to make the other assumption, because
> that would skew the results for other cases.

What do you think of making it take both alternatives into account?
That's approximating a piecewise-linear function with a linear one,
so it cannot be very precise, but I think it's better than the current estimate.
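The attached patch's filename suggests blending the two alternatives' costs as a geometric mean. A minimal sketch of that idea (an illustration of the concept, not the patch's actual code):

```python
import math

def combined_alternative_cost(cost_plain: float, cost_hashed: float) -> float:
    """Geometric mean of the two alternatives' costs: the estimate is then
    off by at most the square root of the ratio between them, instead of
    by the full gap when the wrong alternative is assumed."""
    return math.sqrt(cost_plain * cost_hashed)

# With a large gap between the alternatives, the blended estimate sits
# multiplicatively in the middle rather than at either extreme.
print(combined_alternative_cost(50_000_000.0, 3_500.0))
```

Whichever alternative the executor ends up choosing, the multiplicative error of this blend is bounded, whereas always assuming the non-hashed alternative can be wrong by the full ratio.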

Best, Alex

Attachment: alternatives-cost-as-geom-mean-v1.patch (3K)