function calls optimization


function calls optimization

Andrzej Barszcz-2
Hi

I have almost finished a patch optimizing non-volatile function calls.

select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100;  needs 3 calls of f() for each tuple;
after applying the patch, only 1.

Any pros and cons?
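
For illustration, a minimal way to observe this behaviour (the table, data, and function body below are hypothetical stand-ins for t and f()): a STABLE plpgsql function that raises a NOTICE per invocation makes the per-tuple call count visible.

    -- Hypothetical stand-ins for the t and f() used in the query above.
    CREATE TABLE t (n int);
    INSERT INTO t VALUES (20);

    CREATE FUNCTION f(x int) RETURNS int
    LANGUAGE plpgsql STABLE AS $$
    BEGIN
        RAISE NOTICE 'f(%) called', x;   -- make every invocation visible
        RETURN x * 2;
    END;
    $$;

    -- Without the proposed optimization, a qualifying row produces three
    -- NOTICEs: two from the WHERE clause and one from the select list.
    SELECT f(t.n) FROM t WHERE f(t.n) > 10 AND f(t.n) < 100;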



Re: function calls optimization

Andres Freund
Hi,

On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <[hidden email]> wrote:

>Hi
>
>I have almost finished a patch optimizing non-volatile function calls.
>
>select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100;  needs 3 calls
>of f() for each tuple; after applying the patch, only 1.
>
>Any pros and cons?

Depends on the actual way of implementing this proposal. Think we need more details than what you've given here.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Tom Lane-2
Andres Freund <[hidden email]> writes:
> On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <[hidden email]> wrote:
>> I have almost finished a patch optimizing non-volatile function calls.
>>
>> select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100;  needs 3 calls
>> of f() for each tuple; after applying the patch, only 1.
>>
>> Any pros and cons?

> Depends on the actual way of implementing this proposal. Think we need more details than what you've given here.

We've typically supposed that the cost of searching for duplicate
subexpressions would outweigh the benefits of sometimes finding them.

                        regards, tom lane



Re: function calls optimization

Andres Freund
Hi,

On October 31, 2019 7:45:26 AM PDT, Tom Lane <[hidden email]> wrote:

>Andres Freund <[hidden email]> writes:
>> On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <[hidden email]> wrote:
>>> I have almost finished a patch optimizing non-volatile function calls.
>>>
>>> select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100;  needs 3 calls
>>> of f() for each tuple; after applying the patch, only 1.
>>>
>>> Any pros and cons?
>
>> Depends on the actual way of implementing this proposal. Think we
>> need more details than what you've given here.
>
>We've typically supposed that the cost of searching for duplicate
>subexpressions would outweigh the benefits of sometimes finding them.

Based on profiles I've seen I'm not sure that's the right choice. Both for when the calls are expensive (say postgis stuff), and for when a lot of rows are processed.

I think one part of doing this in a realistic manner is an efficient search for redundant expressions. The other, also non-trivial, is how to even represent references to the results of expressions in other parts of the expression tree / other expressions.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Andres Freund
Hi

On October 31, 2019 7:53:20 AM PDT, Andres Freund <[hidden email]> wrote:

>On October 31, 2019 7:45:26 AM PDT, Tom Lane <[hidden email]> wrote:
>>We've typically supposed that the cost of searching for duplicate
>>subexpressions would outweigh the benefits of sometimes finding them.
>
>Based on profiles I've seen I'm not sure that's the right choice. Both
>for when the calls are expensive (say postgis stuff), and for when a
>lot of rows are processed.
>
>I think one part of doing this in a realistic manner is an efficient
>search for redundant expressions.

One way to improve the situation - which is applicable in a lot of situations, e.g. setrefs.c etc - would be to compute hashes for (sub-) expression trees. Which makes it a lot easier to bail out early when trees are not the same, and also easier to build an efficient way to find redundant expressions. There's some complexity in invalidating such hashes once computed, and when to first compute them, obviously.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Andreas Karlsson
In reply to this post by Tom Lane-2
On 10/31/19 3:45 PM, Tom Lane wrote:
> Andres Freund <[hidden email]> writes:
>> On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <[hidden email]> wrote:
>>> Any pros and cons?
>
>> Depends on the actual way of implementing this proposal. Think we need more details than what you've given here.
>
> We've typically supposed that the cost of searching for duplicate
> subexpressions would outweigh the benefits of sometimes finding them.

That is an important concern, but given that SQL does not make it
convenient to re-use partial results of calculations, I think such
queries are quite common in real-world workloads.

So if we can make it cheap enough, I think it is a worthwhile
optimization.

Andreas
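
For illustration, the workaround available today is indeed clumsy: the duplicated expression has to be pushed into a subquery, and (assuming the hypothetical t and f() from the first message) an OFFSET 0 is typically needed to keep the planner from flattening the subquery and duplicating the call again.

    -- Evaluate f() once per row, then filter and project the stored result.
    SELECT fn
    FROM (SELECT f(n) AS fn FROM t OFFSET 0) AS s   -- OFFSET 0 blocks pull-up
    WHERE fn > 10 AND fn < 100;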



Re: function calls optimization

Tom Lane-2
In reply to this post by Andres Freund
Andres Freund <[hidden email]> writes:
> On October 31, 2019 7:45:26 AM PDT, Tom Lane <[hidden email]> wrote:
>> We've typically supposed that the cost of searching for duplicate
>> subexpressions would outweigh the benefits of sometimes finding them.

> Based on profiles I've seen I'm not sure that's the right choice. Both for when the calls are expensive (say postgis stuff), and for when a lot of rows are processed.

Yeah, if your mental model of a function call is some remarkably expensive
PostGIS geometry manipulation, it's easy to justify doing a lot of work
to try to detect duplicates.  But most functions in most queries are
more like int4pl or btint8cmp, and it's going to be extremely remarkable
if you can make back the planner costs of checking for duplicate usages
of those.

Possibly this could be finessed by only trying to find duplicates of
functions that have high cost estimates.  Not sure how high is high
enough.

> I think one part of doing this in a realistic manner is an efficient
> search for redundant expressions. The other, also non-trivial, is how to
> even represent references to the results of expressions in other parts of the expression tree / other expressions.

Yup, both of those would be critical to do right.

                        regards, tom lane
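
For context on the cost-threshold idea: per-function cost estimates already exist and are user-adjustable, so a duplicate search keyed on cost would be comparing against values like the ones below (the function name and numbers are illustrative only).

    -- Inspect the planner's cost estimate; the default is 100 units of
    -- cpu_operator_cost for non-C-language functions.
    SELECT proname, procost FROM pg_proc WHERE proname = 'f';

    -- Mark a function as expensive enough to be worth deduplicating;
    -- 10000 is an arbitrary illustrative threshold.
    ALTER FUNCTION f(int) COST 10000;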



Re: function calls optimization

Andres Freund
Hi,

On October 31, 2019 8:06:50 AM PDT, Tom Lane <[hidden email]> wrote:

>Andres Freund <[hidden email]> writes:
>> On October 31, 2019 7:45:26 AM PDT, Tom Lane <[hidden email]> wrote:
>>> We've typically supposed that the cost of searching for duplicate
>>> subexpressions would outweigh the benefits of sometimes finding them.
>
>> Based on profiles I've seen I'm not sure that's the right choice. Both
>> for when the calls are expensive (say postgis stuff), and for when a
>> lot of rows are processed.
>
>Yeah, if your mental model of a function call is some remarkably expensive
>PostGIS geometry manipulation, it's easy to justify doing a lot of work
>to try to detect duplicates.  But most functions in most queries are
>more like int4pl or btint8cmp, and it's going to be extremely remarkable
>if you can make back the planner costs of checking for duplicate usages
>of those.

Well, if it's an expression containing those individually cheap calls on a seqscan on a large table below an aggregate, it'd likely still be a win. But we don't, to my knowledge, really have a good way to model optimizations like this that should only be done if they are either expensive or have a high loop count.

I guess one ugly way to deal with this would be to eliminate redundancies very late, e.g. during setrefs (where a better data structure for matching expressions would be good anyway), as we already know all the costs.

But ideally we would want to be able to take such savings into account earlier, when considering different paths. I suspect that it might be a good enough vehicle to tackle the rest of the work, however, at least initially.

We could also "just" do such elimination during expression "compilation", but it'd be better to not have to do something as complicated as this for every execution of a prepared statement.
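
To make the prepared-statement point concrete: even when the plan itself is cached and reused, expression initialization still happens at every execution, so elimination done at that stage would be repeated each time (the query and parameter values below are illustrative).

    PREPARE q (int, int) AS
        SELECT f(t.n) FROM t WHERE f(t.n) > $1 AND f(t.n) < $2;

    EXECUTE q(10, 100);
    EXECUTE q(10, 100);  -- same cached plan, but expression setup runs again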


>> I think one part of doing this in a realistic manner is an efficient
>> search for redundant expressions. The other, also non-trivial, is how to
>> even represent references to the results of expressions in other parts
>> of the expression tree / other expressions.
>
>Yup, both of those would be critical to do right.

Potentially related note: for nodes like seqscan, combining the qual and projection processing into one expression seems to be a noticeable win (at least when taking care to emit two different sets of deform expression steps). Wonder if something like that would take care of avoiding the need for cross-expression value passing in enough places.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Tom Lane-2
Andres Freund <[hidden email]> writes:
> Potentially related note: for nodes like seqscan, combining the qual and projection processing into one expression seems to be a noticeable win (at least when taking care to emit two different sets of deform expression steps).

There's just one problem: that violates SQL semantics, and not in
a small way.

        select 1/x from tab where x <> 0

                        regards, tom lane
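
For concreteness, a tiny reproduction of the hazard Tom is pointing at, with a hypothetical table: if the projection were evaluated for rows the qual rejects, the query would raise division_by_zero instead of returning a single row.

    CREATE TABLE tab (x int);
    INSERT INTO tab VALUES (0), (2);

    -- Must return one row; 1/x may only be computed where x <> 0 holds.
    SELECT 1/x FROM tab WHERE x <> 0;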



Re: function calls optimization

Andres Freund
Hi,

On October 31, 2019 8:45:26 AM PDT, Tom Lane <[hidden email]> wrote:

>Andres Freund <[hidden email]> writes:
>> Potentially related note: for nodes like seqscan, combining the qual
>> and projection processing into one expression seems to be a noticeable
>> win (at least when taking care to emit two different sets of deform
>> expression steps).
>
>There's just one problem: that violates SQL semantics, and not in
>a small way.
>
> select 1/x from tab where x <> 0

The expression would obviously have to return early, before projecting, when the qual doesn't match. I'm basically just thinking of first executing the steps for the qual, and in the success case executing the projection steps before returning the qual's positive result.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Andrzej Barszcz-2
In reply to this post by Tom Lane-2
x <> 0 is evaluated first, and 1/x only when x <> 0, no?

On Thu, 31 Oct 2019 at 16:45, Tom Lane <[hidden email]> wrote:
Andres Freund <[hidden email]> writes:
> Potentially related note: for nodes like seqscan, combining the qual and projection processing into one expression seems to be a noticeable win (at least when taking care to emit two different sets of deform expression steps).

There's just one problem: that violates SQL semantics, and not in
a small way.

        select 1/x from tab where x <> 0

                        regards, tom lane

Re: function calls optimization

Andres Freund
Hi,

On October 31, 2019 8:51:11 AM PDT, Andrzej Barszcz <[hidden email]> wrote:

>x <> 0 is evaluated first, and 1/x only when x <> 0, no?
>
>On Thu, 31 Oct 2019 at 16:45, Tom Lane <[hidden email]> wrote:
>
>> Andres Freund <[hidden email]> writes:
>> > Potentially related note: for nodes like seqscan, combining the qual and
>> > projection processing into one expression seems to be a noticeable win (at
>> > least when taking care to emit two different sets of deform expression steps).
>>
>> There's just one problem: that violates SQL semantics, and not in
>> a small way.
>>
>>         select 1/x from tab where x <> 0

On postgres lists the policy is to reply below the quoted bit, and to trim the quote appropriately.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: function calls optimization

Jim Finnerty
In reply to this post by Tom Lane-2
I recently implemented something closely related to this: combining and
migrating expensive STABLE user-defined functions into the FROM clause, where
the function is evaluated as a lateral join (or "cross apply").  I'm
defining "expensive" as 50 times more expensive than the default function
cost.

For functions that return multiple outputs and where the query uses (...).*
notation, this will, for example, consolidate all of the calls in the *
expansion into a single call.  It also looks in the WHERE clause and HAVING
clause, and combines those references, too.  Currently it requires the
function to be in a top-level AND condition, if it appears in a predicate.

I think I can get permission to contribute it back.  If there's
interest in it, let me know.



--
Jim Finnerty, AWS, Amazon Aurora PostgreSQL
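
A sketch of the kind of rewrite described above, using a hypothetical composite-returning STABLE function (the names, body, and cost are illustrative only, not the actual implementation):

    -- A multi-output function: (...).* expansion calls it once per column.
    CREATE TYPE bounds AS (lo int, hi int);
    CREATE FUNCTION summarize(x int) RETURNS bounds
    LANGUAGE sql STABLE COST 10000
    AS $$ SELECT x - 1, x + 1 $$;

    -- Before: the * expansion duplicates the call (once for lo, once for hi).
    SELECT (summarize(t.n)).* FROM t;

    -- After the described transformation: the function is moved to the FROM
    -- clause as a lateral join and evaluated once per row.
    SELECT s.* FROM t CROSS JOIN LATERAL summarize(t.n) AS s;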

Re: function calls optimization

Andrzej Barszcz-2
In reply to this post by Andres Freund
Hi

I need advice.
ResetExprContext(econtext) is defined as MemoryContextReset((econtext)->ecxt_per_tuple_memory).
I can register a callback in the MemoryContext, but it is always cleared on every call to MemoryContextReset().
How can I reset some fields of the ExprContext (living in per_query_memory) when ResetExprContext() is called?

On Thu, 31 Oct 2019 at 16:52, Andres Freund <[hidden email]> wrote:
Hi,

On October 31, 2019 8:51:11 AM PDT, Andrzej Barszcz <[hidden email]> wrote:
>x <> 0 is evaluated first, and 1/x only when x <> 0, no?
>
>On Thu, 31 Oct 2019 at 16:45, Tom Lane <[hidden email]> wrote:
>
>> Andres Freund <[hidden email]> writes:
>> > Potentially related note: for nodes like seqscan, combining the qual and
>> > projection processing into one expression seems to be a noticeable win (at
>> > least when taking care to emit two different sets of deform expression steps).
>>
>> There's just one problem: that violates SQL semantics, and not in
>> a small way.
>>
>>         select 1/x from tab where x <> 0

On postgres lists the policy is to reply below the quoted bit, and to trim the quote appropriately.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: function calls optimization

Andy Fan
In reply to this post by Tom Lane-2


On Thu, Oct 31, 2019 at 11:07 PM Tom Lane <[hidden email]> wrote:


> Possibly this could be finessed by only trying to find duplicates of
> functions that have high cost estimates.  Not sure how high is high
> enough.
 
Can we just add a flag on pg_proc to show whether the cost is high or not? If users are not happy with that, they can change it by updating the value. Given that most function calls are cheap, this may help with the search for duplicate expressions.

Re: function calls optimization

Andrzej Barszcz-2
I think your first thought was good.
How high? I think it's a matter of convention; certainly more than the default of 100.

 

On Thu, 21 Nov 2019 at 02:05, Andy Fan <[hidden email]> wrote:


> On Thu, Oct 31, 2019 at 11:07 PM Tom Lane <[hidden email]> wrote:
>
>> Possibly this could be finessed by only trying to find duplicates of
>> functions that have high cost estimates.  Not sure how high is high
>> enough.
>
> Can we just add a flag on pg_proc to show whether the cost is high or not? If users are not happy with that, they can change it by updating the value. Given that most function calls are cheap, this may help with the search for duplicate expressions.