Hi, hackers.
One of our customers complains about slow execution of PL/pgSQL functions compared with Oracle, and wants PL/pgSQL functions to be compiled (most likely just-in-time compilation). Certainly the interpreter adds quite a large overhead compared with native code (~10 times), but most PL/pgSQL functions just run some SQL queries and iterate through the results.

I cannot believe that JIT can significantly speed up such functions, so I decided to make a simple experiment: I created a large enough table and implemented functions which calculate the norm of one column in different languages.

Results are frustrating (at least for me):

PL/pgSQL:   29044.361 ms
C/SPI:      22785.597 ms
C/coreAPI:   2873.072 ms
PL/Lua:     33445.520 ms
SQL:         7397.639 ms (with parallel execution disabled)

That the difference between PL/pgSQL and a function implemented in C using SPI is not large was expected by me. But why is it more than three times slower than the corresponding SQL query? The answer seems to be in the third result: the same function implemented in C without SPI (using table_beginscan/heap_getnext). It looks like SPI adds quite significant overhead, and as almost all PL languages use SPI, they all suffer from it.

Below is a profile of the SPI function execution:

 9.47%  postgres  libc-2.23.so  [.] __memcpy_avx_unaligned
 9.19%  postgres  spitest.so    [.] spi_norm
 8.09%  postgres  postgres      [.] AllocSetAlloc
 4.50%  postgres  postgres      [.] tts_buffer_heap_getsomeattrs
 4.36%  postgres  postgres      [.] heap_form_tuple
 3.41%  postgres  postgres      [.] standard_ExecutorRun
 3.35%  postgres  postgres      [.] ExecScan
 3.31%  postgres  postgres      [.] palloc0
 2.41%  postgres  postgres      [.] heapgettup_pagemode
 2.40%  postgres  postgres      [.] AllocSetReset
 2.25%  postgres  postgres      [.] PopActiveSnapshot
 2.17%  postgres  postgres      [.] PortalRunFetch
 2.16%  postgres  postgres      [.] HeapTupleSatisfiesVisibility
 1.97%  postgres  libc-2.23.so  [.] __sigsetjmp
 1.90%  postgres  postgres      [.] _SPI_cursor_operation
 1.87%  postgres  postgres      [.] AllocSetFree
 1.86%  postgres  postgres      [.] PortalRunSelect
 1.79%  postgres  postgres      [.] heap_getnextslot
 1.75%  postgres  postgres      [.] heap_fill_tuple
 1.70%  postgres  postgres      [.] spi_dest_startup
 1.50%  postgres  postgres      [.] spi_printtup
 1.49%  postgres  postgres      [.] nocachegetattr
 1.45%  postgres  postgres      [.] MemoryContextDelete
 1.44%  postgres  postgres      [.] ExecJustAssignScanVar
 1.38%  postgres  postgres      [.] CreateTupleDescCopy
 1.33%  postgres  postgres      [.] SPI_getbinval
 1.30%  postgres  postgres      [.] PushActiveSnapshot
 1.30%  postgres  postgres      [.] AllocSetContextCreateInternal
 1.22%  postgres  postgres      [.] heap_compute_data_size
 1.22%  postgres  postgres      [.] MemoryContextCreate
 1.14%  postgres  postgres      [.] heapgetpage
 1.05%  postgres  postgres      [.] palloc
 1.03%  postgres  postgres      [.] SeqNext

As you can see, most of the time is spent allocating and copying memory.

I wonder whether somebody has tried to address this problem, and whether there are plans for improving the speed of PL/pgSQL and other stored-procedure languages?

I attached to this mail the sources of the spi_test extension with my experiments. Please build it and run the norm.sql file.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
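The spi_test sources and norm.sql mentioned above were attachments and are not reproduced in the archive. As a rough sketch of the kind of benchmark being described (the table, column, and function names below are assumptions for illustration, not the actual attachment), the PL/pgSQL variant might look like:

    -- Assumed shape of the benchmark: sum of squares ("norm") over one float8 column.
    create table t(x float8);
    insert into t select random() from generate_series(1, 10000000);

    create or replace function plpgsql_norm() returns float8 as $$
    declare
        result float8 := 0;
        r record;
    begin
        for r in select x from t loop
            result := result + r.x * r.x;
        end loop;
        return sqrt(result);
    end;
    $$ language plpgsql;

    -- The "SQL" line in the results would then correspond to something like:
    -- select sqrt(sum(x * x)) from t;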
From: Konstantin Knizhnik [mailto:[hidden email]]
> PL/pgSQL:   29044.361 ms
> C/SPI:      22785.597 ms
>
> That the difference between PL/pgSQL and a function implemented in C
> using SPI is not large was expected by me.

This PL/pgSQL overhead is not so significant compared with the three-times difference, but it makes me wish for some feature like Oracle's ALTER PROCEDURE ... COMPILE, which compiles the PL/SQL logic to native code. I've seen a few dozen percent speedup from it.

Regards
Takayuki Tsunakawa
Hello.
At Wed, 21 Aug 2019 19:41:08 +0300, Konstantin Knizhnik <[hidden email]> wrote in <[hidden email]>
> Results are frustrating (at least for me):
>
> PL/pgSQL:   29044.361 ms
> C/SPI:      22785.597 ms
> C/coreAPI:   2873.072 ms
> PL/Lua:     33445.520 ms
> SQL:         7397.639 ms (with parallel execution disabled)
> ...
> It looks like SPI adds quite significant overhead, and as almost all PL
> languages use SPI, they all suffer from it.

Looking at the attached spitest.c, it seems that the overhead comes from the cursor operation, not from SPI itself. As far as spitest.c goes, the cursor is unnecessary: "SQL" and C/coreAPI scan over the result of a *single* query. If that's correct, why don't you use SPI_execute() and then scan over SPI_tuptable?

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
On 22.08.2019 5:40, Kyotaro Horiguchi wrote:
> Looking at the attached spitest.c, it seems that the overhead comes from
> the cursor operation, not from SPI itself. As far as spitest.c goes, the
> cursor is unnecessary: "SQL" and C/coreAPI scan over the result of a
> *single* query. If that's correct, why don't you use SPI_execute() and
> then scan over SPI_tuptable?

The scanned table is very large and doesn't fit in memory. This is why I am using SPI cursors. Please let me know if there is a more efficient way to traverse a large table using SPI.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On 22.08.2019 3:27, Tsunakawa, Takayuki wrote:
> This PL/pgSQL overhead is not so significant compared with the three-times
> difference, but it makes me wish for some feature like Oracle's
> ALTER PROCEDURE ... COMPILE, which compiles the PL/SQL logic to native
> code. I've seen a few dozen percent speedup from it.

Actually, my implementation of the C/SPI version is not optimal: it is better to fetch several records at once:

    while (true)
    {
        SPI_cursor_fetch(portal, true, 100);
        if (SPI_processed)
        {
            for (i = 0; i < SPI_processed; i++)
            {
                HeapTuple spi_tuple = SPI_tuptable->vals[i];
                Datum val = SPI_getbinval(spi_tuple, SPI_tuptable->tupdesc, 1, &is_null);
                double x = DatumGetFloat8(val);
                result += x*x;
                SPI_freetuple(spi_tuple);
            }
            SPI_freetuptable(SPI_tuptable);
        }
        else
            break;
    }

This version shows a result of 9405.694 ms, which is comparable with the result of the SQL query. Unfortunately (or fortunately) PL/pgSQL already uses prefetch. If it is disabled (when iterating through an explicitly created cursor), query execution time almost doubles (46552.935 ms), so the PL/pgSQL to C/SPI ratio is still more than three. The updated results are the following:
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Some more information...
First of all, I found that marking the PL/pgSQL function as immutable significantly increases the speed of its execution: 19808 ms vs. 27594 ms. It happens because exec_eval_simple_expr takes a snapshot if the function is volatile (the default). I wonder whether the PL/pgSQL compiler could detect that the evaluated expression itself is actually immutable, so that there is no need to take a snapshot for each invocation of the function. Also, I have tried yet another PL language - JavaScript - which is now the new outsider, despite the fact that the v8 JIT compiler is very good.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
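For reference, the immutable-vs-volatile comparison above amounts to re-running the benchmark after marking the function immutable, e.g. with the hypothetical plpgsql_norm() sketched earlier (not the actual benchmark function):

    ALTER FUNCTION plpgsql_norm() IMMUTABLE;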
On Thu, 22 Aug 2019 at 17:51, Konstantin Knizhnik <[hidden email]> wrote:
I have a plan to do some work in this direction. A snapshot is not necessary for almost all built-in functions. If the expression calls only built-in functions, then it can probably be evaluated without a snapshot and without any work with the plan cache.

Pavel
On 22.08.2019 18:56, Pavel Stehule wrote:
I wonder if the following simple patch is correct?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
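The patch itself was an attachment and is not reproduced in the archive. A rough sketch of the idea under discussion — skip the per-evaluation snapshot in plpgsql's exec_eval_simple_expr() when the simple expression contains no mutable functions — could look like the fragment below; the local flag and the exact placement are assumptions for illustration, not the actual patch:

    /* Sketch only: guard the snapshot with a mutability check on the expression. */
    bool    need_snapshot = !estate->readonly_func &&
                            contain_mutable_functions((Node *) expr->expr_simple_expr);

    if (need_snapshot)
    {
        CommandCounterIncrement();
        PushActiveSnapshot(GetTransactionSnapshot());
    }

    /* ... evaluate the simple expression ... */

    if (need_snapshot)
        PopActiveSnapshot();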
On Fri, 23 Aug 2019 at 11:05, Konstantin Knizhnik <[hidden email]> wrote:
You cannot trust user-defined functions, so the immutable flag may not be correct. Only built-in functions are 100% correct.

    CREATE OR REPLACE FUNCTION foo()
    RETURNS int AS $$
        SELECT count(*) FROM pg_class;
    $$ LANGUAGE sql IMMUTABLE;

works. But your patch is good enough for benchmarking.

Pavel
On 23.08.2019 12:10, Pavel Stehule wrote:
But such a definition of the function contradicts the IMMUTABLE contract, doesn't it? If the creator of the function classifies it incorrectly, then using such a function can cause incorrect behavior. For example, if a function is marked as "parallel safe" but actually is not, then using it in a parallel plan may produce incorrect results. But that is hardly a reason to disable parallel plans for all user-defined functions, is it?

Also, nothing terrible will happen in any case. If an expression calls a function which is marked immutable but actually is not, then we can just get a stale result for the expression. Right now, if the calling function (the one containing the evaluated expression) is marked as non-volatile, then the snapshot is also not taken. So if such a function's expression calls the foo() function declared above, the results will also be incorrect. So I do not see a principal difference here and do not understand why we should distrust the user (the function creator) only in this case.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On Fri, 23 Aug 2019 at 13:21, Konstantin Knizhnik <[hidden email]> wrote:
In reality it is not an IMMUTABLE function. On the other hand, there are a lot of applications that depend on this behavior. It is a well-known trick for reducing estimation errors related to JOINs: when an immutable function has constant parameters, it is evaluated at planning time. So sometimes it was necessary to use SELECT ... FROM tab WHERE foreign_key = immutable_function('constant parameter') instead of a JOIN. It is ugly, but it works perfectly. I think that until we have multi-table statistics, this behavior should remain available in Postgres. Of course, such a function should not be used for functional indexes.
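As an illustration of the trick Pavel describes (all table, column, and function names here are hypothetical, not taken from the thread):

    -- A "lying" IMMUTABLE wrapper: its result can actually change, but because it
    -- is declared immutable and called with a constant, the planner folds it to a
    -- constant at plan time and can estimate the filter well.
    CREATE OR REPLACE FUNCTION lookup_category_id(p_title text) RETURNS int AS $$
        SELECT id FROM categories WHERE title = p_title;
    $$ LANGUAGE sql IMMUTABLE;

    -- Instead of a join whose selectivity is hard to estimate ...
    --   SELECT * FROM items i JOIN categories c ON i.category_id = c.id
    --   WHERE c.title = 'books';
    -- ... filter on the pre-folded constant:
    SELECT * FROM items WHERE category_id = lookup_category_id('books');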
On 23.08.2019 14:42, Pavel Stehule wrote:
What about the following version of the patch?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
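This version of the patch is likewise an attachment that is not reproduced here. A sketch of the kind of check Pavel suggested — trusting only built-in functions, whose OIDs lie below FirstNormalObjectId — might look like the walker below; it is an illustration only, and a complete version would rather use check_functions_in_node() to cover every node type that can invoke a function:

    #include "postgres.h"
    #include "access/transam.h"     /* FirstNormalObjectId */
    #include "nodes/nodeFuncs.h"    /* expression_tree_walker */
    #include "nodes/primnodes.h"    /* FuncExpr, OpExpr */

    /*
     * Sketch: return true as soon as a user-defined function is found.
     * Built-in objects have OIDs below FirstNormalObjectId, so their
     * volatility labels can be trusted.
     */
    static bool
    contains_user_function_walker(Node *node, void *context)
    {
        if (node == NULL)
            return false;
        if (IsA(node, FuncExpr) &&
            ((FuncExpr *) node)->funcid >= FirstNormalObjectId)
            return true;
        if (IsA(node, OpExpr) &&
            ((OpExpr *) node)->opfuncid >= FirstNormalObjectId)
            return true;
        return expression_tree_walker(node, contains_user_function_walker, context);
    }

    /* Usage: skip the snapshot only if the simple expression uses built-in functions only. */
    /* bool only_builtin = !contains_user_function_walker((Node *) expr->expr_simple_expr, NULL); */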
On Fri, Aug 23, 2019 at 11:10:28AM +0200, Pavel Stehule wrote:
> You cannot trust user-defined functions, so the immutable flag may not be
> correct. Only built-in functions are 100% correct.
>
> CREATE OR REPLACE FUNCTION foo()
> RETURNS int AS $$
>     SELECT count(*) FROM pg_class;
> $$ LANGUAGE sql IMMUTABLE;
>
> works.

No, it's lying to the RDBMS, so it's pilot error. The problem of determining from the function itself whether it is in fact immutable is, in general, equivalent to the Halting Problem, so no, we can't figure it out. We do need to trust our users not to lie to us, and we do not need to protect them from the consequences when they do.

Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On Sat, 24 Aug 2019 at 18:01, David Fetter <[hidden email]> wrote:
> No, it's lying to the RDBMS, so it's pilot error. The problem of
> determining from the function itself whether it is in fact immutable
> is, in general, equivalent to the Halting Problem, so no, we can't
> figure it out. We do need to trust our users not to lie to us, and we
> do not need to protect them from the consequences when they do.

I have no problem with fixing this behavior once there is an alternative. I can imagine a new special flag that can be used for STABLE functions, which enforces one-shot plans and can be optimized similarly to how IMMUTABLE functions are now - using the result at planning time.

The users lie because they must - there is no alternative. There is no other solution, and estimation errors related to joins are a fundamental issue.

Regards

Pavel
On Sat, Aug 24, 2019 at 12:01 PM David Fetter <[hidden email]> wrote:
> No, it's lying to the RDBMS, so it's pilot error. The problem of
> determining from the function itself whether it is in fact immutable
> is, in general, equivalent to the Halting Problem, so no, we can't
> figure it out. We do need to trust our users not to lie to us, and we
> do not need to protect them from the consequences when they do.

Depends. I don't mind if mislabeling a function leads to "wrong" query results, but I don't think it's OK for it to, say, crash the server.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 24.08.2019 19:13, Pavel Stehule wrote:
Pavel,

I wonder whether I should put my patch (with the fix which performs this optimization only for built-in functions) into the commitfest, or whether you prefer to do it yourself in some other way and propose your own solution?

Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On Fri, 13 Sep 2019 at 9:09, Konstantin Knizhnik <[hidden email]> wrote:
I think your patch is good enough for the commitfest. It doesn't remove all the overhead - I think there is still a lot of overhead related to the plan cache - but it is a step in the right direction. Probably the final target for these expressions is a cached JIT, but nobody knows when that will be. I'll not have time for my own experiments before October.
On 13.09.2019 10:16, Pavel Stehule wrote:
This is a profile of execution of the PL/pgSQL function with my patch:

 5.39%  postgres  plpgsql.so  [.] exec_assign_value
 5.10%  postgres  postgres    [.] ExecInterpExpr
 4.70%  postgres  postgres    [.] tts_buffer_heap_getsomeattrs
 4.56%  postgres  plpgsql.so  [.] exec_move_row_from_fields
 3.87%  postgres  postgres    [.] ExecScan
 3.74%  postgres  plpgsql.so  [.] exec_eval_expr
 3.64%  postgres  postgres    [.] heap_form_tuple
 3.13%  postgres  postgres    [.] heap_fill_tuple
 3.07%  postgres  postgres    [.] heapgettup_pagemode
 2.95%  postgres  postgres    [.] heap_deform_tuple
 2.92%  postgres  plpgsql.so  [.] plpgsql_param_eval_var
 2.64%  postgres  postgres    [.] HeapTupleSatisfiesVisibility
 2.61%  postgres  postgres    [.] AcquirePlannerLocks
 2.58%  postgres  postgres    [.] AcquireExecutorLocks
 2.43%  postgres  postgres    [.] GetCachedPlan
 2.26%  postgres  plpgsql.so  [.] exec_stmt
 2.23%  postgres  plpgsql.so  [.] exec_cast_value
 1.89%  postgres  postgres    [.] AllocSetAlloc
 1.75%  postgres  postgres    [.] palloc0
 1.73%  postgres  plpgsql.so  [.] exec_move_row
 1.73%  postgres  postgres    [.] OverrideSearchPathMatchesCurrent
 1.69%  postgres  plpgsql.so  [.] assign_simple_var
 1.63%  postgres  postgres    [.] heap_getnextslot
 1.60%  postgres  postgres    [.] SPI_plan_get_cached_plan
 1.55%  postgres  postgres    [.] heapgetpage
 1.47%  postgres  postgres    [.] heap_compute_data_size
 1.46%  postgres  postgres    [.] spi_printtup
 1.43%  postgres  postgres    [.] float8mul
 1.37%  postgres  postgres    [.] RevalidateCachedQuery
 1.36%  postgres  postgres    [.] standard_ExecutorRun
 1.35%  postgres  postgres    [.] recomputeNamespacePath
 1.28%  postgres  postgres    [.] ExecStoreBufferHeapTuple
 1.25%  postgres  postgres    [.] MemoryContextReset
 1.22%  postgres  plpgsql.so  [.] exec_eval_cleanup.isra.18
 1.20%  postgres  plpgsql.so  [.] exec_assign_expr
 1.05%  postgres  postgres    [.] SeqNext
 1.04%  postgres  postgres    [.] ResourceArrayRemove
 1.00%  postgres  postgres    [.] ScanQueryForLocks

Based on this profile, it seems to me that the plan cache overhead is relatively small: 2.43% + 1.60% + 1.37% < 6%. But on the other hand, ExecInterpExpr itself also takes only about 5%.

I do not completely understand why JIT is not currently used for evaluation of SPI expressions (why we call ExecInterpExpr and do not try to compile these expressions even when JIT is enabled). But even if we do that and improve the speed of expression evaluation ten or more times, it looks like the effect on total query execution time will be negligible (5%). Most of the time is spent in pl_exec code, heap traversal, and unpacking and copying tuple data. It looks like this cannot easily be optimized and would require a serious rewrite of the PL/pgSQL machinery.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Hi

I am testing a very simple function:

    create or replace function f1(int) returns int as $$
    declare i int = 0;
    begin
      while i < $1
      loop
        i = i + 1;
      end loop;
      return i;
    end
    $$ language plpgsql immutable;

Profile when the function is marked as immutable:

 8.65%  postgres    [.] ExecInterpExpr
 8.59%  postgres    [.] AcquireExecutorLocks
 6.95%  postgres    [.] OverrideSearchPathMatchesCurrent
 5.72%  plpgsql.so  [.] plpgsql_param_eval_var
 5.15%  postgres    [.] AcquirePlannerLocks
 4.54%  postgres    [.] RevalidateCachedQuery
 4.52%  postgres    [.] GetCachedPlan
 3.82%  postgres    [.] ResourceArrayRemove
 2.87%  postgres    [.] SPI_plan_get_cached_plan
 2.80%  plpgsql.so  [.] exec_eval_expr
 2.70%  plpgsql.so  [.] exec_assign_value
 2.55%  plpgsql.so  [.] exec_stmt
 2.53%  postgres    [.] recomputeNamespacePath
 2.39%  plpgsql.so  [.] exec_cast_value
 2.19%  postgres    [.] int4pl
 2.13%  postgres    [.] int4lt
 1.98%  postgres    [.] CheckCachedPlan

volatile:

 7.21%  postgres    [.] GetSnapshotData
 6.92%  plpgsql.so  [.] exec_eval_simple_expr
 5.79%  postgres    [.] AcquireExecutorLocks
 5.57%  postgres    [.] ExecInterpExpr
 4.12%  postgres    [.] LWLockRelease
 3.68%  postgres    [.] OverrideSearchPathMatchesCurrent
 3.64%  postgres    [.] PopActiveSnapshot
 3.36%  plpgsql.so  [.] plpgsql_param_eval_var
 3.31%  postgres    [.] LWLockAttemptLock
 3.13%  postgres    [.] AllocSetAlloc
 2.91%  postgres    [.] GetCachedPlan
 2.79%  postgres    [.] MemoryContextAlloc
 2.76%  postgres    [.] AcquirePlannerLocks
 2.70%  postgres    [.] ResourceArrayRemove
 2.45%  postgres    [.] PushActiveSnapshot
 2.44%  postgres    [.] RevalidateCachedQuery
 2.29%  postgres    [.] SPI_plan_get_cached_plan
 2.18%  postgres    [.] CopySnapshot
 1.95%  postgres    [.] AllocSetFree
 1.81%  postgres    [.] LWLockAcquire
 1.71%  plpgsql.so  [.] exec_assign_value
 1.61%  plpgsql.so  [.] exec_stmt
 1.59%  plpgsql.so  [.] exec_eval_expr
 1.48%  postgres    [.] int4pl
 1.48%  postgres    [.] CheckCachedPlan
 1.40%  plpgsql.so  [.] exec_cast_value
 1.39%  postgres    [.] int4lt
 1.38%  postgres    [.] recomputeNamespacePath
 1.25%  plpgsql.so  [.] exec_eval_cleanup
 1.08%  postgres    [.] ScanQueryForLocks
 1.01%  plpgsql.so  [.] exec_eval_boolean
 1.00%  postgres    [.] pfree

For the tested function, almost all CPU time should be spent in the int4pl and int4lt functions - but together they account for only about 4%. I think almost all of

 8.59%  postgres    [.] AcquireExecutorLocks
 6.95%  postgres    [.] OverrideSearchPathMatchesCurrent
 5.72%  plpgsql.so  [.] plpgsql_param_eval_var
 5.15%  postgres    [.] AcquirePlannerLocks
 4.54%  postgres    [.] RevalidateCachedQuery
 4.52%  postgres    [.] GetCachedPlan
 3.82%  postgres    [.] ResourceArrayRemove
 2.87%  postgres    [.] SPI_plan_get_cached_plan
 2.53%  postgres    [.] recomputeNamespacePath

can be eliminated if we know that we only need to call built-in immutable V1 functions. My example is an extreme case - as soon as you use any embedded SQL, the profile changes significantly. But in some cases there can be a significant speedup for expression-only functions (as in PostGIS).

Regards

Pavel
Hi

On Fri, 23 Aug 2019 at 16:32, Konstantin Knizhnik <[hidden email]> wrote:
I am sending a review of this small patch.

This small patch reduces the overhead of using built-in immutable functions in volatile functions with a simple trick: it starts a snapshot only when necessary. It decreases runtime by about 25% on this small example:

    do $$
    declare i int;
    begin
      i := 0;
      while i < 10000000
      loop
        i := i + 1;
      end loop;
    end;
    $$;

If there are more expressions, the speedup can be more interesting. If there are other bottlenecks, the speedup will be less. 25% is not bad, so we want this feature.

I believe a similar method can be used more aggressively with a more significant performance benefit, but this is low-hanging fruit and that is no reason to wait for the future.

This patch doesn't introduce any new feature, so new tests and new documentation are not necessary.

The patch is readable and well formatted; only the comments were too long, which I fixed. All tests passed. I fixed a few warnings, and I reformatted the function expr_needs_snapshot a little to use if instead of case, which is more usual in these cases.

I think this code can be marked as ready for committer.

Regards

Pavel