Table as argument in postgres function


RAJIN RAJ K
I'm trying to convert SAP HANA procedures to PostgreSQL, and I'm not able to handle the scenario below in Postgres 11.

Scenario: I want to pass a table (multiple rows) to a function and use it inside as a temp table.

Sample Code:

create table tbl_id (id int, name character varying(10));
-- insert a few rows into tbl_id

create function myfun (in tt_table <how to declare a table type here?>)
returns setof <a few columns>
as $$
begin
  select id, name
    into lv_var1, lv_var2
    from tt_table                          --> Want to use the input table
   where id = <some value>;

  return query
  select *
    from tbl2
   where id in (select id from tt_table);  --> Want to use the input table
end;
$$;
I don't want to go with dynamic SQL. Is there any other way to declare a table as an input argument and use it like a normal temp table inside the function body?
--> Function invocation issue:
select * from myfun(tbl_id);
How do I invoke a function by passing a table as an argument?
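One workaround sketch that avoids dynamic SQL entirely (table and column names below are taken from the snippet above; tbl2's payload column is a made-up illustration): pass the rows as an array of the table's composite type and unnest it inside the function.

    create table tbl_id (id int, name character varying(10));
    create table tbl2 (id int, payload text);

    create or replace function myfun(tt_rows tbl_id[])
    returns setof tbl2
    language plpgsql
    as $$
    begin
      return query
      select t2.*
        from tbl2 t2
       where t2.id in (select t.id from unnest(tt_rows) as t);
    end;
    $$;

    -- Invocation: aggregate the source table into an array first.
    select * from myfun((select array_agg(t) from tbl_id t));

Every table has an implicit composite type of the same name, so tbl_id[] is a valid argument type; the cost is materializing the rows into an array at call time.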

Re: Table as argument in postgres function

Corey Huinker

You can pass the table name as text, or the table's object id as a regclass value.

Inside the procedure you then have to use dynamic SQL - an EXECUTE statement. Generally, you can never use a variable as a table or column name.

Dynamic SQL is a different mechanism - beware of SQL injection.
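A minimal sketch of that approach (the function and table names are illustrative): take a regclass argument and build the query with format(), letting regclass handle name resolution and quoting.

    create or replace function count_rows(p_table regclass)
    returns bigint
    language plpgsql
    as $$
    declare
      v_count bigint;
    begin
      -- %s is acceptable here: regclass output is already safely quoted
      execute format('select count(*) from %s', p_table) into v_count;
      return v_count;
    end;
    $$;

    select count_rows('tbl_id');  -- text is resolved to an OID via the regclass input

Because the argument is a regclass rather than raw text, a non-existent or malformed name fails at the cast, which removes the usual injection vector for identifiers.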

On this note, Snowflake has the ability to parameterize object names (see: https://docs.snowflake.net/manuals/sql-reference/identifier-literal.html )

So you can do things like
    SELECT col_a, col_b FROM identifier('a_table_name')
or as a bind variable
    SELECT col_a, col_b FROM identifier($1)

This is their way of avoiding SQL injection attacks in some circumstances. Their implementation of it is a bit uneven, but it has proven useful for my work.

I can see how this would prevent planning of a prepared statement when a table name is a parameter, but the request comes up often enough, and the benefits of avoiding SQL injection attacks are significant enough, that maybe we should try to enable it for one-off queries. I don't think we necessarily need an identifier(string) function; a 'schema.table'::regclass cast would be more our style.

Is there anything preventing us from having the planner resolve object names from strings?
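To make the contrast concrete, a sketch of what current PostgreSQL accepts and rejects (q2 below is the hypothetical form under discussion, not valid syntax in any current release):

    -- Works today: the table name is fixed at prepare time.
    PREPARE q1 AS SELECT id, name FROM tbl_id;

    -- Hypothetical; an identifier cannot be a bind parameter today:
    -- PREPARE q2(regclass) AS SELECT id, name FROM $1;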
Reply | Threaded
Open this post in threaded view
|

Re: Table as argument in postgres function

Pavel Stehule


On Mon, May 20, 2019 at 7:56 AM, Corey Huinker <[hidden email]> wrote:

Is there anything preventing us from having the planner resolve object names from strings?

The basic problem is the fact that when you use the PREPARE/EXECUTE protocol, you don't have the parameter values at planning time.

Regards

Pavel
Reply | Threaded
Open this post in threaded view
|

Re: Table as argument in postgres function

Corey Huinker

Is there anything preventing us from having the planner resolve object names from strings?

The basic problem is the fact that when you use the PREPARE/EXECUTE protocol, you don't have the parameter values at planning time.

I agree that it defeats PREPARE as it is currently implemented with PQprepare(), and it would never be meaningful to have a query plan that hasn't finalized which objects are involved.

But could it be made to work with PQexecParams(), where the parameter values are already provided?

Could we make a version of PQprepare() that takes an extra array of paramValues for object names that must be supplied at prepare-time?


Re: Table as argument in postgres function

Pavel Stehule


On Tue, May 21, 2019 at 9:04 AM, Corey Huinker <[hidden email]> wrote:

But could it be made to work with PQexecParams(), where the parameter values are already provided?

Could we make a version of PQprepare() that takes an extra array of paramValues for object names that must be supplied at prepare-time?

I think it is possible, but there is a question of how much this design would uglify the source code. The query-parameter-passing code is maybe too complex already.

A second question: I am not sure the described feature is really something different. ANSI SQL:2016 defines polymorphic table functions (PTFs), and this looks like one of those. For my part, I would rather see an implementation of PTFs than added complexity in working with parameters.
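For reference, the SQL:2016 polymorphic-table-function style of invocation looks roughly like this (illustrative only; PostgreSQL does not implement PTFs, and my_ptf is a made-up name):

    SELECT *
      FROM TABLE(my_ptf(TABLE(tbl_id)));

The table argument is passed as a first-class value, and the function's result row type is resolved from it, which is essentially what the original poster was asking for.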