executor relation handling

executor relation handling

Amit Langote-2
This is the continuation of the discussion at:

https://www.postgresql.org/message-id/7500.1531920772%40sss.pgh.pa.us

Actually, for more background on what I've written below, reading this
email in the same discussion would help:

https://www.postgresql.org/message-id/4114.1531674142@...


Attached find a series of patches, the first of which tries to implement
the main topic discussed in the above email, which is to eliminate
execution-time locking of relations in PlannedStmt.rtable.  One of the
other patches removes the plan node fields that are obsoleted by
eliminating execution-time locking of relations.  Those fields served no
purpose besides telling the executor which relations to lock, or more
precisely which relations to lock before initializing the plan tree so
that we don't end up upgrading lock strength due to the same relation
being both a source relation and a target relation.

When working on that, I noticed that the planner fails to remove
PlanRowMarks of relations that won't be scanned by a given plan, which
results in the executor redundantly locking relations that the planner
already deemed unnecessary to scan.  The locking would be gone with one of
the proposed patches, but having all those PlanRowMarks still imposes some
overhead during executor initialization.  For example, ExecInitLockRows or
ExecInitModifyTable calling ExecFindRowMark would end up going over
PlanRowMarks that won't ever be used, which grows especially worse with
many partitions.

Short description of each patch follows:

0001-Don-t-lock-range-table-relations-in-the-executor.patch

This replaces all instances of heap_open(relid, <not-NoLock>) in the
executor code with heap_open(relid, NoLock).  To verify that an
appropriate lock is already taken on a relation by the time we get into
the executor, this also installs an assert-build-only check that confirms
that lmgr indeed holds the lock.  To remember which lock was taken when
creating a given RTE_RELATION range table entry, this adds a lockmode
field to RangeTblEntry.  Finally, because we don't lock in the executor
and hence there are no concerns about lock strength upgrade hazards,
InitPlan no longer needs to initialize ResultRelInfos and ExecRowMarks;
that is instead done in the ExecInit* routines of the respective nodes
that need them.

(This doesn't touch index relations though, as they don't have a range
table entry.)

0002-Remove-useless-fields-from-planner-nodes.patch

This removes some fields from PlannedStmt whose only purpose currently is
to help InitPlan do certain things that it no longer does, as mentioned
above.  Also, some fields in Append, MergeAppend, and ModifyTable, whose
only purpose currently is to propagate partitioned table RT indexes so
that the executor could lock them, are removed.  Removing them also means
that the planner doesn't have to spend cycles initializing them.

0003-Prune-PlanRowMark-of-relations-that-are-pruned-from-.patch

This prevents PlanRowMarks corresponding to relations that won't be
scanned by a given plan from appearing in the rowMarks field of LockRows
or ModifyTable nodes.  This results in removing significant overhead from
the executor initialization of those nodes, especially for partitioned
tables with many partitions.

0004-Revise-executor-range-table-relation-opening-closing.patch

This adds two arrays to EState, both indexed by RT index: one for
RangeTblEntry pointers and another for Relation pointers.  The former
reduces the cost of fetching a RangeTblEntry by RT index.  The latter is
used by a newly introduced function ExecRangeTableRelation(), which
replaces heap_open for relations that are contained in the range table.
If a given RTE's relation is opened multiple times, only the first call of
ExecRangeTableRelation will go to the relcache.



Although improving the executor's performance is not the main goal of
these patches, the fact that we're getting rid of redundant processing
means there should be at least some speedup, especially with a large
number of relations in the range table, such as with partitioned tables
with many partitions.

* Benchmark script used:

$ cat /tmp/select-lt.sql
select * from lt where b = 999;

'lt' above is a list-partitioned table with 1000 partitions, with one
partition for each value in the range 1..1000.


$ for i in 1 2;
> do
> pgbench -n -Mprepared -T 60 -f /tmp/select-lt.sql
> done

master

tps = 768.172129 (excluding connections establishing)
tps = 764.180834 (excluding connections establishing)

patch 0001 (no locking in the executor)

tps = 775.060323 (excluding connections establishing)
tps = 778.772917 (excluding connections establishing)

patch 0002 (remove useless planner node fields)

tps = 782.165436 (excluding connections establishing)
tps = 759.417411 (excluding connections establishing)

patch 0003 (prune PlanRowMarks)

tps = 783.558539 (excluding connections establishing)
tps = 776.106055 (excluding connections establishing)

patch 0004 (executor range table Relation open)

tps = 778.924649 (excluding connections establishing)
tps = 769.093431 (excluding connections establishing)


Speedup is more pronounced with a benchmark that needs RowMarks, because
one of the patches (0003) removes overhead around handling them.

$ cat /tmp/select-lt-for-share.sql
select * from lt where b = 999 for share;

master

tps = 94.095985 (excluding connections establishing)
tps = 93.955702 (excluding connections establishing)

patch 0001 (no locking in the executor)

tps = 199.030555 (excluding connections establishing)
tps = 197.630424 (excluding connections establishing)

patch 0002 (remove useless planner node fields)

tps = 194.384994 (excluding connections establishing)
tps = 195.821362 (excluding connections establishing)

patch 0003 (prune PlanRowMarks)

tps = 712.544029 (excluding connections establishing)
tps = 717.540052 (excluding connections establishing)

patch 0004 (executor range table Relation open)

tps = 725.189715 (excluding connections establishing)
tps = 727.344683 (excluding connections establishing)


Will add this to next CF.

Thanks,
Amit

v1-0001-Don-t-lock-range-table-relations-in-the-executor.patch (41K)
v1-0002-Remove-useless-fields-from-planner-nodes.patch (48K)
v1-0003-Prune-PlanRowMark-of-relations-that-are-pruned-fr.patch (3K)
v1-0004-Revise-executor-range-table-relation-opening-clos.patch (21K)

Re: executor relation handling

Amit Langote-2
On 2018/08/16 17:22, Amit Langote wrote:
> 0004-Revise-executor-range-table-relation-opening-closing.patch
>
> This adds two arrays to EState indexed by RT indexes, one for
> RangeTblEntry's and another for Relation pointers.  The former reduces the
> cost of fetching RangeTblEntry by RT index.  The latter is used by a newly
> introduced function ExecRangeTableRelation(), which replaces heap_open for
> relations that are contained in the range table.  If a given RTE's
> relation is opened multiple times, only the first call of
> ExecRangeTableRelation will go to relcache.

David Rowley recently and independently [1] proposed one of the ideas
mentioned above (store RangeTblEntry's in an array in EState).  As I
mentioned in reply to his email, I think his implementation of the idea is
better than mine, so I've merged his patch into the above patch, except
one part: instead of removing the es_range_table list altogether, I
decided to keep it around so that ExecSerializePlan doesn't have to build
one all over again from the array like his patch did.

Updated patches attached; 0001-0003 are the same as v1.

Thanks,
Amit

[1] Make executor's Range Table an array instead of a List
https://postgr.es/m/CAKJS1f9EypD_=xG6ACFdF=1cBjz+Z9hiHHSd-RqLjor+QyA-nw@...

v2-0001-Don-t-lock-range-table-relations-in-the-executor.patch (41K)
v2-0002-Remove-useless-fields-from-planner-nodes.patch (48K)
v2-0003-Prune-PlanRowMark-of-relations-that-are-pruned-fr.patch (3K)
v2-0004-Revise-executor-range-table-relation-opening-clos.patch (31K)

Re: executor relation handling

David Rowley-3
On 4 September 2018 at 20:53, Amit Langote
<[hidden email]> wrote:
> Updated patches attached; 0001-0003 are same as v1.

I've looked at these. Here's my review so far:

0001:

1. The following does not seem to be true any longer:

+ /*
+ * If you change the conditions under which rel locks are acquired
+ * here, be sure to adjust ExecOpenScanRelation to match.
+ */

per:

@@ -652,28 +654,10 @@ ExecOpenScanRelation(EState *estate, Index
scanrelid, int eflags)
 {
  Relation rel;
  Oid reloid;
- LOCKMODE lockmode;

- /*
- * Determine the lock type we need.  First, scan to see if target relation
- * is a result relation.  If not, check if it's a FOR UPDATE/FOR SHARE
- * relation.  In either of those cases, we got the lock already.
- */
- lockmode = AccessShareLock;
- if (ExecRelationIsTargetRelation(estate, scanrelid))
- lockmode = NoLock;
- else
- {
- /* Keep this check in sync with InitPlan! */
- ExecRowMark *erm = ExecFindRowMark(estate, scanrelid, true);
-
- if (erm != NULL && erm->relation != NULL)
- lockmode = NoLock;
- }
-

2. Should addRangeTableEntryForRelation() initialize lockmode, or
maybe take it as a parameter?

3. Don't think there's a need to capitalise true and false in:

+ * Return TRUE if we acquired a new lock, FALSE if already held.

4. The comment probably should read "lock level to obtain, or 0 if no
lock is required" in:

+ int lockmode; /* lock taken on the relation or 0 */

The field should likely also be LOCKMODE, not int.

5. AcquireExecutorLocks() does not need the local variable named rt_index

0002:

6. I don't think "rootRelation" is a good name for this field. I think
"root" is being confused with "target". Nothing is to say the target
is the same as the root.

+ Index rootRelation; /* RT index of root partitioned table */

Perhaps "partitionedTarget" is a better name?

7. Should use "else" instead of "else if" in:

+ /* Top-level Plan must be LockRows or ModifyTable */
+ Assert(IsA(stmt->planTree, LockRows) ||
+    IsA(stmt->planTree, ModifyTable));
+ if (IsA(stmt->planTree, LockRows))
+ rowMarks = ((LockRows *) stmt->planTree)->rowMarks;
+ else if (IsA(stmt->planTree, ModifyTable))
+ rowMarks = ((ModifyTable *) stmt->planTree)->rowMarks;

or you'll likely get a compiler warning on non-Assert enabled builds.

0003:

8. The following code seems repeated enough to warrant a static function:

+ rowMarks = NIL;
+ foreach(lc, root->rowMarks)
+ {
+ PlanRowMark *rc = lfirst(lc);
+
+ if (root->simple_rel_array[rc->rti] != NULL &&
+ IS_DUMMY_REL(root->simple_rel_array[rc->rti]))
+ continue;
+
+ rowMarks = lappend(rowMarks, rc);
+ }

Also, why not reverse the condition and do the lappend inside the if?
Save two lines.

9. The following code appears in copy.c, which is pretty much the same
as the code in execMain.c:

estate->es_range_table_size = list_length(cstate->range_table);
estate->es_range_table_array = (RangeTblEntry **)
palloc(sizeof(RangeTblEntry *) *
   estate->es_range_table_size);
/* Populate the range table array */
i = 0;
foreach(lc, cstate->range_table)
estate->es_range_table_array[i++] = lfirst_node(RangeTblEntry, lc);

Would it not be better to invent a function with the signature:

void
setup_range_table_array(EState *estate, List *rangeTable)

and use it in both locations?

10. In create_estate_for_relation() I don't think you should remove
the line that sets es_range_table.

@@ -199,7 +199,8 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel)
  rte->rtekind = RTE_RELATION;
  rte->relid = RelationGetRelid(rel->localrel);
  rte->relkind = rel->localrel->rd_rel->relkind;
- estate->es_range_table = list_make1(rte);

If you're keeping es_range_table then I think it needs to always be
set properly to help prevent future bugs in that area.

11. ExecRangeTableRelation should look more like:

ExecRangeTableRelation(EState *estate, Index rti)
{
  Relation rel = estate->es_relations[rti - 1];

  if (rel != NULL)
    RelationIncrementReferenceCount(rel);
  else
  {
    RangeTblEntry *rte = exec_rt_fetch(rti, estate->es_range_table_array);

   /*
    * No need to lock the relation, because upstream code
    * must hold the lock already.
    */
    rel = estate->es_relations[rti - 1] = heap_open(rte->relid, NoLock);
  }

  return rel;
}

12. I think this should read: /* Fill in RTEs. es_relations will be
populated later. */

+ /* Fill the RTEs, Relations array will be filled later. */

13. I also notice that you're still cleaning up Relations with
heap_close or relation_close. Did you consider not doing that and just
having a function that's run at the end of execution which closes all
non-NULL es_relations? This way you'd not need to perform
RelationIncrementReferenceCount inside ExecRangeTableRelation.

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: executor relation handling

Amit Langote-2
Thank you for reviewing.

On 2018/09/10 13:36, David Rowley wrote:

> On 4 September 2018 at 20:53, Amit Langote
> <[hidden email]> wrote:
>> Updated patches attached; 0001-0003 are same as v1.
>
> I've looked at these. Here's my review so far:
>
> 0001:
>
> 1. The following does not seem to be true any longer:
>
> + /*
> + * If you change the conditions under which rel locks are acquired
> + * here, be sure to adjust ExecOpenScanRelation to match.
> + */
>
> per:
>
> @@ -652,28 +654,10 @@ ExecOpenScanRelation(EState *estate, Index
> scanrelid, int eflags)
>  {
>   Relation rel;
>   Oid reloid;
> - LOCKMODE lockmode;
>
> - /*
> - * Determine the lock type we need.  First, scan to see if target relation
> - * is a result relation.  If not, check if it's a FOR UPDATE/FOR SHARE
> - * relation.  In either of those cases, we got the lock already.
> - */
> - lockmode = AccessShareLock;
> - if (ExecRelationIsTargetRelation(estate, scanrelid))
> - lockmode = NoLock;
> - else
> - {
> - /* Keep this check in sync with InitPlan! */
> - ExecRowMark *erm = ExecFindRowMark(estate, scanrelid, true);
> -
> - if (erm != NULL && erm->relation != NULL)
> - lockmode = NoLock;
> - }
> -
Yeah, removed that comment in ExecBuildRowMark.

> 2. Should addRangeTableEntryForRelation() initialize lockmode, or
> maybe take it as a parameter?

Hmm, that sounds like a good idea.  Looking at the various places it's
called from though, it's not clear in many instances which lock was taken
on the relation that's passed to it, because the lock itself seems to be
taken several stack frames removed from the call site.

That said, at least for the cases that we care about, that is, the cases
in which the RTE being built will be passed to the executor by way of
being in a PlannedStmt, it's clear which lock is taken.  So, we can add
the lockmode parameter to addRangeTableEntryForRelation as you suggest and
pass the actual lockmode value only in the cases we care about, mentioning
in the function's header comment that callers may pass NoLock arbitrarily
if it's clear the RTE won't be passed to the executor.

I've modified patch that way.  Thoughts?

> 3. Don't think there's a need to capitalise true and false in:
>
> + * Return TRUE if we acquired a new lock, FALSE if already held.

OK, fixed.

> 4. The comment probably should read "lock level to obtain, or 0 if no
> lock is required" in:
>
> + int lockmode; /* lock taken on the relation or 0 */

OK.

> The field should likely also be LOCKMODE, not int.

OK.

> 5. AcquireExecutorLocks() does not need the local variable named rt_index

Good catch, removed.

> 0002:
>
> 6. I don't think "rootRelation" is a good name for this field. I think
> "root" is being confused with "target". Nothing is it say the target
> is the same as the root.
>
> + Index rootRelation; /* RT index of root partitioned table */
>
> Perhaps "partitionedTarget" is a better name?

I realized that we don't need a new Index field here.  nominalRelation
serves the purpose that rootRelation is meant for, so it seems silly to
have two fields with the same value.  Instead, let's have a bool
partitionedTarget which is set to true if the target (whose RT index is
nominalRelation) is a partitioned table.

> 7. Should use "else" instead of "else if" in:
>
> + /* Top-level Plan must be LockRows or ModifyTable */
> + Assert(IsA(stmt->planTree, LockRows) ||
> +    IsA(stmt->planTree, ModifyTable));
> + if (IsA(stmt->planTree, LockRows))
> + rowMarks = ((LockRows *) stmt->planTree)->rowMarks;
> + else if (IsA(stmt->planTree, ModifyTable))
> + rowMarks = ((ModifyTable *) stmt->planTree)->rowMarks;
>
> or you'll likely get a compiler warning on non-Assert enabled builds.
Yep, fixed.

> 0003:
>
> 8. The following code seems repeated enough to warrant a static function:
>
> + rowMarks = NIL;
> + foreach(lc, root->rowMarks)
> + {
> + PlanRowMark *rc = lfirst(lc);
> +
> + if (root->simple_rel_array[rc->rti] != NULL &&
> + IS_DUMMY_REL(root->simple_rel_array[rc->rti]))
> + continue;
> +
> + rowMarks = lappend(rowMarks, rc);
> + }
>
> Also, why not reverse the condition and do the lappend inside the if?
> Save two lines.
OK, made this change and added a static function called
get_unpruned_rowmarks(PlannerInfo *root).

>
> 9. The following code appears in copy.c, which is pretty much the same
> as the code in execMain.c:
>
> estate->es_range_table_size = list_length(cstate->range_table);
> estate->es_range_table_array = (RangeTblEntry **)
> palloc(sizeof(RangeTblEntry *) *
>    estate->es_range_table_size);
> /* Populate the range table array */
> i = 0;
> foreach(lc, cstate->range_table)
> estate->es_range_table_array[i++] = lfirst_node(RangeTblEntry, lc);
>
> Would it not be better to invent a function with the signature:
>
> void
> setup_range_table_array(EState *estate, List *rangeTable)
>
> and use it in both locations?
Agreed, but I named it ExecInitRangeTable.

> 10. In create_estate_for_relation() I don't think you should remove
> the line that sets es_range_table.
>
> @@ -199,7 +199,8 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel)
>   rte->rtekind = RTE_RELATION;
>   rte->relid = RelationGetRelid(rel->localrel);
>   rte->relkind = rel->localrel->rd_rel->relkind;
> - estate->es_range_table = list_make1(rte);
>
> If you're keeping es_range_table then I think it needs to always be
> set properly to help prevent future bugs in that area.
My bad, fixed.

> 11. ExecRangeTableRelation should look more like:
>
> ExecRangeTableRelation(EState *estate, Index rti)
> {
>   Relation rel = estate->es_relations[rti - 1];
>
>   if (rel != NULL)
>     RelationIncrementReferenceCount(rel);
>   else
>   {
>     RangeTblEntry *rte = exec_rt_fetch(rti, estate->es_range_table_array);
>
>    /*
>     * No need to lock the relation, because upstream code
>     * must hold the lock already.
>     */
>     rel = estate->es_relations[rti - 1] = heap_open(rte->relid, NoLock);
>   }
>
>   return rel;
> }
Much better, done.

> 12. I think this should read: /* Fill in RTEs. es_relations will be
> populated later. */
>
> + /* Fill the RTEs, Relations array will be filled later. */

I've rewritten the comment.

> 13. I also notice that you're still cleaning up Relations with
> heap_close or relation_close. Did you consider not doing that and just
> having a function that's run at the end of execution which closes all
> non-NULL es_relations? This way you'd not need to perform
> RelationIncrementReferenceCount inside ExecRangeTableRelation.

Agreed, done.  I'd be slightly hesitant to remove ExecCloseScanRelation,
ExecDestroyPartitionPruneState et al. if they weren't just wrappers around
heap_close.

Please find attached revised patches.

Thanks,
Amit

v3-0001-Don-t-lock-range-table-relations-in-the-executor.patch (56K)
v3-0002-Remove-useless-fields-from-planner-nodes.patch (50K)
v3-0003-Prune-PlanRowMark-of-relations-that-are-pruned-fr.patch (3K)
v3-0004-Revise-executor-range-table-relation-opening-clos.patch (47K)

Re: executor relation handling

Jesper Pedersen
Hi Amit,

On 9/12/18 1:23 AM, Amit Langote wrote:
> Please find attached revised patches.
>

After applying 0004 I'm getting a crash in 'eval-plan-qual' during
check-world using

export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" &&
./configure --enable-dtrace --with-openssl --with-gssapi --with-libxml
--with-llvm --enable-debug --enable-depend --enable-tap-tests
--enable-cassert

Confirmed by CFBot in [1].

[1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/427530296

Best regards,
  Jesper


Re: executor relation handling

Amit Langote
On Wed, Sep 12, 2018 at 9:23 PM, Jesper Pedersen
<[hidden email]> wrote:

> Hi Amit,
>
> On 9/12/18 1:23 AM, Amit Langote wrote:
>>
>> Please find attached revised patches.
>>
>
> After applying 0004 I'm getting a crash in 'eval-plan-qual' during
> check-world using
>
> export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" &&
> ./configure --enable-dtrace --with-openssl --with-gssapi --with-libxml
> --with-llvm --enable-debug --enable-depend --enable-tap-tests
> --enable-cassert
>
> Confirmed by CFBot in [1].
>
> [1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/427530296

Thanks Jesper.  Will look into it first thing tomorrow morning.

Thanks,
Amit


Re: executor relation handling

Amit Langote-2
On 2018/09/13 0:23, Amit Langote wrote:

> On Wed, Sep 12, 2018 at 9:23 PM, Jesper Pedersen
> <[hidden email]> wrote:
>> Hi Amit,
>>
>> On 9/12/18 1:23 AM, Amit Langote wrote:
>>>
>>> Please find attached revised patches.
>>>
>>
>> After applying 0004 I'm getting a crash in 'eval-plan-qual' during
>> check-world using
>>
>> export CFLAGS="-DCOPY_PARSE_PLAN_TREES -O0 -fno-omit-frame-pointer" &&
>> ./configure --enable-dtrace --with-openssl --with-gssapi --with-libxml
>> --with-llvm --enable-debug --enable-depend --enable-tap-tests
>> --enable-cassert
>>
>> Confirmed by CFBot in [1].
>>
>> [1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/427530296
>
> Thanks Jesper.  Will look into it first thing tomorrow morning.
Attached updated patches.

Besides the issue that caused the eval-plan-qual isolation test to crash,
I also spotted and fixed an oversight in the 0002 patch which would lead
to EState.es_output_cid being set to the wrong value, causing an
unexpected error during tuple locking.

Thanks,
Amit

v4-0001-Don-t-lock-range-table-relations-in-the-executor.patch (56K)
v4-0002-Remove-useless-fields-from-planner-nodes.patch (51K)
v4-0003-Prune-PlanRowMark-of-relations-that-are-pruned-fr.patch (3K)
v4-0004-Revise-executor-range-table-relation-opening-clos.patch (50K)

Re: executor relation handling

Jesper Pedersen
Hi Amit,

On 9/13/18 12:58 AM, Amit Langote wrote:
> Attached updated patches.
>
> Beside the issue that caused eval-plan-qual isolation test to crash, I
> also spotted and fixed an oversight in the 0002 patch which would lead to
> EState.es_output_cid being set to wrong value and causing unexpected error
> during tuple locking as result of that.
>

Thanks for the update.

However, the subscription TAP test
(src/test/subscription/t/001_rep_changes.pl) is still failing.

CFBot has the same log

  https://travis-ci.org/postgresql-cfbot/postgresql/builds/427999969

as locally.

Best regards,
  Jesper


Re: executor relation handling

Amit Langote-2
Thanks again, Jesper.

On 2018/09/13 20:27, Jesper Pedersen wrote:

> Hi Amit,
>
> On 9/13/18 12:58 AM, Amit Langote wrote:
>> Attached updated patches.
>>
>> Beside the issue that caused eval-plan-qual isolation test to crash, I
>> also spotted and fixed an oversight in the 0002 patch which would lead to
>> EState.es_output_cid being set to wrong value and causing unexpected error
>> during tuple locking as result of that.
>>
>
> Thanks for the update.
>
> However, the subscription TAP test
> (src/test/subscription/t/001_rep_changes.pl) is still failing.
>
> CFBot has the same log
>
>  https://travis-ci.org/postgresql-cfbot/postgresql/builds/427999969
>
> as locally.
My bad.  I missed that logical replication code depends on the affected
executor code.

Fixed patches attached.

Thanks,
Amit

v5-0001-Don-t-lock-range-table-relations-in-the-executor.patch (56K)
v5-0002-Remove-useless-fields-from-planner-nodes.patch (51K)
v5-0003-Prune-PlanRowMark-of-relations-that-are-pruned-fr.patch (3K)
v5-0004-Revise-executor-range-table-relation-opening-clos.patch (51K)