doc review for v14

doc review for v14

Justin Pryzby
As I did the last 2 years, I reviewed docs for v14...

This year I've started early, since it takes more than a little effort and it's
not much fun to argue the change in each individual hunk.

--
Justin Pryzby
System Administrator
Telsasoft
+1-952-707-8581

0001-pgindent-typos.not-a-pxtch (1K) Download Attachment
0001-typos-in-master.patch (3K) Download Attachment
0002-producement-fcec6caafa2346b6c9d3ad5065e417733bd63cd9.patch (1K) Download Attachment
0003-cannot.patch (3K) Download Attachment
0004-Spaces-after-function-arguments.patch (2K) Download Attachment
0005-Fix-partially-updated-comment.patch (1K) Download Attachment
0006-review-docs-for-pg12dev-broken-commas.patch (1K) Download Attachment
0007-pg_restore-must-be-specified-and-list.patch (1K) Download Attachment
0008-pg_dump-fix-pre-existing-docs-comments.patch (1K) Download Attachment
0009-Fix-malformed-comment.patch (1K) Download Attachment
0010-Doc-review-for-min_dynamic_shared_memory-84b1c63ad.patch (1K) Download Attachment
0011-Doc-review-for-pg_stat_replication_slots-986816750.patch (4K) Download Attachment
0012-Doc-review-for-pg_stat_wal-8d9a93596.patch (933 bytes) Download Attachment
0013-Doc-review-for-logical-decoding-stream-methods-45fdc.patch (1K) Download Attachment
0014-Doc-review-for-amcheck-access-.-is-performed-866e24d.patch (1K) Download Attachment
0015-Doc-review-for-prepared-statements-4a36eab79a193700b.patch (1K) Download Attachment
0016-Doc-review-for-WAL-counters-amount-of-bytes.patch (3K) Download Attachment
0017-Doc-review-for-multiranges-6df7a9698.patch (2K) Download Attachment

Re: doc review for v14

Michael Paquier-2
On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:
> As I did the last 2 years, I reviewed docs for v14...

Thanks for gathering all that!

> This year I've started early, since it takes more than a little effort and it's
> not much fun to argue the change in each individual hunk.

0001-pgindent-typos.not-a-patch touches pg_bsd_indent.

>   /*
> - * XmlTable returns table - set of composite values. The error context, is
> - * used for producement more values, between two calls, there can be
> - * created and used another libxml2 error context. It is libxml2 global
> - * value, so it should be refreshed any time before any libxml2 usage,
> - * that is finished by returning some value.
> + * XmlTable returns a table-set of composite values. The error context is
> + * used for providing more detail. Between two calls, other libxml2
> + * error contexts might have been created and used ; since they're libxml2
> + * global values, they should be refreshed each time before any libxml2 usage
> + * that finishes by returning some value.
>   */
That's indeed incorrect, but I am not completely sure if what you have
here is correct either.  I'll try to study this code a bit more first,
though I have said that once in the past.  :p

> --- a/src/bin/pg_dump/pg_restore.c
> +++ b/src/bin/pg_dump/pg_restore.c
> @@ -305,7 +305,7 @@ main(int argc, char **argv)
>   /* Complain if neither -f nor -d was specified (except if dumping TOC) */
>   if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
>   {
> - pg_log_error("one of -d/--dbname and -f/--file must be specified");
> + pg_log_error("one of -d/--dbname, -f/--file or -l/--list must be specified");
>   exit_nicely(1);
>   }
You have forgotten to update the TAP test pg_dump/t/001_basic.pl.
The message does not seem completely incorrect to me either.  Hmm.
Restricting the set of options further is something to consider, though
it could be annoying.  I have discarded this one for now.

>          Specifies the amount of memory that should be allocated at server
> -        startup time for use by parallel queries.  When this memory region is
> +        startup for use by parallel queries.  When this memory region is
>          insufficient or exhausted by concurrent queries, new parallel queries
>          try to allocate extra shared memory temporarily from the operating
>          system using the method configured with
>          <varname>dynamic_shared_memory_type</varname>, which may be slower due
>          to memory management overheads.  Memory that is allocated at startup
> -        time with <varname>min_dynamic_shared_memory</varname> is affected by
> +        with <varname>min_dynamic_shared_memory</varname> is affected by
>          the <varname>huge_pages</varname> setting on operating systems where
>          that is supported, and may be more likely to benefit from larger pages
>          on operating systems where that is managed automatically.
The current formulation is not that confusing, but I agree that this
is an improvement.  Thomas, you are behind this one.  What do you
think?

I have applied most of it on HEAD, except 0011 and the things noted
above.  Thanks again.
--
Michael


Re: doc review for v14

Justin Pryzby
On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> I have applied most of it on HEAD, except 0011 and the things noted
> above.  Thanks again.

Thank you.

I see that I accidentally included ZSTD_COMPRESSION in pg_backup_archiver.h
while cherry-picking from the branch where I first fixed this.  Sorry :(

> 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.

I'm hoping that someone will apply it there, but I realize that access to its
repository is tightly controlled :)

On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> Restricting the set of options further is something to consider, though
> it could be annoying.  I have discarded this one for now.

Even though its -d is unused, I guess that since rejecting it wouldn't serve any
significant purpose, we shouldn't make pg_restore -l -d fail for no reason.

I think a couple of these should be backpatched.
doc/src/sgml/ref/pg_dump.sgml
doc/src/sgml/sources.sgml
doc/src/sgml/cube.sgml?
doc/src/sgml/func.sgml?

--
Justin

0001-typos-in-master.patch (975 bytes) Download Attachment
0002-producement-fcec6caafa2346b6c9d3ad5065e417733bd63cd9.patch (1K) Download Attachment
0003-cannot.patch (1K) Download Attachment
0004-Doc-review-for-min_dynamic_shared_memory-84b1c63ad.patch (1K) Download Attachment
0005-Doc-review-for-pg_stat_replication_slots-986816750.patch (4K) Download Attachment
0006-pg_restore-must-be-specified-and-list.patch (1K) Download Attachment
0007-Remove-ZSTD_COMPRESSION-accidentially-included-at-90.patch (875 bytes) Download Attachment

Re: doc review for v14

Magnus Hagander-2


On Sun, Dec 27, 2020 at 9:26 PM Justin Pryzby <[hidden email]> wrote:
> On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> > 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.
>
> I'm hoping that someone will apply it there, but I realize that access to its
> repository is tightly controlled :)

Not as much "tightly controlled" as "nobody's really bothered to grant any permissions".

I've applied the patch, thanks! While at it, I fixed the indentation of the "target" row in the patch; I think you didn't take the fix all the way :)

You may also want to submit those fixes upstream to FreeBSD? The typos seem to be present at https://github.com/freebsd/freebsd/tree/master/usr.bin/indent as well. (If so, please include the updated version that I applied, so we don't diverge on that.)

--

Re: doc review for v14

Michael Paquier-2
On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:
> Not as much "tightly controlled" as "nobody's really bothered to grant any
> permissions".

Magnus, do I have access to that?  This is the second time I am
running into this issue, but I don't really know if I should
act on it or not :)
--
Michael


Re: doc review for v14

Thomas Munro-5
In reply to this post by Michael Paquier-2
On Thu, Dec 24, 2020 at 9:12 PM Michael Paquier <[hidden email]> wrote:

> On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:
> >          Specifies the amount of memory that should be allocated at server
> > -        startup time for use by parallel queries.  When this memory region is
> > +        startup for use by parallel queries.  When this memory region is
> >          insufficient or exhausted by concurrent queries, new parallel queries
> >          try to allocate extra shared memory temporarily from the operating
> >          system using the method configured with
> >          <varname>dynamic_shared_memory_type</varname>, which may be slower due
> >          to memory management overheads.  Memory that is allocated at startup
> > -        time with <varname>min_dynamic_shared_memory</varname> is affected by
> > +        with <varname>min_dynamic_shared_memory</varname> is affected by
> >          the <varname>huge_pages</varname> setting on operating systems where
> >          that is supported, and may be more likely to benefit from larger pages
> >          on operating systems where that is managed automatically.
>
> The current formulation is not that confusing, but I agree that this
> is an improvement.  Thomas, you are behind this one.  What do you
> think?

LGTM.



Re: doc review for v14

Michael Paquier-2
On Tue, Dec 29, 2020 at 01:59:58PM +1300, Thomas Munro wrote:
> LGTM.

Thanks, I have done this one then.
--
Michael


Re: doc review for v14

Michael Paquier-2
In reply to this post by Justin Pryzby
On Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:
> I think a couple of these should be backpatched.
> doc/src/sgml/ref/pg_dump.sgml

This part can go down to 9.5.

> doc/src/sgml/sources.sgml

Yes, I have made an extra effort on those fixes where needed.  On top
of that, I have included catalogs.sgml, pgstatstatements.sgml,
explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.

> doc/src/sgml/cube.sgml?
> doc/src/sgml/func.sgml?

These two are just some beautification of the function format, so I
have left them out.
--
Michael


Re: doc review for v14

Magnus Hagander-2
In reply to this post by Michael Paquier-2


On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <[hidden email]> wrote:
> On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:
> > Not as much "tightly controlled" as "nobody's really bothered to grant any
> > permissions".
>
> Magnus, do I have access to that?  This is the second time I am
> running into this issue, but I don't really know if I should
> act on it or not :)

No, at this point it's just Tom (who has all the commits) and me (who set it up, and now has one commit). It's all manually handled.

--

Re: doc review for v14

Tom Lane-2
Magnus Hagander <[hidden email]> writes:
> On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <[hidden email]> wrote:
>> Magnus, do I have access to that?  This is the second time I am
>> running into this issue, but I don't really know if I should
>> act on it or not :)

> No, at this point it's just Tom (who has all the commits) and me (who set
> it up, and now has one commit). It's all manually handled.

FTR, I have no objection to Michael (or any other PG committer) having
write access to that repo.  I think so far it's a matter of nobody's
bothered because there's so little need.

                        regards, tom lane



Re: doc review for v14

Michael Paquier-2
In reply to this post by Michael Paquier-2
On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:
> Yes, I have made an extra effort on those fixes where needed.  On top
> of that, I have included catalogs.sgml, pgstatstatements.sgml,
> explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.

Justin, I got to look at the libxml2 part, and finished by rewording
the comment block as follows:
+    * XmlTable returns a table-set of composite values.  This error context
+    * is used for providing more details, and needs to be reset between two
+    * internal calls of libxml2 as different error contexts might have been
+    * created or used.

What do you think?
--
Michael


Re: doc review for v14

Justin Pryzby
On Sun, Jan 03, 2021 at 03:10:54PM +0900, Michael Paquier wrote:

> On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:
> > Yes, I have made an extra effort on those fixes where needed.  On top
> > of that, I have included catalogs.sgml, pgstatstatements.sgml,
> > explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.
>
> Justin, I got to look at the libxml2 part, and finished by rewording
> the comment block as follows:
> +    * XmlTable returns a table-set of composite values.  This error context
> +    * is used for providing more details, and needs to be reset between two
> +    * internal calls of libxml2 as different error contexts might have been
> +    * created or used.

I don't like "this error context", since "this" seems to be referring to the
"tableset of composite values" as an err context.

I guess you mean: "needs to be reset between each internal call to libxml2.."

So I'd suggest:

> +    * XmlTable returns a table-set of composite values.  The error context
> +    * is used for providing additional detail. It needs to be reset between each
> +    * call to libxml2, since different error contexts might have been
> +    * created or used since it was last set.


But actually, maybe we should just use the comment that exists everywhere else
for that.

        /* Propagate context related error context to libxml2 */
        xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);

Maybe we should elaborate and say:
        /*
         * Propagate context related error context to libxml2 (needs to be
         * reset before each call, in case other error contexts have been
         * assigned since it was first set)
         */
        xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);
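
For anyone who hasn't looked at that API: here is a minimal standalone sketch
of why the handler keeps needing to be re-installed — the structured error
handler is libxml2-global state, so whoever set it last wins.  This only uses
stock libxml2 calls; my_error_ctx and my_error_handler are made-up names for
illustration, not anything from xml.c (the callback signature below is the
classic one, the same shape as xml_errorHandler; newer libxml2 declares the
error pointer as const).

    #include <stdio.h>
    #include <libxml/parser.h>
    #include <libxml/xmlerror.h>

    typedef struct my_error_ctx
    {
        int         nerrors;
    } my_error_ctx;

    /* Counts libxml2 errors in our own context and echoes the message. */
    static void
    my_error_handler(void *data, xmlErrorPtr error)
    {
        my_error_ctx *ctx = (my_error_ctx *) data;

        ctx->nerrors++;
        fprintf(stderr, "libxml2 error: %s", error->message);
    }

    int
    main(void)
    {
        my_error_ctx ctx = {0};

        /*
         * Install our handler.  Because this is a process-global setting, a
         * larger program has to re-do it before each libxml2 call that can
         * report errors, in case other code installed its own handler since.
         */
        xmlSetStructuredErrorFunc(&ctx, my_error_handler);

        /* Deliberately broken XML, so the handler fires. */
        xmlDocPtr   doc = xmlReadMemory("<broken", 7, "test.xml", NULL, 0);

        if (doc != NULL)
            xmlFreeDoc(doc);
        printf("errors seen: %d\n", ctx.nerrors);
        return 0;
    }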

--
Justin



Re: doc review for v14

Michael Paquier-2
On Sun, Jan 03, 2021 at 12:33:54AM -0600, Justin Pryzby wrote:
>
> But actually, maybe we should just use the comment that exists everywhere else
> for that.
>
>         /* Propagate context related error context to libxml2 */
>         xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);

I quite like your suggestion to keep things as simple as possible here,
and the upstream docs also give a lot of context:
http://xmlsoft.org/html/libxml-xmlerror.html#xmlSetStructuredErrorFunc

So let's use this version and call it a day for this part.
--
Michael


Re: doc review for v14

Michael Paquier-2
On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:
> So let's use this version and call it a day for this part.

This has been done as of b49154b.
--
Michael


Re: doc review for v14

Masahiko Sawada
On Wed, Jan 6, 2021 at 10:37 AM Michael Paquier <[hidden email]> wrote:
>
> On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:
> > So let's use this version and call it a day for this part.
>
> This has been done as of b49154b.

It seems to me that all work has been done. Can we mark this patch
entry as "Committed"? Or waiting for something on the author?

Regards,

--
Masahiko Sawada
EDB:  https://www.enterprisedb.com/



Re: doc review for v14

Michael Paquier-2
On Fri, Jan 22, 2021 at 09:53:13PM +0900, Masahiko Sawada wrote:
> It seems to me that all work has been done. Can we mark this patch
> entry as "Committed"? Or waiting for something on the author?

Patch 0005, posted in [1] and related to the docs of replication slots,
still needs a look.

[1]: https://www.postgresql.org/message-id/20201227202604.GC26311@...
--
Michael


Re: doc review for v14

Michael Paquier-2
In reply to this post by Justin Pryzby
Hi Justin,

On Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:
> Thank you.

I have been looking at 0005, the patch dealing with the docs of the
replication stats, and have some comments.

        <para>
         Number of times transactions were spilled to disk while decoding changes
-        from WAL for this slot. Transactions may get spilled repeatedly, and
-        this counter gets incremented on every such invocation.
+        from WAL for this slot. A given transaction may be spilled multiple times, and
+        this counter is incremented each time.
       </para></entry>
The original can be a bit hard to read, and I don't think that the new
formulation is an improvement.  I actually find it confusing that this
mixes, in the same sentence, that a transaction can be spilled multiple
times and that the counter is incremented each time.  What about splitting that
into two sentences?  Here is an idea:
"This counter is incremented each time a transaction is spilled.  The
same transaction may be spilled multiple times."

-        Number of transactions spilled to disk after the memory used by
-        logical decoding of changes from WAL for this slot exceeds
+        Number of transactions spilled to disk because the memory used by
+        logical decoding of changes from WAL for this slot exceeded
What does "logical decoding of changes from WAL" mean?  Here is an
idea to clarify all that:
"Number of transactions spilled to disk once the memory used by
logical decoding to decode changes from WAL has exceeded
logical_decoding_work_mem."

         Number of in-progress transactions streamed to the decoding output plugin
-        after the memory used by logical decoding of changes from WAL for this
-        slot exceeds <literal>logical_decoding_work_mem</literal>. Streaming only
+        because the memory used by logical decoding of changes from WAL for this
+        slot exceeded <literal>logical_decoding_work_mem</literal>. Streaming only
         works with toplevel transactions (subtransactions can't be streamed
-        independently), so the counter does not get incremented for subtransactions
+        independently), so the counter is not incremented for subtransactions.
I have the same issue here with "by logical decoding of changes from
WAL".  I'd say "after the memory used by logical decoding to decode
changes from WAL for this slot has exceeded logical_decoding_work_mem".

         output plugin while decoding changes from WAL for this slot. Transactions
-        may get streamed repeatedly, and this counter gets incremented on every
-        such invocation.
+        may be streamed multiple times, and this counter is incremented each time.
I would split this stuff into two sentences:
"This counter is incremented each time a transaction is streamed.  The
same transaction may be streamed multiple times."

          Resets statistics to zero for a single replication slot, or for all
-         replication slots in the cluster.  The argument can be either the name
-         of the slot to reset the stats or NULL.  If the argument is NULL, all
-         counters shown in the <structname>pg_stat_replication_slots</structname>
-         view for all replication slots are reset.
+         replication slots in the cluster.  The argument can be either NULL or the name
+         of a slot for which stats are to be reset.  If the argument is NULL, all
+         counters in the <structname>pg_stat_replication_slots</structname>
+         view are reset for all replication slots.
Here also, I find it rather confusing that this paragraph says multiple
times that NULL resets the stats for all the replication slots.  NULL
should use a <literal> markup, and it is cleaner to use "statistics"
rather than "stats" IMO.  So I guess we could simplify things as
follows:
"Resets statistics of the replication slot defined by the argument. If
the argument is NULL, resets statistics for all the replication
slots."
--
Michael


Re: doc review for v14

Michael Paquier-2
On Sat, Jan 23, 2021 at 07:15:40PM +0900, Michael Paquier wrote:
> I have been looking at 0005, the patch dealing with the docs of the
> replication stats, and have some comments.

And attached is a patch to clarify all that.  I am letting that sleep
for a couple of days for now, so please let me know if you have any
comments.
--
Michael

replslot-docs.patch (5K) Download Attachment

Re: doc review for v14

Michael Paquier-2
On Wed, Jan 27, 2021 at 02:52:14PM +0900, Michael Paquier wrote:
> And attached is a patch to clarify all that.  I am letting that sleep
> for a couple of days for now, so please let me know if you have any
> comments.

I have spent some time on that, and applied this stuff as of 2a5862f
after some extra tweaks.  As there is nothing left, this CF entry is
now closed.
--
Michael


Re: doc review for v14

Justin Pryzby
Another round of doc fixen.

wdiff to follow

commit 389c4ac2febe21fd48480a86819d94fd2eb9c1cc
Author: Justin Pryzby <[hidden email]>
Date:   Wed Feb 10 17:19:51 2021 -0600

    doc review for pg_stat_progress_create_index
   
    ab0dfc961b6a821f23d9c40c723d11380ce195a6
   
    should backpatch to v13

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index c602ee4427..16eb1d9e9c 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -5725,7 +5725,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
      </para>
      <para>
       When creating an index on a partitioned table, this column is set to
       the number of partitions on which the index has been [-completed.-]{+created.+}
      </para></entry>
     </row>
    </tbody>

commit bff6f0b557ff79365fc21d0ae261bad0fcb96539
Author: Justin Pryzby <[hidden email]>
Date:   Sat Feb 6 15:17:51 2021 -0600

    *an old and "deleted [has] happened"
   
    Heikki missed this in 6b387179baab8d0e5da6570678eefbe61f3acc79

diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 3763b4b995..a51f2c9920 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -6928,8 +6928,8 @@ Delete
</term>
<listitem>
<para>
                Identifies the following TupleData message as [-a-]{+an+} old tuple.
                This field is present if the table in which the delete[-has-]
                happened has REPLICA IDENTITY set to FULL.
</para>
</listitem>

commit 9bd601fa82ceeaf09573ce31eb3c081b4ae7a45d
Author: Justin Pryzby <[hidden email]>
Date:   Sat Jan 23 21:03:37 2021 -0600

    doc review for logical decoding of prepared xacts
   
    0aa8a01d04c8fe200b7a106878eebc3d0af9105c

diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b854f2ccfc..71e9f36b8e 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -791,9 +791,9 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
     <para>
       The optional <function>filter_prepare_cb</function> callback
       is called to determine whether data that is part of the current
       two-phase commit transaction should be considered for [-decode-]{+decoding+}
       at this prepare stage or {+later+} as a regular one-phase transaction at
       <command>COMMIT PREPARED</command> [-time later.-]{+time.+} To signal that
       decoding should be skipped, return <literal>true</literal>;
       <literal>false</literal> otherwise. When the callback is not
       defined, <literal>false</literal> is assumed (i.e. nothing is
@@ -820,11 +820,11 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx
      The required <function>begin_prepare_cb</function> callback is called
      whenever the start of a prepared transaction has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback to
      check if the plugin has already received this [-prepare-]{+PREPARE+} in which case it
      can skip the remaining changes of the transaction. This can only happen
      if the user restarts the decoding after receiving the [-prepare-]{+PREPARE+} for a
      transaction but before receiving the [-commit prepared-]{+COMMIT PREPARED,+} say because of some
      error.
      <programlisting>
       typedef void (*LogicalDecodeBeginPrepareCB) (struct LogicalDecodingContext *ctx,
@@ -842,7 +842,7 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx
      decoded. The <function>change_cb</function> callback for all modified
      rows will have been called before this, if there have been any modified
      rows. The <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.
      <programlisting>
       typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,
                                               ReorderBufferTXN *txn,
@@ -856,9 +856,9 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx

     <para>
      The required <function>commit_prepared_cb</function> callback is called
      whenever a transaction [-commit prepared-]{+COMMIT PREPARED+} has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.
      <programlisting>
       typedef void (*LogicalDecodeCommitPreparedCB) (struct LogicalDecodingContext *ctx,
                                                      ReorderBufferTXN *txn,
@@ -872,15 +872,15 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx

     <para>
      The required <function>rollback_prepared_cb</function> callback is called
      whenever a transaction [-rollback prepared-]{+ROLLBACK PREPARED+} has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback. The
      parameters <parameter>prepare_end_lsn</parameter> and
      <parameter>prepare_time</parameter> can be used to check if the plugin
      has received this [-prepare transaction-]{+PREPARE TRANSACTION+} in which case it can apply the
      rollback, otherwise, it can skip the rollback operation. The
      <parameter>gid</parameter> alone is not sufficient because the downstream
      node can have {+a+} prepared transaction with same identifier.
      <programlisting>
       typedef void (*LogicalDecodeRollbackPreparedCB) (struct LogicalDecodingContext *ctx,
                                                        ReorderBufferTXN *txn,
@@ -1122,7 +1122,7 @@ OutputPluginWrite(ctx, true);
    the <function>stream_commit_cb</function> callback
    (or possibly aborted using the <function>stream_abort_cb</function> callback).
    If two-phase commits are supported, the transaction can be prepared using the
    <function>stream_prepare_cb</function> callback, [-commit prepared-]{+COMMIT PREPARED+} using the
    <function>commit_prepared_cb</function> callback or aborted using the
    <function>rollback_prepared_cb</function>.
   </para>

commit 7ddf562c7b384b4a802111ac1b0eab3698982c8e
Author: Justin Pryzby <[hidden email]>
Date:   Sat Jan 23 21:02:47 2021 -0600

    doc review for multiranges
   
    6df7a9698bb036610c1e8c6d375e1be38cb26d5f

diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml
index 6e3d82b85b..ec95b4eb01 100644
--- a/doc/src/sgml/extend.sgml
+++ b/doc/src/sgml/extend.sgml
@@ -448,7 +448,7 @@
     of <type>anycompatible</type> and <type>anycompatiblenonarray</type>
     inputs, the array element types of <type>anycompatiblearray</type>
     inputs, the range subtypes of <type>anycompatiblerange</type> inputs,
     and the multirange subtypes of [-<type>anycompatiablemultirange</type>-]{+<type>anycompatiblemultirange</type>+}
     inputs.  If <type>anycompatiblenonarray</type> is present then the
     common type is required to be a non-array type.  Once a common type is
     identified, arguments in <type>anycompatible</type>

commit 4fa1fd9769c93dbec71fa92097ebfea5f420bb09
Author: Justin Pryzby <[hidden email]>
Date:   Sat Jan 23 20:33:10 2021 -0600

    doc review: logical decode in prepare
   
    a271a1b50e9bec07e2ef3a05e38e7285113e4ce6

diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index cf705ed9cd..b854f2ccfc 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -1214,7 +1214,7 @@ stream_commit_cb(...);  &lt;-- commit of the streamed transaction
   </para>

   <para>
    When a prepared transaction is [-rollbacked-]{+rolled back+} using the
    <command>ROLLBACK PREPARED</command>, then the
    <function>rollback_prepared_cb</function> callback is invoked and when the
    prepared transaction is committed using <command>COMMIT PREPARED</command>,

commit d27a74968b61354ad1186a4740063dd4ac0b1bea
Author: Justin Pryzby <[hidden email]>
Date:   Sat Jan 23 17:17:58 2021 -0600

    doc review for FDW bulk inserts
   
    b663a4136331de6c7364226e3dbf7c88bfee7145

diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml
index 854913ae5f..12e00bfc2f 100644
--- a/doc/src/sgml/fdwhandler.sgml
+++ b/doc/src/sgml/fdwhandler.sgml
@@ -672,9 +672,8 @@ GetForeignModifyBatchSize(ResultRelInfo *rinfo);

     Report the maximum number of tuples that a single
     <function>ExecForeignBatchInsert</function> call can handle for
     the specified foreign table.[-That is,-]  The executor passes at most
     the {+given+} number of tuples[-that this function returns-] to <function>ExecForeignBatchInsert</function>.
     <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing
     the target foreign table.
     The FDW is expected to provide a foreign server and/or foreign

commit 2b8fdcc91562045b6b2cec0e69a724e078cfbdb5
Author: Justin Pryzby <[hidden email]>
Date:   Wed Feb 3 00:51:25 2021 -0600

    doc review: piecemeal construction of partitioned indexes
   
    5efd604ec0a3bdde98fe19d8cada69ab4ef80db3
   
    backpatch to v11

diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 1e9a4625cc..a8cbd45d35 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -3962,8 +3962,8 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
     As explained above, it is possible to create indexes on partitioned tables
     so that they are applied automatically to the entire hierarchy.
     This is very
     convenient, as not only[-will-] the existing partitions [-become-]{+will be+} indexed, but
     [-also-]{+so will+} any partitions that are created in the [-future will.-]{+future.+}  One limitation is
     that it's not possible to use the <literal>CONCURRENTLY</literal>
     qualifier when creating such a partitioned index.  To avoid long lock
     times, it is possible to use <command>CREATE INDEX ON ONLY</command>

commit 2f6d8a4d0157b632ad1e0ff3b0a54c4d38199637
Author: Justin Pryzby <[hidden email]>
Date:   Sat Jan 30 18:10:21 2021 -0600

    duplicate words
   
    commit 9c4f5192f69ed16c99e0d079f0b5faebd7bad212
        Allow pg_rewind to use a standby server as the source system.
   
    commit 4a252996d5fda7662b2afdf329a5c95be0fe3b01
        Add tests for tuplesort.c.
   
    commit 0a2bc5d61e713e3fe72438f020eea5fcc90b0f0b
        Move per-agg and per-trans duplicate finding to the planner.
   
    commit 623a9ba79bbdd11c5eccb30b8bd5c446130e521c
        snapshot scalability: cache snapshots using a xact completion counter.
   
    commit 2c03216d831160bedd72d45f712601b6f7d03f1c
        Revamp the WAL record format.

diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index e723253297..25d6df1659 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -433,8 +433,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
 * NB: A redo function should normally not call this directly. To get a page
 * to modify, use XLogReadBufferForRedoExtended instead. It is important that
 * all pages modified by a WAL record are registered in the WAL records, or
 * they will be invisible to tools that[-that-] need to know which pages are[-*-] modified.
 */
Buffer
XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
diff --git a/src/backend/optimizer/prep/prepagg.c b/src/backend/optimizer/prep/prepagg.c
index 929a8ea13b..89046f9afb 100644
--- a/src/backend/optimizer/prep/prepagg.c
+++ b/src/backend/optimizer/prep/prepagg.c
@@ -71,7 +71,7 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 *
 * Information about the aggregates and transition functions are collected
 * in the root->agginfos and root->aggtransinfos lists.  The 'aggtranstype',
 * 'aggno', and 'aggtransno' fields [-in-]{+of each Aggref+} are filled [-in in each Aggref.-]{+in.+}
 *
 * NOTE: This modifies the Aggrefs in the input expression in-place!
 *
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index cf12eda504..b9fbdcb88f 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -2049,7 +2049,7 @@ GetSnapshotDataReuse(Snapshot snapshot)
         * holding ProcArrayLock) exclusively). Thus the xactCompletionCount check
         * ensures we would detect if the snapshot would have changed.
         *
         * As the snapshot contents are the same as it was before, it is[-is-] safe
         * to re-enter the snapshot's xmin into the PGPROC array. None of the rows
         * visible under the snapshot could already have been removed (that'd
         * require the set of running transactions to change) and it fulfills the
diff --git a/src/bin/pg_rewind/libpq_source.c b/src/bin/pg_rewind/libpq_source.c
index 86d2adcaee..ac794cf4eb 100644
--- a/src/bin/pg_rewind/libpq_source.c
+++ b/src/bin/pg_rewind/libpq_source.c
@@ -539,7 +539,7 @@ process_queued_fetch_requests(libpq_source *src)
                                                 chunkoff, rq->path, (int64) rq->offset);

                        /*
                         * We should not receive[-receive-] more data than we requested, or
                         * pg_read_binary_file() messed up.  We could receive less,
                         * though, if the file was truncated in the source after we
                         * checked its size. That's OK, there should be a WAL record of
diff --git a/src/test/regress/expected/tuplesort.out b/src/test/regress/expected/tuplesort.out
index 3fc1998bf2..418f296a3f 100644
--- a/src/test/regress/expected/tuplesort.out
+++ b/src/test/regress/expected/tuplesort.out
@@ -1,7 +1,7 @@
-- only use parallelism when explicitly intending to do so
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;
-- A table with[-with-] contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.
diff --git a/src/test/regress/sql/tuplesort.sql b/src/test/regress/sql/tuplesort.sql
index 7d7e02f02a..846484d561 100644
--- a/src/test/regress/sql/tuplesort.sql
+++ b/src/test/regress/sql/tuplesort.sql
@@ -2,7 +2,7 @@
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;

-- A table with[-with-] contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.

commit 4920f9520d7ba1b420bcf03ae48178d74425a622
Author: Justin Pryzby <[hidden email]>
Date:   Sun Jan 17 10:57:21 2021 -0600

    doc review for checksum docs
   
    cf621d9d84db1e6edaff8ffa26bad93fdce5f830

diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 66de1ee2f8..02f576a1a9 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -237,19 +237,19 @@
  </indexterm>

  <para>
   [-Data-]{+By default, data+} pages are not[-checksum-] protected by [-default,-]{+checksums,+} but this can optionally be
   enabled for a cluster.  When enabled, each data page will be [-assigned-]{+ASSIGNED+} a
   checksum that is updated when the page is written and verified [-every-]{+each+} time
   the page is read. Only data pages are protected by [-checksums,-]{+checksums;+} internal data
   structures and temporary files are not.
  </para>

  <para>
   Checksums [-are-]{+verification is+} normally [-enabled-]{+ENABLED+} when the cluster is initialized using <link
   linkend="app-initdb-data-checksums"><application>initdb</application></link>.
   They can also be enabled or disabled at a later time as an offline
   operation. Data checksums are enabled or disabled at the full cluster
   level, and cannot be specified[-individually-] for {+individual+} databases or tables.
  </para>

  <para>
@@ -260,9 +260,9 @@
  </para>

  <para>
   When attempting to recover from corrupt [-data-]{+data,+} it may be necessary to bypass
   the checksum [-protection in order to recover data.-]{+protection.+} To do this, temporarily set the configuration
   parameter <xref linkend="guc-ignore-checksum-failure" />.
  </para>

  <sect2 id="checksums-offline-enable-disable">

commit fc69321a5ebc55cb1df9648bc28215672cffbf31
Author: Justin Pryzby <[hidden email]>
Date:   Wed Jan 20 16:10:49 2021 -0600

    Doc review for psql \dX
   
    ad600bba0422dde4b73fbd61049ff2a3847b068a

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 13c1edfa4d..d0f397d5ea 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1930,8 +1930,9 @@ testdb=&gt;
        </para>

        <para>
        The [-column-]{+status+} of [-the-]{+each+} kind of extended [-stats-]{+statistics is shown in a column+}
{+        named after the "kind"+} (e.g. [-Ndistinct) shows its status.-]{+Ndistinct).+}
        NULL means that it doesn't [-exists.-]{+exist.+} "defined" means that it was requested
        when creating the statistics.
        You can use pg_stats_ext if you'd like to know whether <link linkend="sql-analyze">
        <command>ANALYZE</command></link> was run and statistics are available to the

commit 78035a725e13e28bbae9e62fe7013bef435d70e3
Author: Justin Pryzby <[hidden email]>
Date:   Sat Feb 6 15:13:37 2021 -0600

    *an exclusive
   
    3c84046490bed3c22e0873dc6ba492e02b8b9051

diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml
index 85cf23bca2..b6d2c2014f 100644
--- a/doc/src/sgml/ref/drop_index.sgml
+++ b/doc/src/sgml/ref/drop_index.sgml
@@ -45,7 +45,7 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] <replaceable class="parameter">name</r
     <para>
      Drop the index without locking out concurrent selects, inserts, updates,
      and deletes on the index's table.  A normal <command>DROP INDEX</command>
      acquires {+an+} exclusive lock on the table, blocking other accesses until the
      index drop can be completed.  With this option, the command instead
      waits until conflicting transactions have completed.
     </para>

commit c36ac4c1f85f620ae9ce9cfa7c14b6c95dcdedc5
Author: Justin Pryzby <[hidden email]>
Date:   Wed Dec 30 09:39:16 2020 -0600

    function comment: get_am_name

diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c
index eff9535ed0..188109e474 100644
--- a/src/backend/commands/amcmds.c
+++ b/src/backend/commands/amcmds.c
@@ -186,7 +186,7 @@ get_am_oid(const char *amname, bool missing_ok)
}

/*
 * get_am_name - given an access method [-OID name and type,-]{+OID,+} look up its name.
 */
char *
get_am_name(Oid amOid)

commit 22e6f0e2d4eaf78e449393bf2bf8b3f8af2b71f8
Author: Justin Pryzby <[hidden email]>
Date:   Mon Jan 18 14:37:17 2021 -0600

    One fewer (not one less)

diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 9abcee32af..f6760eb31e 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -338,7 +338,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
                attr = TupleDescAttr(tupdesc, i);

                /*
                 * Tuple header can specify [-less-]{+fewer+} attributes than tuple descriptor as
                 * ALTER TABLE ADD COLUMN without DEFAULT keyword does not actually
                 * change tuples in pages, so attributes with numbers greater than
                 * (t_infomask2 & HEAP_NATTS_MASK) should be treated as NULL.
diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml
index cebc09ef91..1b00e543a6 100644
--- a/doc/src/sgml/charset.sgml
+++ b/doc/src/sgml/charset.sgml
@@ -619,7 +619,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
    name such as <literal>de_DE</literal> can be considered unique
    within a given database even though it would not be unique globally.
    Use of the stripped collation names is recommended, since it will
    make one [-less-]{+fewer+} thing you need to change if you decide to change to
    another database encoding.  Note however that the <literal>default</literal>,
    <literal>C</literal>, and <literal>POSIX</literal> collations can be used regardless of
    the database encoding.
diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml
index 0b24a55505..693423e524 100644
--- a/doc/src/sgml/ref/create_type.sgml
+++ b/doc/src/sgml/ref/create_type.sgml
@@ -867,7 +867,7 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
   Before <productname>PostgreSQL</productname> version 8.3, the name of
   a generated array type was always exactly the element type's name with one
   underscore character (<literal>_</literal>) prepended.  (Type names were
   therefore restricted in length to one [-less-]{+fewer+} character than other names.)
   While this is still usually the case, the array type name may vary from
   this in case of maximum-length names or collisions with user type names
   that begin with underscore.  Writing code that depends on this convention
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
index e81addcfa9..aa172d102b 100644
--- a/doc/src/sgml/rules.sgml
+++ b/doc/src/sgml/rules.sgml
@@ -1266,7 +1266,7 @@ CREATE [ OR REPLACE ] RULE <replaceable class="parameter">name</replaceable> AS
<para>
    The query trees generated from rule actions are thrown into the
    rewrite system again, and maybe more rules get applied resulting
    in [-more-]{+additional+} or [-less-]{+fewer+} query trees.
    So a rule's actions must have either a different
    command type or a different result relation than the rule itself is
    on, otherwise this recursive process will end up in an infinite loop.
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index 24a27e387d..0b56b0fa5a 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -719,11 +719,11 @@ heap_copytuple_with_tuple(HeapTuple src, HeapTuple dest)
}

/*
 * Expand a tuple which has [-less-]{+fewer+} attributes than required. For each attribute
 * not present in the sourceTuple, if there is a missing value that will be
 * used. Otherwise the attribute will be set to NULL.
 *
 * The source tuple must have [-less-]{+fewer+} attributes than the required number.
 *
 * Only one of targetHeapTuple and targetMinimalTuple may be supplied. The
 * other argument must be NULL.
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 7295cf0215..64908ac39c 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1003,7 +1003,7 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 * As of May 2004 we use a new two-stage method:  Stage one selects up
 * to targrows random blocks (or all blocks, if there aren't so many).
 * Stage two scans these blocks and uses the Vitter algorithm to create
 * a random sample of targrows rows (or [-less,-]{+fewer,+} if there are [-less-]{+fewer+} in the
 * sample of blocks).  The two stages are executed simultaneously: each
 * block is processed as soon as stage one returns its number and while
 * the rows are read stage two controls which ones are to be inserted
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 4d185c27b4..078aaef539 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -263,7 +263,7 @@ static int compareDatetime(Datum val1, Oid typid1, Datum val2, Oid typid2,
 * implement @? and @@ operators, which in turn are intended to have an
 * index support.  Thus, it's desirable to make it easier to achieve
 * consistency between index scan results and sequential scan results.
 * So, we throw as [-less-]{+few+} errors as possible.  Regarding this function,
 * such behavior also matches behavior of JSON_EXISTS() clause of
 * SQL/JSON.  Regarding jsonb_path_match(), this function doesn't have
 * an analogy in SQL/JSON, so we define its behavior on our own.
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 47ca4ddbb5..52314d3aa1 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -645,7 +645,7 @@ scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq,

                        /*
                         * The calculation so far gave us a selectivity for the "<=" case.
                         * We'll have one [-less-]{+fewer+} tuple for "<" and one additional tuple for
                         * ">=", the latter of which we'll reverse the selectivity for
                         * below, so we can simply subtract one tuple for both cases.  The
                         * cases that need this adjustment can be identified by iseq being
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index fa2b49c676..55c9445898 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -1497,7 +1497,7 @@ GetCatCacheHashValue(CatCache *cache,
 * It doesn't make any sense to specify all of the cache's key columns
 * here: since the key is unique, there could be at most one match, so
 * you ought to use SearchCatCache() instead.  Hence this function takes
 * one [-less-]{+fewer+} Datum argument than SearchCatCache() does.
 *
 * The caller must not modify the list object or the pointed-to tuples,
 * and must call ReleaseCatCacheList() when done with the list.
diff --git a/src/backend/utils/misc/sampling.c b/src/backend/utils/misc/sampling.c
index 0c327e823f..7348b86682 100644
--- a/src/backend/utils/misc/sampling.c
+++ b/src/backend/utils/misc/sampling.c
@@ -42,7 +42,7 @@ BlockSampler_Init(BlockSampler bs, BlockNumber nblocks, int samplesize,
        bs->N = nblocks; /* measured table size */

        /*
         * If we decide to reduce samplesize for tables that have [-less-]{+fewer+} or not much
         * more than samplesize blocks, here is the place to do it.
         */
        bs->n = samplesize;
diff --git a/src/backend/utils/mmgr/freepage.c b/src/backend/utils/mmgr/freepage.c
index e4ee1aab97..10a1effb74 100644
--- a/src/backend/utils/mmgr/freepage.c
+++ b/src/backend/utils/mmgr/freepage.c
@@ -495,7 +495,7 @@ FreePageManagerDump(FreePageManager *fpm)
 * if we search the parent page for the first key greater than or equal to
 * the first key on the current page, the downlink to this page will be either
 * the exact index returned by the search (if the first key decreased)
 * or one [-less-]{+fewer+} (if the first key increased).
 */
static void
FreePageBtreeAdjustAncestorKeys(FreePageManager *fpm, FreePageBtree *btp)
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index a4a3f40048..627a244fb7 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -6458,7 +6458,7 @@ threadRun(void *arg)

                        /*
                         * If advanceConnectionState changed client to finished state,
                         * that's one [-less-]{+fewer+} client that remains.
                         */
                        if (st->state == CSTATE_FINISHED || st->state == CSTATE_ABORTED)
                                remains--;
diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h
index d27c8601fa..e3d2e751ea 100644
--- a/src/include/pg_config_manual.h
+++ b/src/include/pg_config_manual.h
@@ -21,7 +21,7 @@

/*
 * Maximum length for identifiers (e.g. table names, column names,
 * function names).  Names actually are limited to one [-less-]{+fewer+} byte than this,
 * because the length must include a trailing zero byte.
 *
 * Changing this requires an initdb.
@@ -87,7 +87,7 @@

/*
 * MAXPGPATH: standard size of a pathname buffer in PostgreSQL (hence,
 * maximum usable pathname length is one [-less).-]{+fewer).+}
 *
 * We'd use a standard system header symbol for this, if there weren't
 * so many to choose from: MAXPATHLEN, MAX_PATH, PATH_MAX are all
diff --git a/src/interfaces/ecpg/include/sqlda-native.h b/src/interfaces/ecpg/include/sqlda-native.h
index 67d3c7b4e4..9e73f1f1b1 100644
--- a/src/interfaces/ecpg/include/sqlda-native.h
+++ b/src/interfaces/ecpg/include/sqlda-native.h
@@ -7,7 +7,7 @@

/*
 * Maximum length for identifiers (e.g. table names, column names,
 * function names).  Names actually are limited to one [-less-]{+fewer+} byte than this,
 * because the length must include a trailing zero byte.
 *
 * This should be at least as much as NAMEDATALEN of the database the
diff --git a/src/test/regress/expected/geometry.out b/src/test/regress/expected/geometry.out
index 84f7eabb66..9799cfbdbd 100644
--- a/src/test/regress/expected/geometry.out
+++ b/src/test/regress/expected/geometry.out
@@ -4325,7 +4325,7 @@ SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
 <(100,1),115>  | ((-15,1),(18.6827201635,82.3172798365),(100,116),(181.317279836,82.3172798365),(215,1),(181.317279836,-80.3172798365),(100,-114),(18.6827201635,-80.3172798365))
(6 rows)

-- Too [-less-]{+few+} points error
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
ERROR:  must request at least 2 points
-- Zero radius error
diff --git a/src/test/regress/sql/geometry.sql b/src/test/regress/sql/geometry.sql
index 96df0ab05a..b0ab6d03ec 100644
--- a/src/test/regress/sql/geometry.sql
+++ b/src/test/regress/sql/geometry.sql
@@ -424,7 +424,7 @@ SELECT f1, f1::polygon FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
-- To polygon with less points
SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';

-- Too [-less-]{+few+} points error
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';

-- Zero radius error

commit 1c00249319faf6dc23aadf4568ead5adc65ff57f
Author: Justin Pryzby <[hidden email]>
Date:   Wed Feb 10 17:45:07 2021 -0600

    comment typos

diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h
index 395be1ca9a..99a03c8f21 100644
--- a/src/include/lib/simplehash.h
+++ b/src/include/lib/simplehash.h
@@ -626,7 +626,7 @@ restart:
                uint32 curoptimal;
                SH_ELEMENT_TYPE *entry = &data[curelem];

                /* any empty bucket can[-directly-] be used {+directly+} */
                if (entry->status == SH_STATUS_EMPTY)
                {
                        tb->members++;

commit 2ac95b66e30785d480ef04c11d12b1075548045e
Author: Justin Pryzby <[hidden email]>
Date:   Sat Nov 14 23:09:21 2020 -0600

    typos in master

diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index 7c341c8e3f..fe88c2273a 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -639,7 +639,7 @@ NUMERIC

    <para>
     The <literal>NaN</literal> (not a number) value is used to represent
     undefined [-calculational-]{+computational+} results.  In general, any operation with
     a <literal>NaN</literal> input yields another <literal>NaN</literal>.
     The only exception is when the operation's other inputs are such that
     the same output would be obtained if the <literal>NaN</literal> were to

commit d6d3499f52e664b7da88a3f2c94701cae6d76609
Author: Justin Pryzby <[hidden email]>
Date:   Sat Dec 5 22:43:12 2020 -0600

    pg_restore: "must be specified" and --list
   
    This was discussed here, but the idea got lost.
    https://www.postgresql.org/message-id/flat/20190612170201.GA11881%40alvherre.pgsql#2984347ab074e6f198bd294fa41884df

diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 589b4aed53..f6e6e41329 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -305,7 +305,7 @@ main(int argc, char **argv)
        /* Complain if neither -f nor -d was specified (except if dumping TOC) */
        if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
        {
                pg_log_error("one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified");
                exit_nicely(1);
        }

diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 083fb3ad08..8280914c2a 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -63,8 +63,8 @@ command_fails_like(

command_fails_like(
        ['pg_restore'],
        qr{\Qpg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified\E},
        'pg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified');

command_fails_like(
        [ 'pg_restore', '-s', '-a', '-f -' ],

commit 7c2dee70b0450bac5cfa2c3db52b4a2b2e535a9e
Author: Justin Pryzby <[hidden email]>
Date:   Sat Feb 15 15:53:34 2020 -0600

    Update comment obsolete since 69c3936a

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 601b6dab03..394b4e667b 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2064,8 +2064,7 @@ initialize_hash_entry(AggState *aggstate, TupleHashTable hashtable,
}

/*
 * Look up hash entries for the current tuple in all hashed grouping [-sets,-]
[- * returning an array of pergroup pointers suitable for advance_aggregates.-]{+sets.+}
 *
 * Be aware that lookup_hash_entry can reset the tmpcontext.
 *

commit 4b81f9512395cb321730e0a3dba1c659b9c2fee3
Author: Justin Pryzby <[hidden email]>
Date:   Fri Jan 8 13:09:55 2021 -0600

    doc: pageinspect
   
    d6061f83a166b015657fda8623c704fcb86930e9
   
    backpatch to 9.6?

diff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml
index a0be779940..a7bce41b7c 100644
--- a/doc/src/sgml/pageinspect.sgml
+++ b/doc/src/sgml/pageinspect.sgml
@@ -211,7 +211,7 @@ test=# SELECT tuple_data_split('pg_class'::regclass, t_data, t_infomask, t_infom
     </para>
     <para>
      If <parameter>do_detoast</parameter> is <literal>true</literal>,
      [-attribute that-]{+attributes+} will be detoasted as needed. Default value is
      <literal>false</literal>.
     </para>
    </listitem>

0001-doc-pageinspect.patch (962 bytes) Download Attachment
0002-Update-comment-obsolete-since-69c3936a.patch (932 bytes) Download Attachment
0003-pg_restore-must-be-specified-and-list.patch (1K) Download Attachment
0004-typos-in-master.patch (977 bytes) Download Attachment
0005-comment-typos.patch (753 bytes) Download Attachment
0006-One-fewer-not-one-less.patch (12K) Download Attachment
0007-function-comment-get_am_name.patch (773 bytes) Download Attachment
0008-an-exclusive.patch (1K) Download Attachment
0009-Doc-review-for-psql-dX.patch (1K) Download Attachment
0010-doc-review-for-checksum-docs.patch (2K) Download Attachment
0011-duplicate-words.patch (5K) Download Attachment
0012-doc-review-piecemeal-construction-of-partitioned-ind.patch (1K) Download Attachment
0013-doc-review-for-FDW-bulk-inserts.patch (1K) Download Attachment
0014-doc-review-logical-decode-in-prepare.patch (1K) Download Attachment
0015-doc-review-for-multiranges.patch (1K) Download Attachment
0016-doc-review-for-logical-decoding-of-prepared-xacts.patch (5K) Download Attachment
0017-an-old-and-deleted-has-happened.patch (1019 bytes) Download Attachment
0018-doc-review-for-pg_stat_progress_create_index.patch (968 bytes) Download Attachment