[PATCH] Speedup truncates of relation forks


[PATCH] Speedup truncates of relation forks

Jamison, Kirk

Hi all,

Attached is a patch to speed up the performance of truncates of relations.
This is also my first time contributing my own patch,
and I'd greatly appreciate your feedback and advice.

A.     Summary

Whenever we truncate a relation, the shared buffers are scanned three times
(once per fork), which can be time-consuming. This patch improves the
performance of relation truncation by first marking the pages to be truncated
in each relation fork and then truncating all the forks in a single pass,
which improves the performance of VACUUM and autovacuum operations as well as
their recovery.

B.     Patch Details

The following functions were modified:

1.      FreeSpaceMapTruncateRel() and visibilitymap_truncate()
a.      CURRENT HEAD: These functions truncate the FSM pages and the unused VM pages.
b.      PATCH: Both functions now only mark the pages to be truncated and return a block number.
-        smgrtruncate() used to be called inside these functions; those calls are now moved into RelationTruncate() and smgr_redo().
-        The functions are tentatively renamed to MarkFreeSpaceMapTruncateRel() and visibilitymap_mark_truncate(). Feel free to suggest better names.

 

2.      RelationTruncate()
a.      HEAD: Truncate the FSM and VM first, then write WAL, and lastly truncate the main fork.
b.      PATCH: We now mark the FSM and VM pages first, write WAL, mark the MAIN fork pages, and then truncate all forks (MAIN, FSM, VM) in a single pass (a rough sketch of this flow follows after this list).

 

3.      smgr_redo()
a.      HEAD: Truncate the main fork and the relation during XLOG replay, create a fake relcache entry for the FSM and VM, truncate the FSM, truncate the VM, then free the fake relcache entry.
b.      PATCH: Mark the main fork's dirty buffers, create the fake relcache entry, mark the FSM and VM buffers, truncate the marked pages of the relation forks in a single pass, truncate the relation during XLOG replay, then free the fake relcache entry.

 

4.      smgrtruncate(), DropRelFileNodeBuffers()
-        The input arguments are changed to an array of fork numbers, an array of block numbers, and int nforks (the size of the forkNum array).
-        These now truncate the pages of the relation forks in a single pass.

 

5.      smgrdounlinkfork()
I modified this function because it calls DropRelFileNodeBuffers(). However, it is dead code that could be removed.
I did not remove it for now, because that is for the community to decide, not me.
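
To make the intended flow concrete, here is a rough, simplified sketch of
RelationTruncate() with the patch applied. It is an illustration only, not the
exact patch code: locking, error handling, and the WAL record details are
omitted, and the helper names are the tentative ones mentioned above.

/* Illustration only; not the exact patch code. */
void
RelationTruncate(Relation rel, BlockNumber nblocks)
{
    ForkNumber  forks[MAX_FORKNUM + 1];
    BlockNumber blocks[MAX_FORKNUM + 1];
    int         nforks = 0;

    /* Mark the FSM and VM pages to truncate; nothing is truncated yet. */
    blocks[nforks] = MarkFreeSpaceMapTruncateRel(rel, nblocks);
    if (BlockNumberIsValid(blocks[nforks]))
        forks[nforks++] = FSM_FORKNUM;

    blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);
    if (BlockNumberIsValid(blocks[nforks]))
        forks[nforks++] = VISIBILITYMAP_FORKNUM;

    /* WAL-log the truncation as before (omitted), then add the main fork. */
    forks[nforks] = MAIN_FORKNUM;
    blocks[nforks] = nblocks;
    nforks++;

    /* One scan of shared buffers and one truncation pass for all forks. */
    smgrtruncate(rel->rd_smgr, forks, blocks, nforks);
}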

 

C.     Performance Test

 

I set up synchronous streaming replication between a master and a standby.

 

In postgresql.conf:
autovacuum = off
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 16
max_locks_per_transaction = 10000
#shared_buffers = 8GB
#shared_buffers = 24GB

 

Objective: Measure VACUUM execution time while varying the shared_buffers size.

 

1. Create tables (e.g., 10,000 tables) and insert data into them.
2. DELETE FROM each table (e.g., all rows of the 10,000 tables).
3. psql -c "\timing on" (measures the total execution time of the SQL queries)
4. VACUUM (the whole database)

 

If you want to test with a large number of relations,
you may use the stored functions I used here:
http://bit.ly/reltruncates

 

D.     Results

 

HEAD results
1) 128MB shared_buffers = 48.885 s
2) 8GB shared_buffers = 5 min 30.695 s
3) 24GB shared_buffers = 14 min 13.598 s

 

PATCH results
1) 128MB shared_buffers = 42.736 s
2) 8GB shared_buffers = 2 min 26.464 s
3) 24GB shared_buffers = 5 min 35.848 s

 

The performance significantly improved compared to HEAD,
especially for large shared buffers.

 

---

I would appreciate your thoughts, comments, and advice.
Thank you in advance.

 

 

Regards,

Kirk Jamison


v1-0001-Speedup-truncate-of-relation-forks.patch (30K)

Re: [PATCH] Speedup truncates of relation forks

Adrien Nayrat-2
On 6/11/19 9:34 AM, Jamison, Kirk wrote:
> Hi all,
>
> Attached is a patch to speed up the performance of truncates of relations.
>

Thanks for working on this!

>
> *C.     **Performance Test*
>
> I setup a synchronous streaming replication between a master-standby.
>
> In postgresql.conf:
> autovacuum = off
> wal_level = replica
> max_wal_senders = 5
> wal_keep_segments = 16
> max_locks_per_transaction = 10000
> #shared_buffers = 8GB
> #shared_buffers = 24GB
>
> Objective: Measure VACUUM execution time; varying shared_buffers size.
>
> 1. Create table (ex. 10,000 tables). Insert data to tables.
> 2. DELETE FROM TABLE (ex. all rows of 10,000 tables)
> 3. psql -c "\timing on" (measures total execution of SQL queries)
> 4. VACUUM (whole db)
>
> If you want to test with large number of relations,
>
> you may use the stored functions I used here:
> http://bit.ly/reltruncates
You should post these functions in this thread for the archives ;)

>
> *D.     **Results*
>
> HEAD results
>
> 1) 128MB shared_buffers = 48.885 seconds
> 2) 8GB shared_buffers = 5 min 30.695 s
> 3) 24GB shared_buffers = 14 min 13.598 s
>
> PATCH results
>
> 1) 128MB shared_buffers = 42.736 s
> 2) 8GB shared_buffers = 2 min 26.464 s
> 3) 24GB shared_buffers = 5 min 35.848 s
>
> The performance significantly improved compared to HEAD,
> especially for large shared buffers.
>
From a user POV, the main issue with relation truncation is that it can block
queries on the standby server during truncation replay.

It could be interesting if you could test this case and give results for your patch,
maybe by performing read queries on the standby server and counting wait_event with
pg_wait_sampling?

Regards,

--
Adrien




Re: [PATCH] Speedup truncates of relation forks

Tomas Vondra-4
In reply to this post by Jamison, Kirk
On Tue, Jun 11, 2019 at 07:34:35AM +0000, Jamison, Kirk wrote:
>Hi all,
>
>Attached is a patch to speed up the performance of truncates of relations.
>This is also my first time to contribute my own patch,
>and I'd gladly appreciate your feedback and advice.
>

Thanks for the patch. Please add it to the commitfest app, so that we
don't forget about it: https://commitfest.postgresql.org/23/

>
>A.     Summary
>
>Whenever we truncate relations, it scans the shared buffers thrice
>(one per fork) which can be time-consuming. This patch improves
>the performance of relation truncates by initially marking the
>pages-to-be-truncated of relation forks, then simultaneously
>truncating them, resulting to an improved performance in VACUUM,
>autovacuum operations and their recovery performance.
>

OK, so essentially the whole point is to scan the buffers only once, for
all forks at the same time (instead of three times).

>
>B.     Patch Details
>The following functions were modified:
>
>
>1.      FreeSpaceMapTruncateRel() and visibilitymap_truncate()
>
>a.      CURRENT HEAD: These functions truncate the FSM pages and unused VM pages.
>
>b.      PATCH: Both functions only mark the pages to truncate and return a block number.
>
>-        We used to call smgrtruncate() in these functions, but these are now moved inside the RelationTruncate() and smgr_redo().
>
>-        The tentative renaming of the functions are: MarkFreeSpaceMapTruncateRel() and visibilitymap_mark_truncate(). Feel free to suggest better names.
>
>
>2.      RelationTruncate()
>
>a.      HEAD: Truncate FSM and VM first, then write WAL, and lastly truncate main fork.
>
>b.      PATCH: Now we mark FSM and VM pages first, write WAL, mark MAIN fork pages, then truncate all forks (MAIN, FSM, VM) simultaneously.
>
>
>3.      smgr_redo()
>
>a.      HEAD: Truncate main fork and the relation during XLOG replay, create fake rel cache for FSM and VM, truncate FSM, truncate VM, then free fake rel cache.
>
>b.      PATCH: Mark main fork dirty buffers, create fake rel cache, mark fsm and vm buffers, truncate marked pages of relation forks simultaneously, truncate relation during XLOG replay, then free fake rel cache.
>
>
>4.      smgrtruncate(), DropRelFileNodeBuffers()
>
>-        input arguments are changed to array of forknum and block numbers, int nforks (size of forkNum array)
>
>-        truncates the pages of relation forks simultaneously
>
>
>5.      smgrdounlinkfork()
>I modified the function because it calls DropRelFileNodeBuffers. However, this is a dead code that can be removed.
>I did not remove it for now because that's not for me but the community to decide.
>

You really don't need to extract the changes like this - such changes
are generally obvious from the diff.

You only need to explain things that are not obvious from the code
itself, e.g. non-trivial design decisions, etc.

>
>C.     Performance Test
>
>I setup a synchronous streaming replication between a master-standby.
>
>In postgresql.conf:
>autovacuum = off
>wal_level = replica
>max_wal_senders = 5
>wal_keep_segments = 16
>max_locks_per_transaction = 10000
>#shared_buffers = 8GB
>#shared_buffers = 24GB
>
>Objective: Measure VACUUM execution time; varying shared_buffers size.
>
>1. Create table (ex. 10,000 tables). Insert data to tables.
>2. DELETE FROM TABLE (ex. all rows of 10,000 tables)
>3. psql -c "\timing on" (measures total execution of SQL queries)
>4. VACUUM (whole db)
>
>If you want to test with large number of relations,
>you may use the stored functions I used here:
>http://bit.ly/reltruncates
>
>
>D.     Results
>
>HEAD results
>1) 128MB shared_buffers = 48.885 seconds
>2) 8GB shared_buffers = 5 min 30.695 s
>3) 24GB shared_buffers = 14 min 13.598 s
>
>PATCH results
>1) 128MB shared_buffers = 42.736 s
>2) 8GB shared_buffers = 2 min 26.464 s
>3) 24GB shared_buffers = 5 min 35.848 s
>
>The performance significantly improved compared to HEAD,
>especially for large shared buffers.
>

Right, that seems nice. And it matches the expected 1:3 speedup, at
least for the larger shared_buffers cases.

Years ago I've implemented an optimization for many DROP TABLE commands
in a single transaction - instead of scanning buffers for each relation,
the code now accumulates a small number of relations into an array, and
then does a bsearch for each buffer.

Would something like that be applicable/useful here? That is, if we do
multiple TRUNCATE commands in a single transaction, can we optimize it
like this?

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [PATCH] Speedup truncates of relation forks

Alvaro Herrera-9
On 2019-Jun-12, Tomas Vondra wrote:

> Years ago I've implemented an optimization for many DROP TABLE commands
> in a single transaction - instead of scanning buffers for each relation,
> the code now accumulates a small number of relations into an array, and
> then does a bsearch for each buffer.

commit 279628a0a7cf582f7dfb68e25b7b76183dd8ff2f:
    Accelerate end-of-transaction dropping of relations
   
    When relations are dropped, at end of transaction we need to remove the
    files and clean the buffer pool of buffers containing pages of those
    relations.  Previously we would scan the buffer pool once per relation
    to clean up buffers.  When there are many relations to drop, the
    repeated scans make this process slow; so we now instead pass a list of
    relations to drop and scan the pool once, checking each buffer against
    the passed list.  When the number of relations is larger than a
    threshold (which as of this patch is being set to 20 relations) we sort
    the array before starting, and bsearch the array; when it's smaller, we
    simply scan the array linearly each time, because that's faster.  The
    exact optimal threshold value depends on many factors, but the
    difference is not likely to be significant enough to justify making it
    user-settable.
   
    This has been measured to be a significant win (a 15x win when dropping
    100,000 relations; an extreme case, but reportedly a real one).
   
    Author: Tomas Vondra, some tweaks by me
    Reviewed by: Robert Haas, Shigeru Hanada, Andres Freund, Álvaro Herrera


--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



RE: [PATCH] Speedup truncates of relation forks

Tsunakawa, Takayuki
In reply to this post by Tomas Vondra-4
From: Tomas Vondra [mailto:[hidden email]]
> Years ago I've implemented an optimization for many DROP TABLE commands
> in a single transaction - instead of scanning buffers for each relation,
> the code now accumulates a small number of relations into an array, and
> then does a bsearch for each buffer.
>
> Would something like that be applicable/useful here? That is, if we do
> multiple TRUNCATE commands in a single transaction, can we optimize it
> like this?

Unfortunately not.  VACUUM and autovacuum handles each table in a different transaction.

BTW, what we really want to do is to keep the failover time within 10 seconds.  The customer periodically TRUNCATEs tens of thousands of tables.  If failover unluckily happens immediately after those TRUNCATEs, the recovery on the standby could take much longer.  But your past improvement seems likely to prevent that problem, if the customer TRUNCATEs tables in the same transaction.

On the other hand, it is quite possible that the customer TRUNCATEs only a single table per transaction, thus running as many transactions as there are TRUNCATEd tables.  So, we also want to speed up each TRUNCATE by touching only the buffers for that table, not scanning the whole shared buffers.  Andres proposed one method that uses a radix tree, but we don't have an idea of how to do that yet.

Speeding up each TRUNCATE and its recovery is a different topic.  The patch proposed here is one possible improvement to shorten the failover time.


Regards
Takayuki Tsunakawa







Re: [PATCH] Speedup truncates of relation forks

Masahiko Sawada
On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki
<[hidden email]> wrote:

>
> From: Tomas Vondra [mailto:[hidden email]]
> > Years ago I've implemented an optimization for many DROP TABLE commands
> > in a single transaction - instead of scanning buffers for each relation,
> > the code now accumulates a small number of relations into an array, and
> > then does a bsearch for each buffer.
> >
> > Would something like that be applicable/useful here? That is, if we do
> > multiple TRUNCATE commands in a single transaction, can we optimize it
> > like this?
>
> Unfortunately not.  VACUUM and autovacuum handles each table in a different transaction.

We do RelationTruncate() also when we truncate heaps that are created
in the current transactions or has a new relfilenodes in the current
transaction. So I think there is a room for optimization Thomas
suggested, although I'm not sure it's a popular use case.

I've not look at this patch deeply but in DropRelFileNodeBuffer I
think we can get the min value of all firstDelBlock and use it as the
lower bound of block number that we're interested in. That way we can
skip checking the array during scanning the buffer pool.

-extern void smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum,
-                             bool isRedo);
+extern void smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum,
+                             bool isRedo, int nforks);
-extern void smgrtruncate(SMgrRelation reln, ForkNumber forknum,
-                         BlockNumber nblocks);
+extern void smgrtruncate(SMgrRelation reln, ForkNumber *forknum,
+                         BlockNumber *nblocks, int nforks);

Don't we use each elements of nblocks for each fork? That is, each
fork uses an element at its fork number in the nblocks array and sets
InvalidBlockNumber for invalid slots, instead of passing the valid
number of elements. That way the following code that exist at many places,

    blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);
   if (BlockNumberIsValid(blocks[nforks]))
   {
       forks[nforks] = VISIBILITYMAP_FORKNUM;
       nforks++;
   }

would become

    blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel, nblocks);
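
To illustrate what I mean (a rough sketch only, not code from the patch): with
firstDelBlock[] indexed by fork number, and InvalidBlockNumber marking forks
that are not being truncated, the check in DropRelFileNodeBuffers() could look
like this:

    /* Sketch only: firstDelBlock[] is indexed by fork number here. */
    for (k = 0; k <= MAX_FORKNUM; k++)
    {
        /* skip forks that are not being truncated */
        if (!BlockNumberIsValid(firstDelBlock[k]))
            continue;

        if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
            bufHdr->tag.forkNum == k &&
            bufHdr->tag.blockNum >= firstDelBlock[k])
        {
            InvalidateBuffer(bufHdr);   /* releases spinlock */
            break;
        }
    }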

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



RE: [PATCH] Speedup truncates of relation forks

Jamison, Kirk
In reply to this post by Adrien Nayrat-2
On Tuesday, June 11, 2019 7:23 PM (GMT+9), Adrien Nayrat wrote:

> > Attached is a patch to speed up the performance of truncates of relations.
>
> Thanks for working on this!

Thank you also for taking a look at my thread.

> > If you want to test with large number of relations,
> > you may use the stored functions I used here:
> > http://bit.ly/reltruncates
>
> You should post these functions in this thread for the archives ;)
This is noted. Pasting it below:

create or replace function create_tables(numtabs int)
returns void as $$
declare query_string text;
begin
  for i in 1..numtabs loop
    query_string := 'create table tab_' || i::text || ' (a int);';
    execute query_string;
  end loop;
end;
$$ language plpgsql;

create or replace function delfrom_tables(numtabs int)
returns void as $$
declare query_string text;
begin
  for i in 1..numtabs loop
    query_string := 'delete from tab_' || i::text;
    execute query_string;
  end loop;
end;
$$ language plpgsql;

create or replace function insert_tables(numtabs int)
returns void as $$
declare query_string text;
begin
  for i in 1..numtabs loop
    query_string := 'insert into tab_' || i::text || ' VALUES (5);' ;
    execute query_string;
  end loop;
end;
$$ language plpgsql;


> From a user POW, the main issue with relation truncation is that it can block
> queries on standby server during truncation replay.
>
> It could be interesting if you can test this case and give results of your
> path.
> Maybe by performing read queries on standby server and counting wait_event
> with pg_wait_sampling?

Thanks for the suggestion. I tried using the pg_wait_sampling extension,
but I wasn't sure that I could replicate the problem of blocked queries on the standby server.
Could you advise?
Here's what I did for now, similar to my previous test with a hot standby setup,
but with additional read queries of wait events on the standby server.

128MB shared_buffers
SELECT create_tables(10000);
SELECT insert_tables(10000);
SELECT delfrom_tables(10000);

[Before VACUUM]
Standby: SELECT the following view from pg_stat_waitaccum

wait_event_type |   wait_event    | calls | microsec
-----------------+-----------------+-------+----------
 Client          | ClientRead      |     2 | 20887759
 IO              | DataFileRead    |   175 |     2788
 IO              | RelationMapRead |     4 |       26
 IO              | SLRURead        |     2 |       38

Primary: Execute VACUUM (induces relation truncates)

[After VACUUM]
Standby:
 wait_event_type |   wait_event    | calls | microsec
-----------------+-----------------+-------+----------
 Client          | ClientRead      |     7 | 77662067
 IO              | DataFileRead    |   284 |     4523
 IO              | RelationMapRead |    10 |       51
 IO              | SLRURead        |     3 |       57

Regards,
Kirk Jamison

RE: [PATCH] Speedup truncates of relation forks

Tsunakawa, Takayuki
In reply to this post by Masahiko Sawada
From: Masahiko Sawada [mailto:[hidden email]]
> We do RelationTruncate() also when we truncate heaps that are created
> in the current transactions or has a new relfilenodes in the current
> transaction. So I think there is a room for optimization Thomas
> suggested, although I'm not sure it's a popular use case.

Right, and I can't think of a use case that motivates the optimization, either.


> I've not look at this patch deeply but in DropRelFileNodeBuffer I
> think we can get the min value of all firstDelBlock and use it as the
> lower bound of block number that we're interested in. That way we can
> skip checking the array during scanning the buffer pool.

That sounds reasonable, although I haven't examined the code, either.


> Don't we use each elements of nblocks for each fork? That is, each
> fork uses an element at its fork number in the nblocks array and sets
> InvalidBlockNumber for invalid slots, instead of passing the valid
> number of elements. That way the following code that exist at many places,

I think the current patch tries to reduce the loop count in DropRelFileNodeBuffers() by passing the number of target forks.


Regards
Takayuki Tsunakawa


 

RE: [PATCH] Speedup truncates of relation forks

Jamison, Kirk
In reply to this post by Masahiko Sawada
On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:

> On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki
> <[hidden email]> wrote:
> >
> > From: Tomas Vondra [mailto:[hidden email]]
> > > Years ago I've implemented an optimization for many DROP TABLE
> > > commands in a single transaction - instead of scanning buffers for
> > > each relation, the code now accumulates a small number of relations
> > > into an array, and then does a bsearch for each buffer.
> > >
> > > Would something like that be applicable/useful here? That is, if we
> > > do multiple TRUNCATE commands in a single transaction, can we
> > > optimize it like this?
> >
> > Unfortunately not.  VACUUM and autovacuum handles each table in a different
> transaction.
>
> We do RelationTruncate() also when we truncate heaps that are created in the
> current transactions or has a new relfilenodes in the current transaction.
> So I think there is a room for optimization Thomas suggested, although I'm
> not sure it's a popular use case.

I couldn't think of a use case either.

> I've not look at this patch deeply but in DropRelFileNodeBuffer I think we
> can get the min value of all firstDelBlock and use it as the lower bound of
> block number that we're interested in. That way we can skip checking the array
> during scanning the buffer pool.

I'll take note of this suggestion.
Could you help me expound more on this idea, skipping the internal loop by
comparing the min and buffer descriptor (bufHdr)?

In the current patch, I've implemented the following in DropRelFileNodeBuffers:
        for (i = 0; i < NBuffers; i++)
        {
                ...
                buf_state = LockBufHdr(bufHdr);
                for (k = 0; k < nforks; k++)
                {
                        if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
                                bufHdr->tag.forkNum == forkNum[k] &&
                                bufHdr->tag.blockNum >= firstDelBlock[k])
                        {
                                InvalidateBuffer(bufHdr); /* releases spinlock */
                                break;
                        }

> Don't we use each elements of nblocks for each fork? That is, each fork uses
> an element at its fork number in the nblocks array and sets InvalidBlockNumber
> for invalid slots, instead of passing the valid number of elements. That way
> the following code that exist at many places,
>
>     blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);
>    if (BlockNumberIsValid(blocks[nforks]))
>    {
>        forks[nforks] = VISIBILITYMAP_FORKNUM;
>        nforks++;
>    }
>
> would become
>
>     blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,
> nblocks);

In the patch, we want to truncate all forks' blocks simultaneously, so
we optimize the invalidation of buffers and reduce the number of loops
using those values.
The suggestion above would have to remove the forks array and its
forksize (nforks), is it correct? But I think we’d need the fork array
and nforks to execute the truncation all at once.
If I'm missing something, I'd really appreciate your further comments.

--
Thank you everyone for taking a look at my thread.
I've also already added this patch to the CommitFest app.

Regards,
Kirk Jamison

Re: [PATCH] Speedup truncates of relation forks

Masahiko Sawada
On Thu, Jun 13, 2019 at 6:30 PM Jamison, Kirk <[hidden email]> wrote:

>
> On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:
> > On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki
> > <[hidden email]> wrote:
> > >
> > > From: Tomas Vondra [mailto:[hidden email]]
> > > > Years ago I've implemented an optimization for many DROP TABLE
> > > > commands in a single transaction - instead of scanning buffers for
> > > > each relation, the code now accumulates a small number of relations
> > > > into an array, and then does a bsearch for each buffer.
> > > >
> > > > Would something like that be applicable/useful here? That is, if we
> > > > do multiple TRUNCATE commands in a single transaction, can we
> > > > optimize it like this?
> > >
> > > Unfortunately not.  VACUUM and autovacuum handles each table in a different
> > transaction.
> >
> > We do RelationTruncate() also when we truncate heaps that are created in the
> > current transactions or has a new relfilenodes in the current transaction.
> > So I think there is a room for optimization Thomas suggested, although I'm
> > not sure it's a popular use case.
>
> I couldn't think of a use case too.
>
> > I've not look at this patch deeply but in DropRelFileNodeBuffer I think we
> > can get the min value of all firstDelBlock and use it as the lower bound of
> > block number that we're interested in. That way we can skip checking the array
> > during scanning the buffer pool.
>
> I'll take note of this suggestion.
> Could you help me expound more on this idea, skipping the internal loop by
> comparing the min and buffer descriptor (bufHdr)?
>

Yes. For example,

    BlockNumber minBlock = InvalidBlockNumber;
(snip)
    /* Get lower bound block number we're interested in */
    for (i = 0; i < nforks; i++)
    {
        if (!BlockNumberIsValid(minBlock) ||
            minBlock > firstDelBlock[i])
            minBlock = firstDelBlock[i];
    }

    for (i = 0; i < NBuffers; i++)
    {
(snip)
        buf_state = LockBufHdr(bufHdr);

        /* check with the lower bound and skip the loop */
        if (bufHdr->tag.blockNum < minBlock)
        {
            UnlockBufHdr(bufHdr, buf_state);
            continue;
        }

        for (k = 0; k < nforks; k++)
        {
            if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
                bufHdr->tag.forkNum == forkNum[k] &&
                bufHdr->tag.blockNum >= firstDelBlock[k])

But since we acquire the buffer header lock after all and the number
of the internal loops is small (at most 3 for now)  the benefit will
not be big.

> In the current patch, I've implemented the following in DropRelFileNodeBuffers:
>         for (i = 0; i < NBuffers; i++)
>         {
>                 ...
>                 buf_state = LockBufHdr(bufHdr);
>                 for (k = 0; k < nforks; k++)
>                 {
>                         if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
>                                 bufHdr->tag.forkNum == forkNum[k] &&
>                                 bufHdr->tag.blockNum >= firstDelBlock[k])
>                         {
>                                 InvalidateBuffer(bufHdr); /* releases spinlock */
>                                 break;
>                         }
>
> > Don't we use each elements of nblocks for each fork? That is, each fork uses
> > an element at its fork number in the nblocks array and sets InvalidBlockNumber
> > for invalid slots, instead of passing the valid number of elements. That way
> > the following code that exist at many places,
> >
> >     blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);
> >    if (BlockNumberIsValid(blocks[nforks]))
> >    {
> >        forks[nforks] = VISIBILITYMAP_FORKNUM;
> >        nforks++;
> >    }
> >
> > would become
> >
> >     blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,
> > nblocks);
>
> In the patch, we want to truncate all forks' blocks simultaneously, so
> we optimize the invalidation of buffers and reduce the number of loops
> using those values.
> The suggestion above would have to remove the forks array and its
> forksize (nforks), is it correct? But I think we’d need the fork array
> and nforks to execute the truncation all at once.

I meant that each fork can use its fork number's element of
firstDelBlock[]. For example, if firstDelBlock = {1000,
InvalidBlockNumber, 20, InvalidBlockNumber}, we can invalidate the
buffers at block number 1000 or higher of the main fork and at block
number 20 or higher of the vm fork. Since firstDelBlock[FSM_FORKNUM] ==
InvalidBlockNumber, we don't invalidate any buffers of the fsm fork.

As Tsunakawa-san mentioned, since your approach reduces the loop
count, your idea might be better than mine, which always takes 4 loop
iterations.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



RE: [PATCH] Speedup truncates of relation forks

Tsunakawa, Takayuki
From: Masahiko Sawada [mailto:[hidden email]]

>     for (i = 0; i < NBuffers; i++)
>     {
> (snip)
>         buf_state = LockBufHdr(bufHdr);
>
>         /* check with the lower bound and skip the loop */
>         if (bufHdr->tag.blockNum < minBlock)
>         {
>             UnlockBufHdr(bufHdr, buf_state);
>             continue;
>         }
>
>         for (k = 0; k < nforks; k++)
>         {
>             if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
>                 bufHdr->tag.forkNum == forkNum[k] &&
>                 bufHdr->tag.blockNum >= firstDelBlock[k])
>
> But since we acquire the buffer header lock after all and the number
> of the internal loops is small (at most 3 for now)  the benefit will
> not be big.

Yeah, so I think we can just compare the block number without locking the buffer header here.
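
Something like the following shape is what I have in mind (a rough sketch
only, inside the loop over NBuffers; not the actual patch code). The cheap
comparison is done without the buffer header lock, and the full check is
still done while holding it:

    bufHdr = GetBufferDescriptor(i);

    /* cheap pre-check done without taking the buffer header lock */
    if (bufHdr->tag.blockNum < minBlock)
        continue;

    buf_state = LockBufHdr(bufHdr);

    /* full check is still done while holding the buffer header lock */
    for (k = 0; k < nforks; k++)
    {
        if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
            bufHdr->tag.forkNum == forkNum[k] &&
            bufHdr->tag.blockNum >= firstDelBlock[k])
        {
            InvalidateBuffer(bufHdr);   /* releases spinlock */
            break;
        }
    }
    if (k == nforks)
        UnlockBufHdr(bufHdr, buf_state);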


Regards
Takayuki Tsunakawa


RE: [PATCH] Speedup truncates of relation forks

Jamison, Kirk
In reply to this post by Masahiko Sawada
Hi Sawada-san,

On Thursday, June 13, 2019 8:01 PM, Masahiko Sawada wrote:

> On Thu, Jun 13, 2019 at 6:30 PM Jamison, Kirk <[hidden email]>
> wrote:
> >
> > On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:
> > > ...
> > > I've not look at this patch deeply but in DropRelFileNodeBuffer I
> > > think we can get the min value of all firstDelBlock and use it as
> > > the lower bound of block number that we're interested in. That way
> > > we can skip checking the array during scanning the buffer pool.
> >
> > I'll take note of this suggestion.
> > Could you help me expound more on this idea, skipping the internal
> > loop by comparing the min and buffer descriptor (bufHdr)?
> >
>
> Yes. For example,
>
>     BlockNumber minBlock = InvalidBlockNumber;
> (snip)
>     /* Get lower bound block number we're interested in */
>     for (i = 0; i < nforks; i++)
>     {
>         if (!BlockNumberIsValid(minBlock) ||
>             minBlock > firstDelBlock[i])
>             minBlock = firstDelBlock[i];
>     }
>
>     for (i = 0; i < NBuffers; i++)
>     {
> (snip)
>         buf_state = LockBufHdr(bufHdr);
>
>         /* check with the lower bound and skip the loop */
>         if (bufHdr->tag.blockNum < minBlock)
>         {
>             UnlockBufHdr(bufHdr, buf_state);
>             continue;
>         }
>
>         for (k = 0; k < nforks; k++)
>         {
>             if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
>                 bufHdr->tag.forkNum == forkNum[k] &&
>                 bufHdr->tag.blockNum >= firstDelBlock[k])
>
> But since we acquire the buffer header lock after all and the number of the
> internal loops is small (at most 3 for now)  the benefit will not be big.

Thank you very much for your kind and detailed explanation.
I'll still consider your suggestions in the next patch and optimize it further,
so that we might not need to acquire the buffer header lock at all.


> > > Don't we use each elements of nblocks for each fork? That is, each
> > > fork uses an element at its fork number in the nblocks array and
> > > sets InvalidBlockNumber for invalid slots, instead of passing the
> > > valid number of elements. That way the following code that exist at
> > > many places,
> > >
> > >     blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);
> > >    if (BlockNumberIsValid(blocks[nforks]))
> > >    {
> > >        forks[nforks] = VISIBILITYMAP_FORKNUM;
> > >        nforks++;
> > >    }
> > >
> > > would become
> > >
> > >     blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,
> > > nblocks);
> >
> > In the patch, we want to truncate all forks' blocks simultaneously, so
> > we optimize the invalidation of buffers and reduce the number of loops
> > using those values.
> > The suggestion above would have to remove the forks array and its
> > forksize (nforks), is it correct? But I think we’d need the fork array
> > and nforks to execute the truncation all at once.
>
> I meant that each forks can use the its forknumber'th element of
> firstDelBlock[]. For example, if firstDelBlock = {1000, InvalidBlockNumber,
> 20, InvalidBlockNumber}, we can invalid buffers pertaining both greater than
> block number 1000 of main and greater than block number 20 of vm. Since
> firstDelBlock[FSM_FORKNUM] == InvalidBlockNumber we don't invalid buffers
> of fsm.
>
> As Tsunakawa-san mentioned, since your approach would reduce the loop count
> your idea might be better than mine which always takes 4 loop counts.

Understood. Thank you again for the kind and detailed explanations.
I'll reconsider these approaches.

Regards,
Kirk Jamison

RE: [PATCH] Speedup truncates of relation forks

Jamison, Kirk
Hi all,

Attached is v2 of the patch. I added the optimization that Sawada-san
suggested for DropRelFileNodeBuffers(), although I did not acquire the buffer
header lock when comparing minBlock and the target block.

There's actually a comment in the source code saying that we could pre-check
the buffer tag for forkNum and blockNum, but given that the FSM and VM are
small compared to the main fork, the additional benefit of doing so would be
small.

>* We could check forkNum and blockNum as well as the rnode, but the
>* incremental win from doing so seems small.

I personally think it's alright not to include the suggested pre-checking.
In that case, we can just follow the v1 version of the patch.

Thoughts?

Comments and reviews from other parts of the patch are also very much welcome.

Regards,
Kirk Jamison

v2-0001-Speedup-truncates-of-relation-forks.patch (31K)

Re: [PATCH] Speedup truncates of relation forks

Adrien Nayrat-2
In reply to this post by Jamison, Kirk
On 6/12/19 10:29 AM, Jamison, Kirk wrote:

>
>> From a user POW, the main issue with relation truncation is that it can block
>> queries on standby server during truncation replay.
>>
>> It could be interesting if you can test this case and give results of your
>> path.
>> Maybe by performing read queries on standby server and counting wait_event
>> with pg_wait_sampling?
>
> Thanks for the suggestion. I tried using the extension pg_wait_sampling,
> But I wasn't sure that I could replicate the problem of blocked queries on standby server.
> Could you advise?
> Here's what I did for now, similar to my previous test with hot standby setup,
> but with additional read queries of wait events on standby server.
>
> 128MB shared_buffers
> SELECT create_tables(10000);
> SELECT insert_tables(10000);
> SELECT delfrom_tables(10000);
>
> [Before VACUUM]
> Standby: SELECT the following view from pg_stat_waitaccum
>
> wait_event_type |   wait_event    | calls | microsec
> -----------------+-----------------+-------+----------
>  Client          | ClientRead      |     2 | 20887759
>  IO              | DataFileRead    |   175 |     2788
>  IO              | RelationMapRead |     4 |       26
>  IO              | SLRURead        |     2 |       38
>
> Primary: Execute VACUUM (induces relation truncates)
>
> [After VACUUM]
> Standby:
>  wait_event_type |   wait_event    | calls | microsec
> -----------------+-----------------+-------+----------
>  Client          | ClientRead      |     7 | 77662067
>  IO              | DataFileRead    |   284 |     4523
>  IO              | RelationMapRead |    10 |       51
>  IO              | SLRURead        |     3 |       57
>
(Sorry for the delay, I forgot to answer you.)

As far as I remember, you should see "relation" wait events (type lock) on the
standby server. This is due to the startup process acquiring an
AccessExclusiveLock for the truncation while other backends wait to acquire a
lock to read the table.

On the primary server, vacuum is able to cancel the truncation:

/*
 * We need full exclusive lock on the relation in order to do
 * truncation. If we can't get it, give up rather than waiting --- we
 * don't want to block other backends, and we don't want to deadlock
 * (which is quite possible considering we already hold a lower-grade
 * lock).
 */
vacrelstats->lock_waiter_detected = false;
lock_retry = 0;
while (true)
{
    if (ConditionalLockRelation(onerel, AccessExclusiveLock))
        break;

    /*
     * Check for interrupts while trying to (re-)acquire the exclusive
     * lock.
     */
    CHECK_FOR_INTERRUPTS();

    if (++lock_retry > (VACUUM_TRUNCATE_LOCK_TIMEOUT /
                        VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL))
    {
        /*
         * We failed to establish the lock in the specified number of
         * retries. This means we give up truncating.
         */
        vacrelstats->lock_waiter_detected = true;
        ereport(elevel,
                (errmsg("\"%s\": stopping truncate due to conflicting lock request",
                        RelationGetRelationName(onerel))));
        return;
    }

    pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);
}


To maximize the chances of reproducing this, we can use a big shared_buffers.
But I am afraid it is not easy to perform reproducible tests to compare results.
Unfortunately, I don't have servers to perform such tests.

Regards,

