pg_dump fails when a table is in ACCESS SHARE MODE

pg_dump fails when a table is in ACCESS SHARE MODE

Viral Shah
Hello All,

I am facing problems taking a logical backup of my database with pg_dump. We have set up our infrastructure so that logical backups are taken on our DR server; our intention is to avoid substantial load on the primary. We also pause and resume WAL replay with pg_wal_replay_pause and pg_wal_replay_resume before and after pg_dump. However, if a table is held under an ACCESS SHARE lock at the time WAL replay is paused, pg_dump cannot complete the backup.
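A minimal sketch of the workflow just described; the host, database, and backup path are placeholders for illustration, not values from this thread:

```shell
# Hypothetical runbook. "dr-standby", "mydb", and the backup path are
# placeholders, not values from this thread.

# 1. Pause WAL replay so the standby presents a frozen snapshot.
psql -h dr-standby -d mydb -c "SELECT pg_wal_replay_pause();"

# 2. Take the logical backup from the now-static standby.
pg_dump -h dr-standby -d mydb -Fc -f /backups/mydb.dump
status=$?

# 3. Resume replay even if the dump failed, so the standby catches up.
psql -h dr-standby -d mydb -c "SELECT pg_wal_replay_resume();"

exit $status
```

Capturing pg_dump's exit status before resuming replay ensures the standby is never left paused after a failed backup.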

Can anyone suggest a better way to take a logical backup with pg_dump, such that a table lock doesn't cause the backup to fail?

PS: we are using PostgreSQL 10.12

Thanks, 
Viral Shah
Data Analyst, Nodal Exchange LLC
[hidden email]
(240) 645 7548
Re: pg_dump fails when a table is in ACCESS SHARE MODE

ichbinrene

Hi Viral, what error is pg_dump throwing at you? And what is the current setting of hot_standby_feedback on your replica?
--
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison


Re: pg_dump fails when a table is in ACCESS SHARE MODE

Viral Shah
Hello Rene,

Since I am using pg_wal_replay_pause before pg_dump, it doesn't throw an error. Instead, it waits for the session holding the lock to release it so that it can proceed with the dump. hot_standby_feedback is set to on on both my replica and my primary.
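For reference, one way to verify that setting on each node (placeholder host names; the value that matters is the standby's, since the standby is the side that sends feedback):

```shell
# -A -t print just the bare setting value; hosts are placeholders.
psql -h primary    -Atc "SHOW hot_standby_feedback;"
psql -h dr-standby -Atc "SHOW hot_standby_feedback;"
```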

Thanks,
Viral Shah




Re: pg_dump fails when a table is in ACCESS SHARE MODE

ichbinrene

Nothing off the top of my head explains this behavior. If possible, try disabling hot_standby_feedback on the replica for a bit (on the master it has no effect) and see if that helps. ACCESS SHARE locks shouldn't interfere with pg_dump. Are the master and the slave on exactly the same version, including the minor release? It might be a bug, so try upgrading both ends to the latest minor release. When did you start seeing this behavior? How long have you been on PG 10 and creating backups this way? Any recent configuration changes?
Since you're on PG 10, you could also try logical replication. It's more reliable for this sort of thing, since activity on the master doesn't affect the subscriber lock-wise at all.
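A rough sketch of what that PG 10 logical replication setup could look like; the publication, subscription, host, and user names here are invented for illustration:

```shell
# On the primary: publish every table in the database.
psql -h primary -d mydb -c "CREATE PUBLICATION backup_pub FOR ALL TABLES;"

# On the machine that will hold the copy: subscribing performs an initial
# data copy, then streams changes. Note: logical replication does not copy
# schema, so the subscriber must already have matching table definitions.
psql -h backup-host -d mydb -c "CREATE SUBSCRIPTION backup_sub \
    CONNECTION 'host=primary dbname=mydb user=repl_user' \
    PUBLICATION backup_pub;"
```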



Re: pg_dump fails when a table is in ACCESS SHARE MODE

ichbinrene


Also, check for long-running transactions on the master. Does the replica serve as a data source for analytic applications (reports)?


Re: pg_dump fails when a table is in ACCESS SHARE MODE

Viral Shah
Hello Rene,

Apologies for the late reply. To answer your question: disabling hot_standby_feedback doesn't help. Also, both my master and my standby run the same PostgreSQL minor version, 10.12. One other thing I have tried is to create a standalone server and take a backup of the same database using pg_dump with parallel jobs set to 10. I made sure no processes other than the autovacuum jobs were running on the database. Despite this, the dump fails with the following error:

pg_dump: [archiver (db)] query failed: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: [parallel archiver] a worker process died unexpectedly

Any suggestions would be appreciated. :)

Thanks
Viral Shah
Senior Data Analyst, Nodal Exchange LLC



Re: pg_dump fails when a table is in ACCESS SHARE MODE

ichbinrene
Did you try increasing max_locks_per_transaction as the hint message says? What are your current settings for max_locks_per_transaction and shared_buffers?


Re: pg_dump fails when a table is in ACCESS SHARE MODE

Viral Shah
pg_dump works if I increase max_locks_per_transaction from its default value to 150. However, what I am trying to figure out is why it won't work at 64.
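For the record, the change itself can be sketched as follows (the data directory path is a placeholder; note this parameter only takes effect after a server restart):

```shell
# Persist the new value via ALTER SYSTEM (writes postgresql.auto.conf).
psql -c "ALTER SYSTEM SET max_locks_per_transaction = 150;"

# max_locks_per_transaction can only change at server start.
pg_ctl restart -D /var/lib/postgresql/10/data
```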

Per the Postgres documentation, the formula

max_locks_per_transaction * (max_connections + max_prepared_transactions)

determines the maximum number of objects that can be locked on the server at once. Currently my database has the following values:
max_locks_per_transaction = 64 (default)
max_connections = 100
max_prepared_transactions = 0
Plugging these values into the formula says our Postgres server can/should handle 6400 locked objects at a time.
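That arithmetic can be checked directly (the values below are the ones quoted in this thread):

```python
# Shared lock-table capacity, per the formula in the PostgreSQL docs.
max_locks_per_transaction = 64   # default
max_connections = 100
max_prepared_transactions = 0

capacity = max_locks_per_transaction * (max_connections + max_prepared_transactions)
print(capacity)  # 6400
```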

What is surprising is that Postgres complains of insufficient lock slots when only 10 processes (pg_dump's parallel jobs) are running against the database during the dump.

And to answer your question: max_locks_per_transaction is currently 64, and shared_buffers is 30 GB.

Thanks,
Viral Shah
Senior Data Analyst, Nodal Exchange LLC




Re: pg_dump fails when a table is in ACCESS SHARE MODE

Tom Lane
Viral Shah <[hidden email]> writes:
> Per the postgres documentation,
> max_locks_per_transaction * (max_connections + max_prepared_transactions)
> this formula determines the max number of allowed objects that can be locked on
> the database. Currently my database has the following values:
> max_locks_per_transaction = 64 (default)
> max_connections = 100
> max_prepared_transactions = 0
> Using this value in the above formula tells that our database or rather
> postgres server can/should handle 6400 locked objects at a time.

Right.

> What is surprising is why Postgres complains of insufficient locks per
> transaction if only 10 processes (parallel jobs in pg_dump) are running on
> the database while taking the dump.

They're taking more than 6400 locks, evidently.   How many tables are
in your database?  Have you tried looking into pg_locks while the dump
is running?
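One hypothetical way to do that from another session while the dump runs (the database name is a placeholder):

```shell
# Count relation locks per holding backend, heaviest first.
psql -d mydb -Atc "
  SELECT pid, mode, count(*)
  FROM pg_locks
  WHERE locktype = 'relation'
  GROUP BY pid, mode
  ORDER BY count(*) DESC;"
```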

                        regards, tom lane


Re: pg_dump fails when a table is in ACCESS SHARE MODE

Viral Shah
Hello Tom,

I have about 7,400 tables in my database. When I ran a select on pg_locks while attempting a pg_dump, it returned about 7,400 rows, all originating from one process that was running pg_catalog.pg_get_statisticsobjdef() on behalf of pg_dump.
Now, this does explain why I need to increase max_locks_per_transaction, so that I can account for all the locks pg_dump takes while it is in progress. I am still trying to understand why pg_get_statisticsobjdef() takes AccessShare locks on all the base tables at once, leading to this issue.

Thanks,
Viral Shah
Senior Data Analyst, Nodal Exchange LLC



Re: pg_dump fails when a table is in ACCESS SHARE MODE

David G Johnston
On Friday, May 1, 2020, Viral Shah <[hidden email]> wrote:
Now, this does explain why I need to increase max_locks_per_transaction, so that I can account for all the locks pg_dump takes while it is in progress. I am still trying to understand why pg_get_statisticsobjdef() takes AccessShare locks on all the base tables at once, leading to this issue.

You are dumping your entire database as a consistent point-in-time whole; how would that be doable without locking every data-containing relation in the database?

David J.