how to investigate GIN fast updates and cleanup cycles?


how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4 hours) slowness with UPDATEs on a table that has a GIN index on a JSONB column. During these episodes, UPDATEs that normally take < 1sec take upwards of 2-4 minutes, and all finish simultaneously, like they were all blocked on something and finally got released.

My team and I have done the usual stuff looking at the system itself and its underlying infrastructure, and we don't think that's it. It does seem to be localized to just this table.

We're reading about GIN fast updates and noted that:

As of PostgreSQL 8.4, GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. When the table is vacuumed, or if the pending list becomes too large (larger than work_mem), the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional vacuum overhead. Moreover the overhead work can be done by a background process instead of in foreground query processing.
The main disadvantage of this approach is that searches must scan the list of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update that causes the pending list to become "too large" will incur an immediate cleanup cycle and thus be much slower than other updates. 

I am wondering if the UPDATEs are hitting some blockage on the table due to this cleanup cycle. Is this a possibility, and if so how can I investigate this? What queries can I run or stats can I check to see if this pending list is approaching work_mem, or we're hitting this cleanup cycle? Is there any logging I can enable to see when these cleanups are occurring?

One test we are doing right now is running VACUUMs continuously on the table to see if this slowness stops happening. We should know within a few hours. If this seems to help, we'll try turning off FASTUPDATE on the index.

Any other ideas or suggestions would be appreciated, thanks!

My non-default postgres settings are [here in a gist](https://gist.github.com/skehlet/fd4945f9c9ce6ed075f1).


Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
Steve Kehlet <[hidden email]> writes:
> This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
> hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
> column. During these episodes, UPDATEs that normally take < 1sec take
> upwards of 2-4 minutes, and all finish simultaneously, like they were all
> blocked on something and finally got released.

Hm ... have you tried checking pg_locks to see if they're blocked on
something identifiable?

You might be right that this is caused by flushing the GIN pending list,
but I thought that that was not supposed to block concurrent insertions.
What I'd expect to see is *one* insert taking significantly longer than
normal, but no effect on concurrent operations.  Also, 2-4 minutes sounds
much longer than should be needed to flush a 10MB pending list, anyway.

                        regards, tom lane
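For anyone following along, a minimal sketch of such a pg_locks check (uses only the stock pg_locks and pg_stat_activity views as they exist in 9.4; nothing here is specific to GIN):

```sql
-- Sketch: show sessions waiting on an ungranted lock, with their query text.
SELECT l.pid, l.locktype, l.mode, l.relation::regclass AS relation, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```

If the stalled UPDATEs were blocked on a heavyweight lock, they would show up here with granted = false; the thread below explores what it means when they don't.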


--
Sent via pgsql-general mailing list ([hidden email])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
On Fri, Aug 28, 2015 at 10:11 AM Tom Lane <[hidden email]> wrote:
Steve Kehlet <[hidden email]> writes:
> This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
> hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
> column. During these episodes, UPDATEs that normally take < 1sec take
> upwards of 2-4 minutes, and all finish simultaneously, like they were all
> blocked on something and finally got released.

Hm ... have you tried checking pg_locks to see if they're blocked on
something identifiable?

Yes, I should have mentioned that, I have a cronjob going every minute dumping out [blocked/blocking queries](https://gist.github.com/skehlet/fbf5f52e18149e14e520) and nothing has shown up related to these queries (there were some other normal unrelated results, so I believe the job+query itself are working). After several incidents I believe it would have logged something.
 
You might be right that this is caused by flushing the GIN pending list,
but I thought that that was not supposed to block concurrent insertions.
What I'd expect to see is *one* insert taking significantly longer than
normal, but no effect on concurrent operations.  Also, 2-4 minutes sounds
much longer than should be needed to flush a 10MB pending list, anyway.

Yeah head scratch. That is really weird. Still gathering data, any way I can see for sure when these cleanup cycles are occurring?

Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
Steve Kehlet <[hidden email]> writes:
> Yeah head scratch. That is really weird. Still gathering data, any way I
> can see for sure when these cleanup cycles are occurring?

contrib/pgstattuple has a function that can report the current size of the
pending list for a GIN index.  If you were to capture that every so often,
it'd be pretty easy to confirm or refute whether these episodes are
associated with list cleanup.

                        regards, tom lane
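A sketch of that sampling query (pgstatginindex ships with contrib/pgstattuple in 9.4; the index name is illustrative):

```sql
-- Sketch: sample the pending-list size for a GIN index.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT now() AS sampled_at,
       pending_pages,     -- 8 kB pages currently in the pending list
       pending_tuples
FROM pgstatginindex('my_gin_index');
```

Capturing this every minute or so and comparing pending_pages * 8 kB against work_mem shows how close the list is to triggering a cleanup cycle.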



Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
In reply to this post by Steve Kehlet
Steve Kehlet <[hidden email]> writes:
> On Fri, Aug 28, 2015 at 10:11 AM Tom Lane <[hidden email]> wrote:
>> Hm ... have you tried checking pg_locks to see if they're blocked on
>> something identifiable?

> Yes, I should have mentioned that, I have a cronjob going every minute
> dumping out [blocked/blocking queries](
> https://gist.github.com/skehlet/fbf5f52e18149e14e520) and nothing has shown
> up related to these queries (there were some other normal unrelated
> results, so I believe the job+query itself are working).

BTW, I think your query is probably missing some cases:

( blockingl.transactionid=blockedl.transactionid
    OR
    (blockingl.relation=blockedl.relation AND blockingl.locktype=blockedl.locktype)
  )

This supposes that locks of different strengths don't block each other,
which is certainly wrong.  I think you could probably just drop the check
on locktype.  You might want to tighten the WHERE to "WHERE
blockingl.granted AND NOT blockedl.granted", as well.

                        regards, tom lane



Re: how to investigate GIN fast updates and cleanup cycles?

Jeff Janes
In reply to this post by Steve Kehlet
On Fri, Aug 28, 2015 at 10:00 AM, Steve Kehlet <[hidden email]> wrote:
This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4 hours) slowness with UPDATEs on a table that has a GIN index on a JSONB column. During these episodes, UPDATEs that normally take < 1sec take upwards of 2-4 minutes, and all finish simultaneously, like they were all blocked on something and finally got released.

My team and I have done the usual stuff looking at the system itself and its underlying infrastructure, and we don't think that's it. It does seem to be localized to just this table.

We're reading about GIN fast updates and noted that:

As of PostgreSQL 8.4, GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. When the table is vacuumed, or if the pending list becomes too large (larger than work_mem), the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional vacuum overhead. Moreover the overhead work can be done by a background process instead of in foreground query processing.
The main disadvantage of this approach is that searches must scan the list of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update that causes the pending list to become "too large" will incur an immediate cleanup cycle and thus be much slower than other updates. 

I am wondering if the UPDATEs are hitting some blockage on the table due to this cleanup cycle. Is this a possibility, and if so how can I investigate this? What queries can I run or stats can I check to see if this pending list is approaching work_mem, or we're hitting this cleanup cycle? Is there any logging I can enable to see when these cleanups are occurring?

That is hard to do in 9.4.  PostgreSQL 9.5 will add the gin_metapage_info function to the pageinspect module which makes it easier. If you don't mind doing some hacking, you could probably get the new pageinspect to run on 9.4.  

Or what I usually do in a case like this is clone the database to a test/QA server then run pg_upgrade to get that running on 9.5, then hope what I learn transfers back to production.

But the symptoms you describe are exactly what I expect from these clean up problems, so I would just assume that that is the problem.

The easiest solution is to turn off fastupdate for that index.  Each update will then be individually slower, but you won't have the periodic lock up you currently do.  If that doesn't work for you, then you can change the autoanalyze settings for the table so that it gets auto-analyzed very frequently.
 

One test we are doing right now is running VACUUMs continuously on the table to see if this slowness stops happening. We should know within a few hours. If this seems to help, we'll try turning off FASTUPDATE on the index.

Vacuum is overkill (and can be extremely slow to run a large gin index), you just need to get it to autoanalyze by changing the per-table setting of "autovacuum_vacuum_scale_factor" to zero and instead using "autovacuum_analyze_threshold" to control the autovacuum process for that table.  Note that a manual ANALYZE will *not* clear the pending list, it has to be an autoanalyze.  (Manual VACUUM will clear the pending list, but you might have trouble getting manual VACUUM to complete fast enough)



Any other ideas or suggestions would be appreciated, thanks!


The bottleneck is generally a lot of random IO.  Can you put the gin index on SSD?

Cheers,

Jeff

Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
In reply to this post by Tom Lane
I wrote:
> BTW, I think your query is probably missing some cases:

> ( blockingl.transactionid=blockedl.transactionid
>     OR
>     (blockingl.relation=blockedl.relation AND blockingl.locktype=blockedl.locktype)
>   )

> This supposes that locks of different strengths don't block each other,
> which is certainly wrong.

Oh, wait, scratch that: I was confusing locktype with mode.  I'm still
suspicious about whether this is covering all cases though.  It's
certainly not trying very hard to make sure the locks match up; though
that would tend to lead to too many reports not too few.  Still, you could
be missing a lock conflict on some other kind of lock.  I'd suggest
writing this part along the lines of

(blockingl.locktype,
 blockingl.database,
 blockingl.relation,
 blockingl.page,
 blockingl.tuple,
 blockingl.virtualxid,
 blockingl.transactionid,
 blockingl.classid,
 blockingl.objid,
 blockingl.objsubid)
 IS NOT DISTINCT FROM
(blockedl.locktype,
 blockedl.database,
 blockedl.relation,
 blockedl.page,
 blockedl.tuple,
 blockedl.virtualxid,
 blockedl.transactionid,
 blockedl.classid,
 blockedl.objid,
 blockedl.objsubid)

which should reliably give the right answer.

                        regards, tom lane
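Assembled into a self-contained version of that blocking-pair join (aliases follow the thread's convention; treat this as a sketch rather than a drop-in replacement for the query in the gist):

```sql
-- Sketch: pair waiting lockers with holders using the full lock identity.
SELECT blockedl.pid  AS blocked_pid,
       blockingl.pid AS blocking_pid,
       blockedl.locktype,
       blockedl.mode  AS waiting_mode,
       blockingl.mode AS holding_mode
FROM pg_locks blockedl
JOIN pg_locks blockingl
  ON blockingl.pid <> blockedl.pid
 AND (blockingl.locktype, blockingl.database, blockingl.relation,
      blockingl.page, blockingl.tuple, blockingl.virtualxid,
      blockingl.transactionid, blockingl.classid, blockingl.objid,
      blockingl.objsubid)
     IS NOT DISTINCT FROM
     (blockedl.locktype, blockedl.database, blockedl.relation,
      blockedl.page, blockedl.tuple, blockedl.virtualxid,
      blockedl.transactionid, blockedl.classid, blockedl.objid,
      blockedl.objsubid)
WHERE blockingl.granted
  AND NOT blockedl.granted;
```

The row-wise IS NOT DISTINCT FROM treats the NULL columns (most of a pg_locks row is NULL for any given lock type) as equal, which is exactly the matching behavior wanted here.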



Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
In reply to this post by Jeff Janes
Jeff Janes <[hidden email]> writes:
> That is hard to do in 9.4.  PostgreSQL 9.5 will add the gin_metapage_info
> function to the pageinspect module which makes it easier.

pgstatginindex() is there in 9.4 ...

                        regards, tom lane



Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
In reply to this post by Jeff Janes
On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes <[hidden email]> wrote:
Or what I usually do in a case like this is clone the database to a test/QA server then run pg_upgrade to get that running on 9.5, then hope what I learn transfers back to production.

I'll save this great idea.
 
But the symptoms you describe are exactly what I expect from these clean up problems, so I would just assume that that is the problem.

The easiest solution is to turn of fastupdate for that index.  Each update will then be individually slower, but you won't have the periodic lock up you currently do.

That would be fine and we will try this.
 
Vacuum is overkill (and can be extremely slow to run a large gin index), you just need to get it to autoanalyze by changing the per-table setting of "autovacuum_vacuum_scale_factor" to zero and instead using

Did you mean autovacuum_analyze_scale_factor or does it not matter?

I'm trying to force an autovacuum/autoanalyze this way but unfortunately for me I have autovacuum_max_workers at the default of 3 and there are apparently many tables in line for autovacuuming in front of the table I want :-(. I'm playing whack-a-mole killing them and hoping the table I want will come up.

Note that a manual ANALYZE will *not* clear the pending list, it has to be an autoanalyze.

This is a brain bender, I didn't know there were differences, and this eats away a little bit at my confidence in understanding things, but I'll just accept it for now.
 
 (Manual VACUUM will clear the pending list, but you might have trouble getting manual VACUUM to complete fast enough)

You are exactly right the manual VACUUM is taking forever.

Re: how to investigate GIN fast updates and cleanup cycles?

Jeff Janes
In reply to this post by Tom Lane
On Fri, Aug 28, 2015 at 10:11 AM, Tom Lane <[hidden email]> wrote:
Steve Kehlet <[hidden email]> writes:
> This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
> hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
> column. During these episodes, UPDATEs that normally take < 1sec take
> upwards of 2-4 minutes, and all finish simultaneously, like they were all
> blocked on something and finally got released.

Hm ... have you tried checking pg_locks to see if they're blocked on
something identifiable?

You might be right that this is caused by flushing the GIN pending list,
but I thought that that was not supposed to block concurrent insertions.
What I'd expect to see is *one* insert taking significantly longer than
normal, but no effect on concurrent operations. 

If they all have the same work_mem setting, then they all hit the pending list limit at the same time and all of them try to do the clean up.  None of them realize that other clean-ups are in progress until one process gets far enough along to actually start truncating away the pending list pages. At that point the others will soon detect the concurrent clean up and bail out, but the performance damage has already been done.

This isn't just a performance problem, it could also lead to very rare correctness bugs that are being discussed on "hackers" (http://www.postgresql.org/message-id/55D183FF.60307@...).

 
Also, 2-4 minutes sounds
much longer than should be needed to flush a 10MB pending list, anyway.

I've seen it take longer than that for 4MB pending lists, once the gin index doesn't fit in RAM.  10MB is 1280 pages. If that holds 1280 tuples and each tuple has 50 keys, that would be 64,000 keys.  If the keys don't have a lot of internal overlap inserting them would take 6.4 minutes with 10,000 rpm drives, assuming all the non-leaf pages are already in memory.  There is no pre-fetching (not even with multiple backends trying to do the same work at the same time), so a RAID won't speed things up.

Cheers,

Jeff
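Jeff's back-of-the-envelope numbers above can be checked mechanically (a sketch; it assumes 8 kB pages, one pending tuple per page, 50 keys per tuple, and one random I/O per key, as in his description):

```python
# Sanity-check the estimate: flushing a 10 MB pending list on a 10,000 rpm drive.
pages = 10 * 1024 // 8            # 10 MB of 8 kB pages -> 1280 pages
keys = pages * 50                 # one tuple per page, ~50 keys each
seeks_per_sec = 10_000 / 60       # ~167 random I/Os per second at 10k rpm
minutes = keys / seeks_per_sec / 60
print(pages, keys, round(minutes, 1))  # 1280 64000 6.4
```

With more than one key per tuple fitting on a page the key count would be higher still, so 2-4 minutes per episode is well within this estimate.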

Re: how to investigate GIN fast updates and cleanup cycles?

Tom Lane
In reply to this post by Steve Kehlet
Steve Kehlet <[hidden email]> writes:
> On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes <[hidden email]> wrote:
>> Note that a manual ANALYZE will *not* clear the pending list, it has to be
>> an autoanalyze.

> This is a brain bender, I didn't know there were differences, and this eats
> away a little bit at my confidence in understand things, but I'll just
> accept it for now.

A look at the source code says Jeff's right: either manual or auto VACUUM
will clear the list, and so will auto ANALYZE, but not manual ANALYZE.
Not sure why we did that, but it's clearly intentional.

>> (Manual VACUUM will clear the pending list, but you might have trouble
>> getting manual VACUUM to complete fast enough)

> You are exactly right the manual VACUUM is taking forever.

It looks like a VACUUM will do the cleanup during the first ginbulkdelete
call, so you could probably handle this by running a manual "VACUUM
VERBOSE" with the smallest possible maintenance_work_mem, and canceling it
as soon as you see something reported about the GIN index.

                        regards, tom lane



Re: how to investigate GIN fast updates and cleanup cycles?

Jeff Janes
In reply to this post by Steve Kehlet
On Fri, Aug 28, 2015 at 11:06 AM, Steve Kehlet <[hidden email]> wrote:
On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes <[hidden email]> wrote:
 
 
Vacuum is overkill (and can be extremely slow to run a large gin index), you just need to get it to autoanalyze by changing the per-table setting of "autovacuum_vacuum_scale_factor" to zero and instead using

Did you mean autovacuum_analyze_scale_factor or does it not matter?


Yes, sorry.  Those long names make my eyes go bleary.

Cheers,

Jeff

Re: how to investigate GIN fast updates and cleanup cycles?

Jeff Janes
In reply to this post by Steve Kehlet
On Fri, Aug 28, 2015 at 11:06 AM, Steve Kehlet <[hidden email]> wrote:
On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes <[hidden email]> wrote:
  
Vacuum is overkill (and can be extremely slow to run a large gin index), you just need to get it to autoanalyze by changing the per-table setting of "autovacuum_vacuum_scale_factor" to zero and instead using

Did you mean autovacuum_analyze_scale_factor or does it not matter?

I'm trying to force an autovacuum/autoanalyze this way but unfortunately for me I have autovacuum_max_workers at the default of 3 and there are apparently many tables in line for autovacuuming in front of the table I want :-(. I'm playing whack-a-mole killing them and hoping the table I want will come up.

I overlooked the significance of this paragraph the first time I responded.  Did you change the system-wide autovacuum_analyze_scale_factor?  If so, don't do that.  You can use a table's storage parameters to set a custom autovacuum_analyze_scale_factor just for individual tables.  So just the table with the troublesome gin index:

alter table some_table_with_gin_index set (autovacuum_analyze_scale_factor = 0);
alter table some_table_with_gin_index set (autovacuum_analyze_threshold = 100);
 
Killing existing vacuums won't help (at least not until you change your system-wide setting back and do the above instead) because autovacuum will likely just restart on the same table it was on when you killed it.

Cheers,

Jeff

Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
On Fri, Aug 28, 2015 at 12:10 PM Jeff Janes <[hidden email]> wrote:
Did you change the system-wide autovacuum_analyze_scale_factor?  If so, don't do that.  You can use a table's storage parameters to set a custom autovacuum_analyze_scale_factor just for individual tables.  So just the table with the troublesome gin index:
 
No I did it just to the problematic table:

ALTER TABLE my_table SET (autovacuum_vacuum_scale_factor=0);
ALTER TABLE my_table SET (autovacuum_analyze_scale_factor=0);

My system-wide autovacuum_analyze_threshold is 50 so I think that's ok.

There is a bulk load going on right now so a lot of tables are needing vacuuming. I really need to increase my autovacuum_max_workers.
 
 
Killing existing vacuums won't help (at least not until change your system wide setting back and so the above instead) because it will likely just restart on the same table it was on when you killed it.

You're right, I gave up trying, and am back running a manual VACUUM. It's slow, but I can't bounce the db right now.


Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
In reply to this post by Tom Lane
On Fri, Aug 28, 2015 at 10:27 AM Tom Lane <[hidden email]> wrote:
Steve Kehlet <[hidden email]> writes:
> Yeah head scratch. That is really weird. Still gathering data, any way I
> can see for sure when these cleanup cycles are occurring?

contrib/pgstattuple has a function that can report the current size of the
pending list for a GIN index.  If you were to capture that every so often,
it'd be pretty easy to confirm or refute whether these episodes are
associated with list cleanup.

Thank you, I found it, this could be very helpful, we'll try it out on some dev boxes.

Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
In reply to this post by Tom Lane
On Fri, Aug 28, 2015 at 11:18 AM Tom Lane <[hidden email]> wrote:
It looks like a VACUUM will do the cleanup during the first ginbulkdelete
call, so you could probably handle this by running a manual "VACUUM
VERBOSE" with the smallest possible maintenance_work_mem, and canceling it
as soon as you see something reported about the GIN index.

Since I'm back to running VACUUM VERBOSE by hand, can you clarify for me how reducing maintenance_work_mem (currently 512MB) will speed it up? Will it work in smaller chunks? So just do something like:

set maintenance_work_mem = '32MB';
VACUUM VERBOSE my_table;

How do I determine the smallest value possible? Just start small (32MB?) and see if it fails, and increase until it doesn't?


Re: how to investigate GIN fast updates and cleanup cycles?

Jeff Janes
In reply to this post by Steve Kehlet
On Fri, Aug 28, 2015 at 12:25 PM, Steve Kehlet <[hidden email]> wrote:
On Fri, Aug 28, 2015 at 12:10 PM Jeff Janes <[hidden email]> wrote:
Did you change the system-wide autovacuum_analyze_scale_factor?  If so, don't do that.  You can use a table's storage parameters to set a custom autovacuum_analyze_scale_factor just for individual tables.  So just the table with the troublesome gin index:
 
No I did it just to the problematic table:

ALTER TABLE my_table SET (autovacuum_vacuum_scale_factor=0);
ALTER TABLE my_table SET (autovacuum_analyze_scale_factor=0);

You should RESET the autovacuum_vacuum_scale_factor for the table.  You don't want it to be vacuumed aggressively, just autoanalyzed aggressively.  Sorry if my copy-paste error led you astray on that.

 

My system-wide autovacuum_analyze_threshold is 50 so I think that's ok.

autovacuum_analyze_scale_factor is the more important one.  Only when that is close to zero does autovacuum_analyze_threshold matter very much.

 
There is a bulk load going on right now so a lot of tables are needing vacuuming. I really need to increase my autovacuum_max_workers.

But those workers all share the same IO throttling amongst themselves.  Increasing it mostly just gives you more workers all working more slowly.  Assuming your IO subsystem can handle it, you are better off lowering autovacuum_vacuum_cost_delay, which can be done without a server restart (although the change won't take full effect until the existing workers go away and restart).  I also set vacuum_cost_page_hit and vacuum_cost_page_miss to zero and rely exclusively on vacuum_cost_page_dirty to do the throttling.
 
Cheers,

Jeff
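A sketch of the settings Jeff describes above (the values are illustrative, not recommendations; ALTER SYSTEM exists as of 9.4, and pg_reload_conf() applies the change without a restart, though as noted, running autovacuum workers only pick it up when they restart):

```sql
-- Illustrative only: loosen autovacuum I/O throttling cluster-wide.
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '5ms';  -- 9.4 default is 20ms
-- Throttle on dirtied pages only, as described above:
ALTER SYSTEM SET vacuum_cost_page_hit = 0;
ALTER SYSTEM SET vacuum_cost_page_miss = 0;
SELECT pg_reload_conf();
```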

Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
On Fri, Aug 28, 2015 at 1:23 PM Jeff Janes <[hidden email]> wrote:
You should RESET the autovacuum_vacuum_scale_factor for the table.  You don't want it to be vacuumed aggressively, just autoanalyzed aggressively.  Sorry if my copy-paste error led you astray on that.

No problem, done, thank you.

There is a bulk load going on right now so a lot of tables are needing vacuuming. I really need to increase my autovacuum_max_workers.

But those workers all share the same IO throttling amongst themselves.  Increasing it mostly just gives you more workers all working more slowly. 

Ah, you're right, that won't help.
 
Assuming your IO subsystem can handle it, you are better off lowering autovacuum_vacuum_cost_delay, which can be done without a server restart (although the change won't take full effect until the existing workers go away and restart).  I also set vacuum_cost_page_hit and vacuum_cost_page_miss to zero and rely exclusively on vacuum_cost_page_dirty to do the throttling.

Thank you for these great suggestions, I will play with them. 

Re: how to investigate GIN fast updates and cleanup cycles?

Steve Kehlet
I wanted to follow up and report we altered the index in question and turned off fastupdate and we have not seen the issue I originally reported since. Just for the record I did:

ALTER INDEX my_index SET (fastupdate=off);
SET vacuum_cost_delay=0;
VACUUM VERBOSE my_table_with_that_index;

We looked into the performance impact on the inserts/updates and decided it was better for us to take the tiny hit on them and thus avoid these occasional blockages caused by the pending list cleanup cycle.

Thank you Jeff and Tom for the help! I have a number of todos still from this too, including adjusting our autovacuum settings and looking into improving that blocked/blocking query.


