First, identify a criterion by which to partition your data, for example the record's population timestamp binned to yyyymm resolution.
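As a sketch, a query like the following shows how many records fall into each yyyymm bin, which helps you judge whether monthly partitions make sense (the table and column names `big_table` and `created_at` are placeholders, not from the original post):

```sql
-- Count records per yyyymm bin to size prospective monthly partitions.
SELECT to_char(created_at, 'YYYYMM') AS yyyymm,
       count(*)
FROM big_table
GROUP BY 1
ORDER BY 1;
```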
On a test environment, construct a table similar to the one currently holding the many records, but under a different name.
Then write code that constructs the child tables, and run that SQL to create them.
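For example, with declarative partitioning (PostgreSQL 10+) the parent and monthly child tables might look like this — the names and columns are illustrative assumptions, not from the original thread:

```sql
-- Hypothetical partitioned copy of the production table, under a new name.
CREATE TABLE big_table_part (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

-- One child table per yyyymm bin.
CREATE TABLE big_table_part_202009 PARTITION OF big_table_part
    FOR VALUES FROM ('2020-09-01') TO ('2020-10-01');
CREATE TABLE big_table_part_202010 PARTITION OF big_table_part
    FOR VALUES FROM ('2020-10-01') TO ('2020-11-01');
```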
If your child tables are keyed on the record date, create child tables covering the timestamps of the records you already have, plus several months or time points into the future. You might also set up a cron job that creates tables for future time points as they approach.
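A cron job could run a statement like this once a month to create the next month's partition ahead of time (a sketch, assuming the hypothetical `big_table_part` parent from above):

```sql
-- Create next month's partition if it does not already exist.
DO $$
DECLARE
    first_day date := date_trunc('month', now() + interval '1 month')::date;
    part_name text := 'big_table_part_' || to_char(first_day, 'YYYYMM');
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I PARTITION OF big_table_part
             FOR VALUES FROM (%L) TO (%L)',
        part_name, first_day, first_day + interval '1 month');
END $$;
```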
Then write code to populate the new tables piecemeal, using a WHERE clause to pull slices of data from the current production table.
That code can use COPY or INSERT to populate each specific partition table.
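A minimal INSERT-based sketch of one such slice, again using the placeholder names from the examples above:

```sql
-- Move one month at a time; small slices keep each transaction manageable.
INSERT INTO big_table_part
SELECT *
FROM big_table
WHERE created_at >= DATE '2020-10-01'
  AND created_at <  DATE '2020-11-01';
```

Alternatively, each slice can be exported and reloaded with psql's \copy if you prefer to stage the data through files.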
Clone your current application and modify the code so that it inserts directly into the specific child table, or leave the routing to the specific child table to an on-insert trigger.
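If you go the trigger route, here is a sketch assuming the older inheritance-based partitioning scheme (with declarative partitioning, PostgreSQL 10+ routes rows to the right partition itself and no trigger is needed); names are placeholders:

```sql
-- Route each inserted row to the child table named after its yyyymm bin.
CREATE OR REPLACE FUNCTION route_big_table_insert() RETURNS trigger AS $$
BEGIN
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   'big_table_part_' || to_char(NEW.created_at, 'YYYYMM'))
    USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_part_insert
    BEFORE INSERT ON big_table_part
    FOR EACH ROW EXECUTE PROCEDURE route_big_table_insert();
```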
Test the data population through the application to verify that the child tables are being populated as expected.
If all is well, schedule downtime during which you can apply these changes to your production environment.
On Thu, Oct 8, 2020, 1:16 PM Jean-Marc Lessard <[hidden email]> wrote:
I have a large table (billions of records) which has not been vacuumed and is bloated.
Vacuum scale factor was left at the default.
I ran a vacuum on a DEV system and it makes several passes (scanning heap / vacuuming indexes / vacuuming heap) that take more than an hour each.
On a PROD system, I may have to kill the job midway.
Should I reduce the autovacuum_work_mem of my session? Currently 1GB
Increase if you can. You want to maximize the work being done before it needs to pause.
Have you tuned any settings related to vacuum? If your I/O system can handle it, setting the cost delay very low can help, especially on PG12+ where it can be less than 1ms. Otherwise you might reduce it to 1ms and also increase the cost limit so more work gets done before each pause.
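One detail worth noting: a manual VACUUM run in a session is governed by maintenance_work_mem, not autovacuum_work_mem (which applies only to autovacuum workers). A sketch of session-level settings for a hand-run vacuum — the values are illustrative, not recommendations, and `big_table` is a placeholder name:

```sql
-- Memory for the dead-tuple array; more memory means fewer
-- index-vacuum passes over the table.
SET maintenance_work_mem = '1GB';

-- Manual VACUUM is unthrottled by default (cost delay 0);
-- raise the delay only if you need to limit its I/O impact.
SET vacuum_cost_delay = 0;

VACUUM (VERBOSE) big_table;
```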