Re: oids and pg_class_oid_index constraint

Re: oids and pg_class_oid_index constraint

Aaron Harsh
> Tom Lane <[hidden email]> 08/03/05 1:33 PM >>>
> "Aaron Harsh" <[hidden email]> writes:
> > We've just recently started seeing sporadic constraint violations on system tables.  For example:
> >   duplicate key violates unique constraint "pg_class_oid_index" [for Statement "CREATE TEMPORARY TABLE...
>
> OID wraparound would explain that ...

Fantastic.  Will our plan ('set without oids', pg_dump, pg_restore) take care of the problem?
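
(For reference, the conversion step we have in mind is just the per-table form, run for each user table before the dump/restore cycle; the table name here is only a placeholder:

        ALTER TABLE some_table SET WITHOUT OIDS;   -- repeated for each user table

followed by a full pg_dump and pg_restore of the database.)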

> > and the occasional
> >   unexpected chunk number 0 (expected 1) for toast value
>
> ... but not that.  The latter might possibly be due to a corrupt index
> on a toast table.  If you have a reproducible way of causing it, I'd
> definitely love to see it.

We've seen the error show up twice in our serverlog, but I'm not sure what caused the toast tables to get into that state.  Is there anything helpful we could do with the table next time (save the relevant data/base files; run a query against the toast tables)?

Thanks for the advice

--
Aaron Harsh
[hidden email]
503-284-7581 x347


Re: oids and pg_class_oid_index constraint

Tom Lane
"Aaron Harsh" <[hidden email]> writes:
>> Tom Lane <[hidden email]> 08/03/05 1:33 PM >>>
>> OID wraparound would explain that ...

> Fantastic.  Will our plan ('set without oids', pg_dump, pg_restore) take care of the problem?

Only temporarily (ie, till the counter wraps around again).  If you can
reduce your consumption of OIDs enough that that's longer than your next
planned Postgres upgrade, maybe it's enough.

I'm a bit surprised that you are seeing the problem often enough to be
worried about it.  In a normal database with say less than a thousand
tables, the odds against a collision with an existing OID ought to be
several million to one.  Of course this analysis is too simplistic,
since the existing OIDs are probably not randomly scattered --- usually
they'll be tightly clumped at the point where you set up the database
and created all your persistent tables.  You might try looking at the
distribution of OIDs in your catalogs; it could be that you are going
through a period where the odds of collision are much more than
millions-to-one, but once you got past the range of OIDs initially
assigned, it'd drop down to much less than that.  If so, you don't have
to passively wait for that to happen --- you can force the nextOID
counter up past the "dense" range of OIDs (see pg_resetxlog).
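
As a rough sketch (nothing here is tied to your particular schema, it just looks at the system catalog), something like

        -- how tightly clumped are the existing table OIDs?
        SELECT min(oid), max(oid), count(*) FROM pg_class;
        SELECT oid FROM pg_class ORDER BY oid;

would show where the dense range lies.  "pg_resetxlog -n $PGDATA" should print the current NextOID without changing anything, and "pg_resetxlog -o <value past the dense range> $PGDATA", run with the postmaster stopped, pushes the counter forward.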

>>> and the occasional
>>> unexpected chunk number 0 (expected 1) for toast value
>>
>> ... but not that.  The latter might possibly be due to a corrupt index
>> on a toast table.  If you have a reproducible way of causing it, I'd
>> definitely love to see it.

> We've seen the error show up twice in our serverlog, but I'm not
> sure what caused the toast tables to get into that state.  Is there
> anything helpful we could do with the table next time (save the
> relevant data/base files; run a query against the toast tables)?

You could try something like
        select chunk_seq from pg_toast.pg_toast_NNN where chunk_id = XXXX
and see if you get the same results from both seqscan and indexscan
plans (use enable_seqscan and enable_indexscan to force it to be done
both ways).  Also see if reindexing the toast table makes the error go
away.
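
Concretely (with NNN and XXXX standing in for the toast relation and toast value from the error message), that might look like

        SET enable_indexscan = off;              -- force the seqscan plan
        SELECT chunk_seq FROM pg_toast.pg_toast_NNN WHERE chunk_id = XXXX;
        SET enable_indexscan = on;
        SET enable_seqscan = off;                -- force the indexscan plan
        SELECT chunk_seq FROM pg_toast.pg_toast_NNN WHERE chunk_id = XXXX;
        RESET enable_seqscan;
        REINDEX TABLE pg_toast.pg_toast_NNN;     -- then see whether the error persists

If the two SELECTs disagree, or the REINDEX makes the error stop, that points at a corrupt toast index.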

                        regards, tom lane
