Re: ERROR: too many dynamic shared memory segments
On Wed, Sep 11, 2019 at 11:20 PM Nicola Contu <[hidden email]> wrote:
> If the error persist I will try to revert the work_mem.
> Thanks a lot
It's hard to say exactly what the cause of the problem is in your case
and how to avoid it, without knowing what your query plans look like.
PostgreSQL allows 64 + 2 * max_connections segments to exist at a time,
and it needs a number of them that depends on work_mem (in the case of
Parallel Hash Join and Parallel Bitmap Index Scan), and also depends
on the number of Gather nodes that appear in the plan, which in some
unusual cases can result from partitioning.
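As a back-of-envelope sketch (not PostgreSQL source code, just the formula above written out), here is how the segment budget scales with max_connections; the helper name is mine:

```python
# Illustrative sketch of the DSM slot budget described above:
# PostgreSQL allows 64 + 2 * max_connections dynamic shared memory
# segments to exist at a time.

def dsm_slot_limit(max_connections: int) -> int:
    """Total dynamic shared memory segments allowed at a time."""
    return 64 + 2 * max_connections

# With the default max_connections = 100 that is 264 segments.
# Each parallel query consumes some of these (per Gather node, plus
# extras for Parallel Hash Join and Parallel Bitmap Index Scan), so
# enough concurrent parallel queries can exhaust the pool and raise
# "ERROR: too many dynamic shared memory segments".
print(dsm_slot_limit(100))
```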
I've seen people reaching this error by running a lot of parallel
queries concurrently. If that's the cause, then you can definitely
get some relief by turning work_mem down, or by turning
max_connections up (even though you don't want to allow more
connections -- because it influences the formula for deciding on the
DSM segment limit). We should probably adjust some of the internal
constants to give us more slots, to avoid that problem, as discussed
elsewhere.
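Concretely, either knob could look something like this in postgresql.conf (the values here are purely illustrative, to be tuned for your workload and memory budget, not recommendations):

```
# postgresql.conf -- illustrative values only
work_mem = '16MB'        # lowering work_mem reduces per-node segment demand
max_connections = 200    # raising this enlarges the 64 + 2 * max_connections pool
```

Note the trade-offs: lower work_mem can push queries to spill to disk, and higher max_connections reserves more shared memory even if the extra connections are never used.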
I've also seen people reaching this error by somehow coming up with
plans that had a very large number of Gather nodes in them,
corresponding to partitions; that's probably a bad plan (it'd
presumably be better to terminate parallelism higher up in the plan,
but these plans do seem to exist in the wild; I don't recall exactly
why). I think we need a bit of a redesign so that if there are
multiple Gather nodes, they share the same main DSM segment, instead
of blowing through this limit.