WAL directory size calculation


WAL directory size calculation

Moreno Andreo
Hi folks! :-)
I'm about to bring up my brand new production server and I was wondering
if it's possible to calculate (approximately) the WAL directory size.
I have to choose what's better in terms of cost vs. performance (we are
on Google Cloud Platform) between a ramdisk and a separate persistent
disk. Obviously a ramdisk will be many times faster than a disk, but having
a, say, 512 GB ramdisk will be a little too expensive :-)
I've read somewhere that the formula should be 16 MB * 3 *
checkpoint_segments. But won't it be different depending on the
/wal_level/ we set? And won't it also depend on the volume of
transactions in the cluster?
And, in place of /checkpoint_segments/, which is no longer used in 9.5,
what should I use? /max_wal_size/?
Aside from this, I have 350 DBs that sum up to a bit more than 1 TB, and I
plan to use wal_level=archive because I plan to have a backup server
with barman.

Using the above formula I have:
     16 MB * 3 * 1 GB
that leads to ... uh ... 48000 TB?

Any ideas?
Thanks
Moreno.-




Re: WAL directory size calculation

David G Johnston
On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo <[hidden email]> wrote:
I've read somewhere that the formula should be 16 MB * 3 * checkpoint_segment in size.
​[...]​
 
Using the above formula I have:
    16 MB * 3 * 1 GB
that leads to ... uh ... 48000 TB?

You seem to be mis-remembering the formula.

checkpoint_segments is a quantity (i.e., unit-less), not a size.  Saying it's "1GB" makes no sense.

I'm also doubting you multiply it by 3 - add three to it maybe...

Each segment takes 16MB.  The total space required is that times whatever maximum count of segments you expect to have.

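(For a rough back-of-the-envelope figure, a minimal sketch of the pre-9.5 documentation's upper bound - (2 + checkpoint_completion_target) * checkpoint_segments + 1 files of 16MB each; the 0.5 and 32 below are illustrative values, not recommendations:)

    -- Hypothetical values: checkpoint_completion_target = 0.5, checkpoint_segments = 32
    SELECT pg_size_pretty((((2 + 0.5) * 32 + 1) * 16 * 1024 * 1024)::bigint);  -- roughly 1.3 GB peak
    -- On 9.5 and later the soft cap on total WAL size is simply:
    SHOW max_wal_size;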
I'd suggest waiting for better responses on these lists or searching out prior conversations on this topic - of which I am sure there are many.

David J.


Re: WAL directory size calculation

Andreas Kretschmer-3
In reply to this post by Moreno Andreo
On 28.07.2016 15:25, Moreno Andreo wrote:
> Hi folks! :-)
> I'm about to bring up my brand new production server and I was
> wondering if it's possible to calculate (approx.) the WAL directory size.
> I have to choose what's better in terms of cost vs. performance (we
> are on Google Cloud Platform) between a ramdisk or a separate
> persistent disk. Obviously ramdisk will be times faster disk, but
> having a, say, 512 GB ramdisk will be a little too expensive :-)
>

Don't use a RAM-Disk for WAL! DON'T! NEVER!

You really only need a few GB if you are using replication. Without
replication, 1 GB would be fine - probably even with replication. But it
must be reliable!

Andreas



Re: WAL directory size calculation

Moreno Andreo
On 28/07/2016 15:54, Andreas Kretschmer wrote:

> On 28.07.2016 15:25, Moreno Andreo wrote:
>> Hi folks! :-)
>> I'm about to bring up my brand new production server and I was
>> wondering if it's possible to calculate (approx.) the WAL directory
>> size.
>> I have to choose what's better in terms of cost vs. performance (we
>> are on Google Cloud Platform) between a ramdisk or a separate
>> persistent disk. Obviously ramdisk will be times faster disk, but
>> having a, say, 512 GB ramdisk will be a little too expensive :-)
>>
>
> Don't use a RAM-Disk for WAL! DON'T! NEVER!
OK OK OK, got the point...... :-)

>
> You really only need a few GB if you are using replication. Without
> replication, 1 GB would be fine - probably even with replication. But it
> must be reliable!
I'll get what seems to be the minimum persistent disk size on GCP,
10 GB. Thanks!

>
> Andreas
>
>





Re: WAL directory size calculation

Moreno Andreo
In reply to this post by David G Johnston
On 28/07/2016 15:33, David G. Johnston wrote:
On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo <[hidden email]> wrote:
I've read somewhere that the formula should be 16 MB * 3 * checkpoint_segment in size.
​[...]​
 
Using the above formula I have:
    16 MB * 3 * 1 GB
that leads to ... uh ... 48000 TB?

​You seem to be mis-remembering the formula.​
The result is too big to be sane... so there *must* be something wrong

checkpoint_segments is a quantity (i.e., unit-less), not a size.  Saying it's "1GB" makes no sense.
Yes, my "1 GB" refers to max_wal_size, which in 9.5 replaced checkpoint_segments.

​I'm also doubting you multiply it by 3 - add three to it maybe...

​Each segment takes 16MB.  The total space required is that times whatever maximum count of segments you expect to have.​
That's the variable: how many segments should I expect to have? (Jumping between the 9.1 (current) and 9.5 (new) documentation, I'm getting a bit confused...)
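(One way to answer that empirically rather than from a formula - a sketch using the 9.x function names; PostgreSQL 10 later renamed them to pg_current_wal_lsn / pg_wal_lsn_diff:)

    -- Run once and note down the result:
    SELECT pg_current_xlog_location();
    -- ...wait a representative interval (e.g. your busiest hour), then:
    SELECT pg_size_pretty(
             pg_xlog_location_diff(pg_current_xlog_location(), '0/12345678'));
    -- '0/12345678' is a placeholder: substitute the LSN noted in the first step.

Roughly speaking, the WAL kept on disk covers the last couple of checkpoint cycles, so the volume generated per checkpoint_timeout is the figure to scale from.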


I'd suggest waiting for better responses on these lists ...[snip]...
Got one, in fact. The statement to *absolutely* not use a ramdisk for WAL files resolved almost all of my doubts.


David J.

Thanks

Moreno


Re: WAL directory size calculation

Francisco Olarte
In reply to this post by Moreno Andreo
On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo <[hidden email]> wrote:
> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
> ramdisk will be a little too expensive :-)

Besides defeating the purpose of WAL, if you are going to use
non-persistent storage for WAL you might as well use wal_level=minimal,
fsync=off and friends.
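(For illustration of what "and friends" amounts to, a sketch of the matching, deliberately non-durable settings - running production like this means accepting data loss on any crash:)

    ALTER SYSTEM SET wal_level = minimal;        -- needs a server restart to take effect
    ALTER SYSTEM SET fsync = off;
    ALTER SYSTEM SET synchronous_commit = off;
    ALTER SYSTEM SET full_page_writes = off;
    SELECT pg_reload_conf();                     -- the last three only need a reload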

> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
> to use wal_level=archive because I plan to have a backup server with barman.

Is this why you plan on using RAM for WAL (assuming fast copies to the
archive and relying on it for recovery)?

Francisco Olarte.



Re: WAL directory size calculation

Jan Wieck-3
In reply to this post by Andreas Kretschmer-3


On Thu, Jul 28, 2016 at 9:54 AM, Andreas Kretschmer <[hidden email]> wrote:
Without replication, 1 GB would be fine - probably even with replication. But it must be reliable!


The required size of WAL depends on your intended checkpoint_timeout versus the amount
of WAL generated by data turnover. A rather small 40GB database, churning TPC-C style
transactions at a rate of 1,000 TPS, can easily generate 60MB of WAL per second (if configured
wrong). To keep the WAL size at or below 1GB would then require a checkpoint to complete every
17 seconds. In this case, max_wal_size=1GB is a very wrong config option.

One problem here is that the more frequently checkpoints occur, the more full page writes will be
required, which drives up the amount of WAL, requiring checkpoints even more frequently
when max_wal_size is the limiting factor. This is a classic "downward spiral" scenario.

At 1,000 TPS, the above benchmark levels out (after about 1-2 hours) at around 60-64GB of
WAL space used (with max_wal_size = 96GB and checkpoint_timeout = 20min). The total
amount of WAL actually produced goes down significantly (due to reduced full page writes)
and the transaction response time improves on average as well as in stddev. The whole DB
looks more like it is cruising than fighting.

This example isn't a big database (40-80GB) or anything exotic. Just a write-heavy OLTP
load.
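(As a concrete sketch, the configuration described above would look roughly like this on 9.5 - the values are the ones from this particular benchmark, not general-purpose advice:)

    ALTER SYSTEM SET max_wal_size = '96GB';
    ALTER SYSTEM SET checkpoint_timeout = '20min';
    SELECT pg_reload_conf();   -- both settings take effect on reload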


Regards, Jan





 



--
Jan Wieck
Senior Postgres Architect

Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
In reply to this post by Francisco Olarte
On 28/07/2016 20:45, Francisco Olarte wrote:
> On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo <[hidden email]> wrote:
>> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
>> ramdisk will be a little too expensive :-)
> Besides defeating the purpose of WAL, if you are going to use non
> persistent storage for WAL you could as well use minimal level,
> fsync=off and friends.
After Andreas' post and thinking about it a while, I came to the decision
that it's better not to use RAM but another persistent disk, because
there can be an instant between when a WAL record is written and when it's
fsync'ed, and if a failure happens in that instant the amount of data not
fsync'ed is lost. Am I right?
>
>> Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
>> to use wal_level=archive because I plan to have a backup server with barman.
> Is this why you plan using RAM for WAL ( assuming fast copies to the
> archive and relying on it for recovery ) ?
Yes, but having to deal with the risk of data loss, I think
I'll go with a bigger persistent disk, have bigger checkpoint intervals
and end up with a longer recovery time, but the main thing is *no data loss*.
>
> Francisco Olarte.
>
>
Thanks

Moreno.





Re: [SPAM] Re: [GENERAL] WAL directory size calculation

John R Pierce

Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
to use wal_level=archive because I plan to have a backup server with barman.

With that many databases, that many objects, and undoubtedly a lot of client connections, I'd want to spread that across a cluster of smaller servers.

just sayin...


-- 
john r pierce, recycling bits in santa cruz

Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
On 29/07/2016 10:43, John R Pierce wrote:

Aside of this, I'm having 350 DBs that sum up a bit more than 1 TB, and plan
to use wal_level=archive because I plan to have a backup server with barman.

With that many databases, that many objects,

350 DBs with about 130 tables and a bunch of sequences each, for the sake of precision.
With extensive use of BLOBs.

and undoubtedly a lot of client connections,

Yes, that's another big problem... we normally run between 500 and 700 concurrent connections... I had to set max_connections=1000; the whole thing grew faster than we were prepared for...

I'd want to spread that across a cluster of smaller servers.

That will be step 2... while the migration is running (and it will run for some months; we have to plan the migration with users) I'll test putting another one or two machines in the cluster, make some test cases, and when ready, databases will be migrated to other machines, too.
I posted a question about this some months ago, and I was told that one solution would be to make each server master for some databases and slave for others, so we can get better load balancing (instead of having all writes on a single master, we split them among the masters depending on which database receives the write, especially when having to write BLOBs that can be several megabytes in size).
I don't know how to achieve this yet, but I will find a way somewhere.

just sayin...

ideas are always precious and welcome.


-- 
john r pierce, recycling bits in santa cruz



Re: [SPAM] Re: [GENERAL] WAL directory size calculation

FarjadFarid(ChkNet)

 

The question to ask is: what benefit would you gain by saving BLOB objects in a database rather than on, say, a flat file server or a URL on an FTP server? Especially larger ones.

 

BLOBs cause a lot of problems for all DBs. Unless the DB engine can understand their structure and process them, it is not worth the effort.

They can hit DB performance in indexing, backups, migrations and load balancing.

 

Hope this helps.

 

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

FarjadFarid(ChkNet)
In reply to this post by Moreno Andreo

Another option which is growing in popularity is a distributed in-memory cache. There are quite a few companies providing such technology.

 

Pricing can range from free to quite expensive.

 

One recommendation with these technologies is to test them under heavy load conditions.

 

Good luck.

 

 

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
In reply to this post by FarjadFarid(ChkNet)
On 29/07/2016 11:44, FarjadFarid(ChkNet) wrote:

 

The question to ask is: what benefit would you gain by saving BLOB objects in a database rather than on, say, a flat file server or a URL on an FTP server? Especially larger ones.

Privacy. Blobs are stored encrypted, since they are health-related images or documents.
You would be right if all of this data were resident only on our server (which can only be accessed by the application), but every user has a small PG cluster on his PC, with his patients' data and images, that replicates continuously with our server.
Our application runs on Windows. Getting into another user's patient data (say, for someone who stole the computer) is a bit tricky, because you have to know how to bypass authentication in Postgres, and even after this, you have to know where to search, what to search for, and sometimes what the encodings mean.
Imagine if we had a folder containing all the images... double click and open...

Another point is a bit of self-defense. Our users are far from being smart computer users, and in the past we had some cases in which someone, trying to clean up a full disk, deleted a directory under his Paradox database (!!!) and then asked us why the app was not loading anymore...
 

BLOBs cause a lot of problems for all DBs. Unless the DB engine can understand their structure and process them, it is not worth the effort.

They can hit DB performance in indexing, backups, migrations and load balancing.

Regarding backups I disagree. Files related to the database must be consistent with the database itself, so a backup must save both the database and the images. AFAIK there's not a big difference in backing up image files versus BLOBs in a database.
I agree about load balancing, but only in the case of a bulk load of several megabytes. (Our current server got overloaded 2 months ago when a client we were activating sent a transaction inserting 50 blobs totaling about 300 megabytes.)

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

FarjadFarid(ChkNet)

 

Actually you can increase the overall performance of your system several fold by distributing the source of the data, with encryption. CDN services have been using this old technique consistently for a long time.

If you add a URL to an FTP server with an SSL certificate, your backup will be much quicker, and if someone steals the computer the images are still encrypted as before. It is just the source the data comes from that changes.

 

Of course for small amounts of data, say an encrypted user name, password or ID credential, a DB engine is still the best. But for larger files you could benefit substantially by looking at hybrid solutions.

 

Check out companies like www.maytech.com - not related to me at all, but they have a secure network used by the NHS (UK).

 

Their FTP service does have username/password protection which can be customised for different customers. They also have servers distributed around the world.

 

Hope this helps.

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

FarjadFarid(ChkNet)
In reply to this post by Moreno Andreo

Sorry the URL should have been https://www.maytech.net/

 

Of course there are other companies in this space.

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
In reply to this post by FarjadFarid(ChkNet)
On 29/07/2016 15:13, FarjadFarid(ChkNet) wrote:

 

If you add a URL to an FTP server with an SSL certificate, your backup will be much quicker, and if someone steals the computer the images are still encrypted as before. It is just the source the data comes from that changes.

... and what if, while working, the Internet connection drops? Or my office is not covered by broadband at all (and, still in 2016, there are so many places in Italy not covered by broadband... no ADSL, no WiMAX, low-performing mobile)?
The local copies of the databases that we synchronize exist to let users work even if no Internet connection is available (since they're doctors, they have to have their data available almost all the time).

This architecture is by design. Some years ago, when we started designing our software, we ran into this situation, and the question was "Why don't we just have a remote server that users connect to remotely, instead of having replicas at their sites?"
That would ease updates, troubleshooting, almost everything.
After a while, the answer we came to is exactly as above. As long as we have slow and unreliable Internet connections (fiber coverage is growing, but it's still very sparse), we can't count on them, and we can't rely only on a remote server.

Thanks
Moreno

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

David G Johnston
In reply to this post by Moreno Andreo
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo <[hidden email]> wrote:
Regarding backups I disagree. Files related to the database must be consistent with the database itself, so a backup must save both the database and the images.

I'd suggest you consider that such binary data be defined as immutable.  Then the only problem you have to worry about is existence - versioning consistency goes away.  You only need to focus on the versioning of associations - which remains in the database and is very lightweight.  It is then a separate matter to ensure that all documents you require are accessible given the identifying information stored in the database and linked to the primary records via those versioned associations.

David J.
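(A minimal sketch of that idea, with hypothetical table and column names: the payload rows are written once and never updated, while the versioned association stays small and lives entirely in the database:)

    CREATE TABLE document_blob (
        blob_id  uuid  PRIMARY KEY,    -- stable identifier; row is never updated
        content  bytea NOT NULL        -- encrypted payload, or a pointer to external storage
    );
    CREATE TABLE patient_document (
        patient_id  bigint NOT NULL,
        blob_id     uuid   NOT NULL REFERENCES document_blob,
        version     int    NOT NULL,
        valid_from  timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (patient_id, blob_id, version)
    );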
 

Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Francisco Olarte
In reply to this post by Moreno Andreo
Hi:

On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
<[hidden email]> wrote:
> After Andreas post and thinking about it a while, I went to the decision
> that it's better not to use RAM but another persistent disk, because there
> can be an instant between when a WAL is written and it's fsync'ed, and if a
> failure happens in this instant the amount of data not fsync'ed is lost. Am
> I right?

With the usual configuration, fsync on, etc., what Postgres does is
write and sync THE WAL before commit, but it does not sync the table
pages. Should anything bad (tm) happen, it can replay the synced WAL to
recover. If you use a RAM disk for WAL and have a large enough RAM
cache you can lose a lot of data, not just the data since the last sync.
At the worst point you could start a transaction, create a database, fill
it and commit, have everything in the RAM WAL and the HD cache, then
crash and have nothing on reboot.
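(The settings that govern this behaviour can be checked directly - a quick sketch:)

    SHOW fsync;                -- on: WAL is flushed to stable storage at commit
    SHOW synchronous_commit;   -- on: commit waits for that flush
    SHOW full_page_writes;     -- on: protects against torn pages after a crash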

Francisco Olarte.



Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
In reply to this post by David G Johnston
On 29/07/2016 15:30, David G. Johnston wrote:
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo <[hidden email]> wrote:
Regarding backups I disagree. Files related to the database must be consistent with the database itself, so a backup must save both the database and the images.

​I'd suggest you consider that such binary data be defined as immutable.  Then the only problem you have to worry about is existence - versioning consistency goes away.  You only need focus on the versioning of associations - which remains in the database and is very lightweight.  It is then a separate matter to ensure that all documents you require are accessible given the identifying information stored in the database and linked to the primary records via those versioned associations.

David J.
 

I think you are right on this point; there are only some kinds of bytea that are not immutable, and that's where we store bytea instead of images (many of these fields have already been converted to the text type, though).

Moreno


Re: [SPAM] Re: [SPAM] Re: [GENERAL] WAL directory size calculation

Moreno Andreo
In reply to this post by Francisco Olarte
On 29/07/2016 17:26, Francisco Olarte wrote:

> Hi:
>
> On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
> <[hidden email]> wrote:
>> After Andreas post and thinking about it a while, I went to the decision
>> that it's better not to use RAM but another persistent disk, because there
>> can be an instant between when a WAL is written and it's fsync'ed, and if a
>> failure happens in this instant the amount of data not fsync'ed is lost. Am
>> I right?
> With the usual configuration, fsync on, etc.. what postgres does is to
> write and sync THE WAL before commit, but it does not sync the table
> pages. Should anything bad (tm) happen it can replay the synced wal to
> recover. If you use a ram disk for WAL and have a large enough ram
> cache you can lose a lot of data, not just from the last sync. At the
> worst point you could start a transaction, create a database, fill it
> and commit and have everything in the ram-wal and the hd cache, then
> crash and have nothing on reboot.
>
> Francisco Olarte.
>
>
This is another good point.

I'm ending up with a 10 GB SSD dedicated to WAL files. I'm starting with
a small slice of my clients for now, to test the production environment, and
as traffic grows, I'll see if my choice was good or has to be improved.

Should I keep fsync off? I'd think it would be better leaving it on, right?




