Performance Optimisation - Identifying the correct DB

Performance Optimisation - Identifying the correct DB

Renjith Gk
Hello Friends,

I am seeking suggestions for identifying the right database for READ operations (fetching queries).

What is the optimal execution time for reading 200K records in Postgres? We had issues reading records in Cassandra, which timed out at ~200K records.

Any ideal solution or recommendation for Postgres?

Best regards,
Renjith

RE: Performance Optimisation - Identifying the correct DB

Eshara Mondal

Hello,

As I understand it, you should not have any issues reading 200K records in Postgres, unless your tables are being heavily written to at the same time, or are over-indexed (or not indexed at all). You could also check how much your tables have 'bloated' to see whether or not you need to run a VACUUM on them:

SELECT schemaname, relname, n_live_tup, n_dead_tup,
       trunc(100*n_dead_tup/(n_live_tup+1))::float AS "ratio%",
       to_char(last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') AS autovacuum_date,
       to_char(last_autoanalyze, 'YYYY-MM-DD HH24:MI:SS') AS autoanalyze_date
FROM pg_stat_all_tables
ORDER BY last_autovacuum;

This can help by showing you how many dead tuples you have lying around in your tables. It also shows you the last time your tables were analyzed and autovacuumed by the autovacuum daemon.
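
If the ratio looks high, a manual vacuum may be worthwhile; a minimal sketch, where "mytable" is just a placeholder name:

-- Reclaim dead tuples and refresh the planner statistics in one pass.
VACUUM (VERBOSE, ANALYZE) mytable;

-- For severe bloat, VACUUM FULL rewrites the table and returns disk space
-- to the OS, but it takes an ACCESS EXCLUSIVE lock, so schedule it carefully:
-- VACUUM FULL mytable;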


Hope that helps a bit,

Eshara

Re: Performance Optimisation - Identifying the correct DB

Ravi Krishna-5
In reply to this post by Renjith Gk

> What is the optimal execution time for reading 200K records in Postgres? We had issues reading records in Cassandra, which timed out at ~200K records.
>
> Any ideal solution or recommendation for Postgres?

Cassandra is highly optimized for key-based reads only.

Without knowing your application it is hard to predict how long it will take to read 200K records.

Does the query that fetches the 200K records use an index?
Does it fetch a small portion or a large portion of the total rows? If the latter, PG will most likely do a table scan.
How wide is the table?
What is the timeout setting?
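
For what it's worth, you can check the index question and the timeout yourself in psql; a minimal sketch, where the table name and filter are placeholders:

-- Show the actual plan with timings and buffer usage. An "Index Scan" node
-- means an index is used; a "Seq Scan" means a full table scan, which is
-- expected when the query fetches a large fraction of the rows.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable WHERE created_at >= now() - interval '1 day';

-- Show the server-side statement timeout; 0 means queries are never
-- cancelled for running too long.
SHOW statement_timeout;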