Cleaning Up Postgres Database to Improve Performance

Overview

Symptoms like

  • Analyzer reports taking too long to complete,
  • report execution being impacted by long-running DELETE statements in Postgres,
  • users unable to log in to Postgres due to random connectivity issues between Postgres and the Analyzer,
  • Postgres crashing unexpectedly or not responding

can point to poor database health.

In this article, you will learn steps to improve database health and determine whether this resolves the performance issues.

 

Solution

Large cached result tables and orphaned tables can bog down the performance of the Postgres database and need to be cleaned up to free space and restore database stability.

*Note: Contact Sensage support if you have any questions or require a screen-sharing session with a support agent to perform these maintenance procedures.

 

Dropping large orphaned result tables

Check the size of the largest tables in Postgres by issuing the following query:

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;

This gives a result similar to the following:

                relation                |  size
 ----------------------------------------+---------
  query_result.query_11387               | 20 GB
  query_result.query_11222               | 5101 MB
  query_result.query_11387_seq_num_idx   | 5087 MB
  query_result.query_11387_task_id_idx   | 4177 MB
  query_result.query_11216               | 1670 MB
  query_result.query_11219               | 788 MB
  query_result.query_11222_seq_num_idx   | 310 MB
  query_result.query_11222_task_id_idx   | 267 MB
  query_result.query_11321               | 198 MB
  pg_toast.pg_toast_17089                | 190 MB
  query_result.query_11216_seq_num_idx   | 138 MB
  query_result.query_11216_task_id_idx   | 127 MB
  query_result.query_11219_task_id_idx   | 54 MB
  query_result.query_11219_seq_num_idx   | 54 MB
  query_result.query_11399               | 33 MB
  query_result.query_11264               | 29 MB
  query_result.query_410_seq_num_idx     | 28 MB
  query_result.query_410_task_id_idx     | 27 MB
  pg_toast.pg_toast_17089_index          | 21 MB
  query_result.query_11321_task_id_idx   | 18 MB
  (20 rows)

If there are very large cached result tables, check which reports they correspond to. This tells you whether they are orphaned cached results that can safely be dropped, or tables still in use that only need a VACUUM FULL to shrink them and reclaim disk space, which helps restore database stability.

SELECT * FROM ss_report WHERE id IN (SELECT report_id FROM ss_report_cache_entry WHERE table_name = 'query_11387');
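
If a table turns out to still be referenced by a report, a VACUUM FULL can shrink it instead of dropping it. Below is a minimal sketch; the database name and user are placeholders and will differ per installation:

# Reclaim disk space from a cached result table that is still in use.
# VACUUM FULL rewrites the table and takes an exclusive lock, so run it
# during a maintenance window. Database name and user are placeholders.
psql -U postgres -d analyzer_db -c "VACUUM FULL VERBOSE query_result.query_11387;"

# Rebuilding the table's indexes afterwards can reclaim index bloat as well.
psql -U postgres -d analyzer_db -c "REINDEX TABLE query_result.query_11387;"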

The orphaned tables can be dropped with the script attached to this article.
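
If the attached script is not available, the following is a rough manual sketch of the same step, assuming the query above returned no matching report for the table (i.e. the cached result is orphaned); the database name and user are placeholders:

# Drop a single orphaned cached result table. Its *_seq_num_idx and
# *_task_id_idx indexes are removed together with it.
# Only drop tables that have no corresponding ss_report_cache_entry row.
psql -U postgres -d analyzer_db -c "DROP TABLE IF EXISTS query_result.query_11387;"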

 

Cleaning semaphore arrays

The semaphore arrays are cleared using a bash script included in the Sensage 5.0.1 installation directory (clear_semaphores.sh):

-bash-4.1# pwd
/root/latest_hotpatches
-bash-4.1# ls -lh clear_semaphores.sh
-rwxr-xr-x. 1 505 505 1.4K Jul 23  2014 clear_semaphores.sh
-bash-4.1# ./clear_semaphores.sh --prefix /opt/sensage
Found orphaned semaphore arrays! Clearing...
Thu Mar 4 12:01:03 -05 2021 all done!
-bash-4.1# ./clear_semaphores.sh --prefix /opt/sensage
No orphaned semaphore arrays found.

This is mostly relevant for SLS, but can also apply to other processes such as Postgres and the Analyzer.
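
If the script is not available on a given host, orphaned System V semaphore arrays can also be inspected and removed manually with the standard ipcs/ipcrm utilities. A rough sketch follows; confirm that the owning process is really gone before removing anything:

# List all System V semaphore arrays, including owner and semaphore ID
ipcs -s

# Show details for one array (replace <semid> with an ID from the list above)
ipcs -s -i <semid>

# Remove the array once you have confirmed its owning process no longer exists
ipcrm -s <semid>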

 

Full backup and reinstall

As a last resort, you may need to perform a full pg_dump backup and reinstall the atpgsql database. The relevant article is linked in the Related Articles section below.
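
A minimal sketch of the backup step is shown below; the user, database name, and output path are placeholders, and the pg_dump binary bundled with the Sensage installation (under /opt/sensage) should be used. The reinstall procedure itself is described in the linked article:

# Take a full, compressed dump of the database before reinstalling.
# User, database name, and output path are placeholders for illustration.
pg_dump -U postgres -F c -f /var/backups/atpgsql_full.dump analyzer_db

# After the reinstall, the dump can be restored into the new database with pg_restore.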

 

Related Articles

 

 
