Hi all, I have a setup that now consists of 1 master node, 3 data nodes (2 of them added after the initial installation), and 2 remote collectors (one of which will soon become a master replica).
When I created the VMs I added a 250 GB disk to each, so the total storage per VM is 500 GB.
At the moment the disk space used in /storage/db is very different between the nodes, and the master node is filling up really fast.
Node     Filesystem            Size (MB)   Used (MB)  Avail (MB)  Use%  Mounted on
Master   /dev/mapper/data-db      463642      379658       60436   87%  /storage/db
Data     /dev/mapper/data-db      463642      109511      330583   25%  /storage/db
Data     /dev/mapper/data-db      463642      220762      219329   51%  /storage/db
Data     /dev/mapper/data-db      463642      144517      295573   33%  /storage/db
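For reference, the figures above are df output in megabytes, collected on each node with:

# Show usage of the DB partition in 1 MB blocks
df -m /storage/db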
I have also moved the logs onto the /storage/db partition, because the log partition was getting full, and I searched for heap dumps to delete, but there were none.
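For the heap dump search I looked for files with the usual Java .hprof extension (assuming that is how vROps names its heap dumps), roughly like this:

# Look for Java heap dumps anywhere under /storage (assumes the usual .hprof naming)
find /storage -type f -name '*.hprof' -exec ls -lh {} +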
On the master node this is the space distribution:
2.8G activity
136K alarmxdblog
98M blob
4.1M config_backup
299G data
4.0K heapdump
42G hisxdb
361M hisxdblog
19G rollup
6.8G vpostgres
589M xdb
339M xdblog
On another data node:
2.8G activity
356K alarmxdb
136K alarmxdblog
4.0K blob
4.1M config_backup
117G data
17M heapdump
8.0G hisxdb
28M hisxdblog
7.2G rollup
5.0G vpostgres
876K xdb
348K xdblog
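Both listings above are per-directory totals from du; assuming these directories all sit under /storage/db/vcops (the data directory mentioned below is /storage/db/vcops/data), the equivalent command is roughly:

# Per-directory usage in human-readable form; pipe through sort -h to order by size
du -sh /storage/db/vcops/*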
The /storage/db/vcops/data directory accounts for almost all of the difference in used space between the nodes.
I tried a disk space rebalance, which ran for almost 30 hours, but the numbers above are the result I ended up with.
Is this normal? Should I just expand the /storage/db disks on all nodes, or is there something I can do to balance the disk space more evenly?
Thanks
Francescoo