Has anyone figured out a reasonable way to dynamically remove segment server instances from a Greenplum cluster? (Ideally without incurring downtime, but with downtime is OK too.)
The idea is to have a common pool of spare segment servers from which to bring additional segment servers into the cluster during peak-load situations, and then release them back to the pool once done. I know there may be a way to grow the cluster using the gpexpand/gprecoverseg utilities (although I haven't validated it; any pointers there are appreciated too).
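For context, the grow-the-cluster half I have in mind would look roughly like the following. This is only a sketch; I haven't validated it, and the input-file name is a placeholder:

```shell
# Run gpexpand interactively to describe the new segment hosts;
# it writes out an expansion input file.
gpexpand

# Apply the generated input file to add the new segment instances.
gpexpand -i gpexpand_inputfile

# Redistribute existing tables onto the new segments;
# -d caps the redistribution window (here 60 hours).
gpexpand -d 60:00:00

# Remove the expansion schema once redistribution is complete.
gpexpand -c
```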
But what I have not found is a way to release segments from a running cluster.
Some questions that might help narrow this down: How should we think about data redistribution? Can we issue a redistribute request to a segment to completely drain it of all data, and then decommission it via some command sequence? And what is the granularity of control: can we release individual segment instances, or only the entire set of segment instances running on one physical host?
Appreciate any helpful pointers that others may have.
There is no officially supported way to remove segments dynamically.
You can create a new GPDB cluster and dump/restore the data from the old cluster.
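A minimal sketch of that dump/restore path, assuming the gpcrondump/gpdbrestore utilities (newer releases ship gpbackup/gprestore instead); the database name and timestamp below are placeholders:

```shell
# On the old cluster: take a full dump of the database
# (-a runs without prompting for confirmation).
gpcrondump -x mydb -a

# On the new, smaller cluster: restore using the dump timestamp
# that gpcrondump reported.
gpdbrestore -t 20140101093000 -a
```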
If you do need to release the segment instances on one segment server for some reason, the key is to mark those segment instances as down in the gp_segment_configuration catalog table.
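You can inspect the current segment layout with a read-only query like the one below (these columns exist in gp_segment_configuration; do not update the catalog yourself, since that is exactly the part support needs to drive):

```shell
# List all segment instances and their status ('u' = up, 'd' = down).
psql -d postgres -c "
  SELECT dbid, content, role, status, hostname, port
  FROM gp_segment_configuration
  ORDER BY content, role;"
```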
But this requires engaging EMC support, since the activity is very critical.