PowerProtect Data Manager: ESDB "Data too large, data for [_id]" error on the activity index
Summary: PowerProtect Data Manager activities stopped due to an increase in the number of documents in the Elasticsearch database. The error found was: "Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]" ...
Symptoms
PowerProtect Data Manager activities such as backup, restore, and replication are affected by a problem with Elasticsearch.
The following error is seen for the Elasticsearch index_activity index:
"reason": Object { "caused_by": Object { "caused_by": Object { "bytes_limit": Number(858993459), "bytes_wanted": Number(859222328), "durability": String("PERMANENT"), "reason": String("[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]"), "type": String("circuit_breaking_exception"), }, "reason": String("CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]]"), "type": String("execution_exception"), }, "reason": String("java.util.concurrent.ExecutionException: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]]"), "type": String("exception"), }, "shard": Number(0), }, -
The Elasticsearch logs indicate the following:
org.elasticsearch.transport.RemoteTransportException: [local][127.0.0.1:14400][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Query Failed [Failed to execute main query]
    at org.elasticsearch.search.query.QueryPhase.executeInternal(QueryPhase.java:228) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:98) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:459) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:623) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.search.SearchService.lambda$executeQueryPhase$2(SearchService.java:484) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:47) [elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62) ~[elasticsearch-7.17.17.jar:7.17.17]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.17.jar:7.17.17]
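To confirm that the fielddata circuit breaker is the one tripping, its counters can be read from the Elasticsearch node stats API. A minimal read-only check, assuming the ESDB REST endpoint is reachable locally on port 9200 (the actual host, port, and any authentication depend on the PowerProtect Data Manager deployment):

    # Read per-node circuit breaker statistics (read-only)
    curl -s "http://127.0.0.1:9200/_nodes/stats/breaker?pretty"

In the output, the breakers.fielddata section reports limit_size_in_bytes, estimated_size_in_bytes, and a tripped counter; a non-zero tripped count matches the circuit_breaking_exception shown above.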
The size of the Elasticsearch index_activity index was found to be 21.6 GB:
┌────┬────────┬────────┬───────────────────────────────┬────────────────────────┬─────┬─────┬────────────┬──────────────┬────────────┬────────────────┐
│ ## ┆ health ┆ status ┆ index                         ┆ uuid                   ┆ pri ┆ rep ┆ docs.count ┆ docs.deleted ┆ store.size ┆ pri.store.size │
╞════╪════════╪════════╪═══════════════════════════════╪════════════════════════╪═════╪═════╪════════════╪══════════════╪════════════╪════════════════╡
│ 1  ┆ green  ┆ open   ┆ index_activity                ┆ VCi1Df7tQLemZrSnYLyHZg ┆ 1   ┆ 0   ┆ 87971760   ┆ 8576108      ┆ 21.6gb     ┆ 21.6gb         │
│ 2  ┆ green  ┆ open   ┆ index_protection_copy_set     ┆ NzMVnlX_RPG0v3dUXXGznA ┆ 1   ┆ 0   ┆ 15634865   ┆ 4026853      ┆ 6gb        ┆ 6gb            │
│ 3  ┆ green  ┆ open   ┆ index_asset_protection_detail ┆ nujLQyzmRTuCsDlR5ncvBw ┆ 1   ┆ 0   ┆ 6131465    ┆ 1282123      ┆ 3.4gb      ┆ 3.4gb          │
│ 4  ┆ green  ┆ open   ┆ index_protection_copy         ┆ JPGjWiwUS0Wilw87QWE_OA ┆ 1   ┆ 0   ┆ 2542590    ┆ 277569       ┆ 2.1gb      ┆ 2.1gb          │
└────┴────────┴────────┴───────────────────────────────┴────────────────────────┴─────┴─────┴────────────┴──────────────┴────────────┴────────────────┘
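A listing such as the one above can be produced with the _cat/indices API, sorted by on-disk size. A sketch, under the same local-endpoint assumption as above:

    # List all indices with document counts and store sizes, largest first (read-only)
    curl -s "http://127.0.0.1:9200/_cat/indices?v&s=store.size:desc"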
High ESDB process utilization was observed, with CPU usage spiking up to 900%.
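When the CPU spikes, the hot threads API shows which Elasticsearch threads are consuming it; with this issue, busy search and query threads working against index_activity would be expected. Another read-only sketch, under the same endpoint assumption:

    # Dump the busiest threads on each node (read-only)
    curl -s "http://127.0.0.1:9200/_nodes/hot_threads"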
Cause
The problem was found with the Elasticsearch index_activity index.
Elasticsearch estimates the amount of memory a field needs before loading it into the JVM heap. The fielddata circuit breaker prevents the load by raising a circuit_breaking_exception when the estimated usage would push the heap past the configured limit. With the oversized index_activity index, queries trip this breaker and the node becomes overloaded.
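The bytes_limit of 858993459 in the error is consistent with the default fielddata breaker setting, indices.breaker.fielddata.limit, of 40% of the JVM heap on a 2 GiB heap: 0.4 x 2147483648 bytes = 858993459 bytes, or roughly 819.1 MB. The effective default can be confirmed read-only, again assuming the local endpoint used above:

    # Show the effective default fielddata breaker limit (read-only)
    curl -s "http://127.0.0.1:9200/_cluster/settings?include_defaults=true&filter_path=defaults.indices.breaker.fielddata.limit&pretty"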
Resolution
Reduce the size of the ESDB index_activity index by dropping obsolete documents from it; this improves overall ESDB performance.
Since this is an internal task, DO NOT attempt any changes on your own.
This activity requires support attention; file a service request with the Dell support team.
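Once Dell Support has completed the cleanup, the effect can be verified with the same read-only listing used in the Symptoms section; docs.count, docs.deleted, and store.size for index_activity should drop accordingly:

    # Verify the index_activity index size after the cleanup (read-only)
    curl -s "http://127.0.0.1:9200/_cat/indices/index_activity?v"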