PowerProtect Data Manager ESDB: Data too large, data for activity

Summary: PowerProtect Data Manager activities stalled due to document growth in the Elasticsearch database. The error found was: "[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]" ...

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

Activities in PowerProtect Data Manager, such as backup, restore, or replication processes, are impacted because of the Elasticsearch issue.

  • The following error appears for the Elasticsearch index_activity index:

                    "reason": Object {
                        "caused_by": Object {
                            "caused_by": Object {
                                "bytes_limit": Number(858993459),
                                "bytes_wanted": Number(859222328),
                                "durability": String("PERMANENT"),
                                "reason": String("[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]"),
                                "type": String("circuit_breaking_exception"),
                            },
                            "reason": String("CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]]"),
                            "type": String("execution_exception"),
                        },
                        "reason": String("java.util.concurrent.ExecutionException: CircuitBreakingException[[fielddata] Data too large, data for [_id] would be [859222328/819.4mb], which is larger than the limit of [858993459/819.1mb]]"),
                        "type": String("exception"),
                    },
                    "shard": Number(0),
                },
  • The Elasticsearch logs show the following:

    org.elasticsearch.transport.RemoteTransportException: [local][127.0.0.1:14400][indices:data/read/search[phase/query]]
    Caused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Query Failed [Failed to execute main query]
            at org.elasticsearch.search.query.QueryPhase.executeInternal(QueryPhase.java:228) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:98) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:459) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:623) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.search.SearchService.lambda$executeQueryPhase$2(SearchService.java:484) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:47) [elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:62) ~[elasticsearch-7.17.17.jar:7.17.17]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.17.jar:7.17.17]
  • The size of the Elasticsearch index_activity index was found to be 21.6 GB:

    ┌─────┬────────┬────────┬───────────────────────────┬────────────────────────┬─────┬─────┬────────────┬──────────────┬────────────┬────────────────┐
    │ ##  ┆ health ┆ status ┆ index                     ┆ uuid                   ┆ pri ┆ rep ┆ docs.count ┆ docs.deleted ┆ store.size ┆ pri.store.size │
    ╞═════╪════════╪════════╪═══════════════════════════╪════════════════════════╪═════╪═════╪════════════╪══════════════╪════════════╪════════════════╡
    │ 1   ┆ green  ┆ open   ┆ index_activity            ┆ VCi1Df7tQLemZrSnYLyHZg ┆ 1   ┆ 0   ┆ 87971760   ┆ 8576108      ┆ 21.6gb     ┆ 21.6gb         │
    │ 2   ┆ green  ┆ open   ┆ index_protection_copy_set ┆ NzMVnlX_RPG0v3dUXXGznA ┆ 1   ┆ 0   ┆ 15634865   ┆ 4026853      ┆ 6gb        ┆ 6gb            │
    │ 3   ┆ green  ┆ open   ┆ index_asset_protection_de ┆ nujLQyzmRTuCsDlR5ncvBw ┆ 1   ┆ 0   ┆ 6131465    ┆ 1282123      ┆ 3.4gb      ┆ 3.4gb          │
    │     ┆        ┆        ┆ tail                      ┆                        ┆     ┆     ┆            ┆              ┆            ┆                │
    │ 4   ┆ green  ┆ open   ┆ index_protection_copy     ┆ JPGjWiwUS0Wilw87QWE_OA ┆ 1   ┆ 0   ┆ 2542590    ┆ 277569       ┆ 2.1gb      ┆ 2.1gb          │
  • High ESDB process utilization was observed, with CPU spikes of up to 900%
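The index sizes above come from Elasticsearch's `_cat/indices` output (sorted by `store.size`). As a read-only illustration, the sketch below parses rows in that format and flags oversized indices; the sample rows are copied from the table in this article, and the 10 GB threshold is an illustrative assumption, not a documented limit.

```python
# Read-only sketch: flag oversized ESDB indices from `_cat/indices` output.
# Sample rows are taken from this article; the 10 GB threshold is assumed.

SAMPLE = """\
green open index_activity                VCi1Df7tQLemZrSnYLyHZg 1 0 87971760 8576108 21.6gb 21.6gb
green open index_protection_copy_set     NzMVnlX_RPG0v3dUXXGznA 1 0 15634865 4026853 6gb    6gb
green open index_asset_protection_detail nujLQyzmRTuCsDlR5ncvBw 1 0 6131465  1282123 3.4gb  3.4gb
green open index_protection_copy         JPGjWiwUS0Wilw87QWE_OA 1 0 2542590  277569  2.1gb  2.1gb
"""

UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def to_bytes(size: str) -> int:
    """Convert a `_cat` size string such as '21.6gb' to bytes."""
    for suffix in ("kb", "mb", "gb", "b"):  # check 'b' last so 'gb' wins
        if size.endswith(suffix):
            return int(float(size[: -len(suffix)]) * UNITS[suffix])
    raise ValueError(f"unrecognized size: {size}")

def oversized(cat_output: str, threshold_bytes: int) -> list[str]:
    """Return names of indices whose store.size exceeds the threshold."""
    hits = []
    for line in cat_output.strip().splitlines():
        fields = line.split()
        name, store_size = fields[2], fields[8]  # index name, store.size
        if to_bytes(store_size) > threshold_bytes:
            hits.append(name)
    return hits

print(oversized(SAMPLE, 10 * 1024**3))  # → ['index_activity']
```

Only index_activity crosses the assumed threshold, matching the symptom described above.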

 

Cause

The problem was found with the Elasticsearch index_activity index.
The node becomes overloaded when Elasticsearch estimates how much memory a field needs in order to be loaded into the JVM heap. The fielddata circuit breaker prevents the field data from being loaded by throwing an exception when the operation would push heap usage over the limit.
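The limit in the error message lines up with Elasticsearch's fielddata circuit breaker, which by default trips at 40% of the JVM heap (`indices.breaker.fielddata.limit`). A small worked check, assuming a 2 GiB ESDB heap (inferred from the numbers in the error, not stated in this article):

```python
# Reproduce the breaker limit from the error, assuming a 2 GiB JVM heap
# and the Elasticsearch default fielddata breaker of 40% of the heap.
heap_bytes = 2 * 1024**3                  # assumed -Xmx2g heap (not confirmed)
fielddata_limit = int(heap_bytes * 0.40)  # default indices.breaker.fielddata.limit

bytes_wanted = 859_222_328                # from the error: data for [_id]
print(fielddata_limit)                    # → 858993459, matching [858993459/819.1mb]
print(bytes_wanted > fielddata_limit)     # → True: the breaker trips
print(fielddata_limit * 10 // 1024**2 / 10)  # → 819.1 (mb; Elasticsearch truncates)
```

The estimated 859222328 bytes for the [_id] fielddata exceeds the 858993459-byte limit, so the circuit_breaking_exception shown in the Symptoms section is raised instead of exhausting the heap.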

 

Resolution

Reduce the size of the ESDB index_activity index by removing outdated documents from it, which improves overall ESDB performance.
Since this is an internal task, do NOT attempt any changes on your own!
This activity requires support attention. Submit a service request to the Dell support team.
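Before opening the service request, the deleted-document backlog that drives the cleanup can be read without changing anything, for example via `_cat/indices/index_activity?format=json`. A hedged, read-only sketch of extracting that figure (the sample payload mirrors the table above; the endpoint shape is the standard Elasticsearch cat API, and no settings are modified):

```python
import json

# Sample payload shaped like `_cat/indices/index_activity?format=json`,
# with values taken from the table in this article. Read-only diagnostic
# data to attach to the Dell service request; nothing is changed here.
RESPONSE = json.dumps([{
    "health": "green", "status": "open", "index": "index_activity",
    "docs.count": "87971760", "docs.deleted": "8576108",
    "store.size": "21.6gb", "pri.store.size": "21.6gb",
}])

def deleted_ratio(cat_json: str) -> float:
    """Fraction of documents in the index that are deleted but not yet purged."""
    row = json.loads(cat_json)[0]
    deleted = int(row["docs.deleted"])
    total = int(row["docs.count"]) + deleted  # docs.count excludes deleted docs
    return deleted / total

print(f"{deleted_ratio(RESPONSE):.1%}")  # → 8.9%
```

Roughly 8.6 million deleted documents still occupy space in index_activity, which is the kind of stale data the Dell-guided cleanup removes.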

 

Article Properties
Article Number: 000227476
Article Type: Solution
Last Modified: 02 Sept 2025
Version:  2