JasonCwik
281 Posts
1
June 8th, 2016 09:00
Hi Mike,
Replication in ECS is handled on a continuous basis, not on a schedule. On the ECS dashboard, you can see three items:
RPO
Data Pending Geo-Replication
Replication Rate
RPO is your Recovery Point Objective: essentially how far behind the remote site would be in the event of a disaster at the primary site, e.g. 30 seconds. The next two items show how much data still needs to be replicated and how fast replication is currently running. If you click the "Geo Monitoring" header, you can drill down to see details about replication activity.
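As a rough back-of-the-envelope check (not an official ECS formula), the catch-up time implied by those two dashboard numbers is simply the pending data divided by the current replication rate. The function name and units below are my own illustration:

```python
def estimated_catch_up_seconds(pending_bytes: float, rate_bytes_per_sec: float) -> float:
    """Rough estimate of how long the remote site needs to catch up:
    data pending geo-replication divided by replication throughput."""
    if rate_bytes_per_sec <= 0:
        raise ValueError("replication rate must be positive")
    return pending_bytes / rate_bytes_per_sec

# Example: 1.5 GB pending at 50 MB/s works out to 30 seconds,
# which would line up with an RPO reading of about 30s.
lag = estimated_catch_up_seconds(1.5e9, 50e6)
print(lag)  # 30.0
```

This is only a steady-state approximation; if new writes keep arriving faster than the replication rate, the pending figure (and the RPO) will grow rather than drain.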
All data written to disk in ECS is compressed before it is written to chunks, so it is transmitted over the wire in compressed form. All replication traffic is also encrypted in flight using AES-256. Upon arrival, we write the chunk data in the same format, so no decompression or unpacking is required.
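To illustrate the compress-once idea, here is a generic sketch using zlib (ECS's actual chunk codec is not specified here): the data is compressed once at the source, travels and is stored in compressed form, and is only decompressed when a client reads it back.

```python
import zlib

# Source site: compress once, before the data is written to a chunk.
original = b"object payload " * 1000
chunk = zlib.compress(original)

# Wire + destination site: the same compressed bytes travel and are
# stored as-is; no recompression or unpacking happens along the way.
stored_at_remote = chunk

# Only a client read at either site needs to decompress.
restored = zlib.decompress(stored_at_remote)
assert restored == original

print(len(original), len(chunk))  # compressed form is much smaller
```

The payoff of this design is that replication bandwidth and remote disk usage both see the compressed size, and the destination does no extra CPU work on ingest.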
For more information, I'd suggest the ECS Overview and Architecture White Paper, available on the documentation page: Elastic Cloud Storage 2.2.x Product Documentation
samuel_cox
1 Rookie
•
3 Posts
0
October 23rd, 2023 19:09
What could the issue be if the amount of data pending geo-replication is very high, say 1.3 TB? What do you think could be causing that?