February 13th, 2013 03:00

RecoverPoint Write Phase for Array-Based Splitters?

Hi Experts,

Can someone help me understand the write phase for RecoverPoint array-based splitters?

I know that in the case of host-based splitters, the write phase is as follows:

1. The production host writes data to the production volumes; the write is intercepted by the splitter, which sends a copy of the write data to the RPA.

2. Immediately upon receipt of the write data, the local RPA returns an ACK to the splitter.

3. The splitter then writes the data to the production storage volume.

4. The storage system returns an ACK to the splitter upon successfully writing the data to storage.

5. The splitter sends an ACK to the host that the write has completed successfully.
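
To make sure I have the ordering right, here is how I picture that sequence as a minimal Python sketch (all class and method names are made up for illustration; this is not EMC code):

    class RPA:
        def receive_write(self, data):
            # Step 2: the RPA ACKs as soon as the copy of the write arrives.
            return True

    class Storage:
        def write(self, data):
            # Step 4: storage ACKs once the data has been written.
            return True

    class HostSplitter:
        def __init__(self, rpa, storage):
            self.rpa = rpa
            self.storage = storage

        def handle_write(self, data):
            # Step 1: intercept the host write and send a copy to the RPA.
            rpa_ack = self.rpa.receive_write(data)
            # Step 3: only after the RPA ACK, write to the production volume.
            storage_ack = self.storage.write(data)
            # Step 5: ACK the host once both acknowledgements are in.
            return rpa_ack and storage_ack

    splitter = HostSplitter(RPA(), Storage())
    print(splitter.handle_write(b"block 42"))  # True -> write completed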

Can someone tell me how this happens in the case of array-based splitters?

Thanks,

February 14th, 2013 20:00

Can someone help me with this?

Thanks,

February 14th, 2014 09:00

Shubh,

A write operation with the Symmetrix write splitter goes something like this:

For a CDP (continuous data protection) configuration:

1) An application server issues a write to a LUN that is being protected by RecoverPoint. The write is “split” within the array, where the splitter is installed, and a copy is sent to the RPA.

2) When the copy of the write is received by the RPA, the backlog is synchronously mirrored to another RPA, after which the write is acknowledged back to the array splitter. The ACK is then sent back to the host.

3) Once the RPA has acknowledged the write, it moves the data into the local journal volume, along with a timestamp and any application, event, or user-generated bookmarks for the write.

4) Once the data is safely in the journal, it is distributed to the target volumes, ensuring that write order is preserved during distribution.

Note: The Symmetrix splitter does not have a backlog component of its own. Instead, the backlog is synchronously mirrored to another RPA, as described in step 2, which avoids a full sweep if an RPA fails before the data is replicated. This process is called Backlog Mirroring (BLM).
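
To illustrate the ordering, here is a minimal Python sketch of that CDP flow, including the backlog mirroring described in the note. It only models the sequence of events; the names are hypothetical, and this is not actual RecoverPoint code:

    import time

    class RPA:
        def __init__(self, peer=None):
            self.peer = peer      # partner RPA used for Backlog Mirroring
            self.backlog = []     # writes ACKed but not yet distributed
            self.journal = []     # stands in for the local journal volume
            self.target = {}      # stands in for the local target volume

        def receive_split_write(self, lba, data, bookmark=None):
            # Step 1 has already happened: the array splitter sent us a copy.
            entry = (time.time(), lba, data, bookmark)
            self.backlog.append(entry)
            # Step 2 / note: mirror the backlog entry synchronously to the
            # partner RPA *before* ACKing, so an RPA failure does not force
            # a full sweep.
            if self.peer is not None:
                self.peer.backlog.append(entry)
            return True  # ACK back toward the array splitter

        def distribute(self):
            # Steps 3-4: journal each write with its timestamp/bookmark,
            # then apply it to the target volume in original write order.
            while self.backlog:
                ts, lba, data, bookmark = self.backlog.pop(0)
                self.journal.append((ts, lba, data, bookmark))
                self.target[lba] = data

    mirror = RPA()
    rpa = RPA(peer=mirror)
    rpa.receive_split_write(7, b"new-block", bookmark="app-checkpoint")
    rpa.distribute()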

For a CRR (continuous remote replication) configuration:

1) The application server issues a write to a LUN that is being protected by RecoverPoint. This write is “split,” then sent to the RPA.

2) When the copy of the write is received by the RPA, it is immediately acknowledged back by the local appliance when running in asynchronous mode. When running in synchronous mode, the ACK is delayed until the write has been received at the remote site.

3) Once the appliance receives the write, it bundles it with other writes into a package. The writes are sequenced and stored with their corresponding timestamp and bookmark information. The package is then deduplicated and compressed, and an MD5 checksum is generated for the package.

4) The package is then scheduled for delivery to the remote appliance.

5) Once the package is received there, the remote appliance verifies the checksum to ensure the package was not corrupted in transmission. The data is then decompressed.

6) Next the data is written to the journal volume.

7) Once the data has been written to the journal volume, it is distributed to the remote volumes, ensuring that write-order sequence is preserved.
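
If it helps, here is a minimal Python sketch of the sender-side packaging in steps 3-4. The package format is made up for illustration, and zlib/hashlib from the standard library stand in for RecoverPoint's own deduplication and compression:

    import hashlib
    import time
    import zlib

    def build_package(writes):
        # writes: list of (lba, data) tuples in arrival order.
        # Step 3: sequence the writes with timestamps, then eliminate
        # redundant blocks by keeping only the latest write to each
        # address (a crude stand-in for the real deduplication).
        latest = {}
        for seq, (lba, data) in enumerate(writes):
            latest[lba] = (seq, time.time(), data)
        payload = repr(sorted(latest.items())).encode()
        compressed = zlib.compress(payload)             # compression
        checksum = hashlib.md5(compressed).hexdigest()  # MD5 checksum
        # Step 4: (compressed, checksum) is what gets scheduled for
        # delivery to the remote appliance.
        return compressed, checksum

    # Two writes to block 100: only the latest survives deduplication.
    pkg, md5sum = build_package([(100, b"v1"), (100, b"v2"), (200, b"x")])
    print(len(pkg), md5sum)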

Hope this helps.


February 16th, 2014 19:00

[Diagram: CDP write flow]

1. An application server issues a write to a LUN that is being protected by RecoverPoint. This write is “split,” then sent to the RecoverPoint appliance in one of three ways:

     • Host Splitter – Exists in the I/O stack, residing below any file system and volume manager, and just above any multipath driver, such as EMC PowerPath. The splitter looks at the destination of the write packet; if it is a LUN that RecoverPoint is protecting, the splitter sends a copy of the write packet to the RecoverPoint appliance.

     • Storage Array Splitter – Writes sent to the production volume are split and sent both to the production volume and to the RPA. In the CLARiiON and VNX, the splitter runs in each storage processor (SP). The VMAXe splitter is part of the Enginuity operating environment.

     • Fabric Splitter – Resides on the SAN switch. Writes sent to the production volume are split and sent both to the production volume and to the RPA.

2. When the copy of the write is received by the RecoverPoint appliance, it is acknowledged back. This “ack” is received by the splitter, where it is held until the “ack” comes back from the production LUN. Once both “acks” are received, the “ack” is sent back to the host, and I/O continues normally (see the sketch after this list).

3. Once the appliance has acknowledged the write, it moves the data into the local journal volume, along with a timestamp and any application, event, or user-generated bookmarks for the write.

4. Once the data is safely in the journal, it is distributed to the target volumes, ensuring that write order is preserved during this distribution.
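
Here is a minimal Python sketch of steps 1-2 (the names are hypothetical, not EMC code): the splitter copies writes destined for protected LUNs to the appliance, lets the original continue to the production LUN, and acknowledges the host only once both “acks” are in:

    PROTECTED_LUNS = {"lun-prod-01"}  # assumed example LUN IDs

    def handle_write(lun_id, data, send_to_rpa, send_to_lun):
        # send_to_rpa / send_to_lun return True when their ACK arrives.
        if lun_id not in PROTECTED_LUNS:
            return send_to_lun(data)   # unprotected: normal path only
        rpa_ack = send_to_rpa(data)    # split copy toward the appliance
        lun_ack = send_to_lun(data)    # original write to production LUN
        # Hold whichever "ack" arrives first until its partner is in,
        # then acknowledge the host so I/O continues normally.
        return rpa_ack and lun_ack

    print(handle_write("lun-prod-01", b"block 9",
                       lambda d: True, lambda d: True))  # True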

[Diagram: CRR write flow]

1. An application server issues a write to a LUN that is being protected by RecoverPoint. This write is “split,” then sent to the RecoverPoint appliance, the same as in a CDP deployment. From this point, whether the split is done by a host-based, fabric-based, or array-based splitter, the original write travels through its normal path to the production LUN.

2. When the copy of the write is received by the RecoverPoint appliance, it is immediately acknowledged back by the local RecoverPoint appliance, unless synchronous remote replication is in effect. If synchronous replication is in effect, the “ack” is delayed until the write has been received at the remote site. Once the “ack” is issued, it is processed by the splitter, where it is held until the “ack” comes back from the production LUN. Once both “acks” are received, the “ack” is sent back to the host, and I/O continues normally.

3. Once the appliance receives the write, it bundles it with other writes into a package. Redundant blocks are eliminated from the package, and the remaining writes are sequenced and stored with their corresponding timestamp and bookmark information. The package is then compressed, and an MD5 checksum is generated for the package.

4. The package is then scheduled for delivery across the IP network to the remote appliance.

5. Once the package is received there, the remote appliance verifies the checksum to ensure the package was not corrupted in transmission.

6. The data is then decompressed and written to the journal volume.

7. Once the data has been written to the journal volume, it is distributed to the remote volumes, ensuring that write-order sequence is preserved.
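
And a matching Python sketch of the receive side (steps 5-7), using the same made-up package format as the sender-side sketch earlier in the thread; zlib and hashlib stand in for the real decompression and checksum verification:

    import ast
    import hashlib
    import zlib

    def receive_package(compressed, expected_md5, journal, target):
        # Step 5: verify the MD5 before trusting the payload.
        if hashlib.md5(compressed).hexdigest() != expected_md5:
            raise ValueError("checksum mismatch; request retransmission")
        # Step 6: decompress and write the entries to the journal volume.
        entries = ast.literal_eval(zlib.decompress(compressed).decode())
        journal.extend(entries)
        # Step 7: apply to the remote volumes in write-order sequence.
        for lba, (seq, ts, data) in sorted(entries, key=lambda e: e[1][0]):
            target[lba] = data

    # Usage with a package shaped like the sender-side sketch produces:
    payload = repr([(100, (0, 0.0, b"v2")), (200, (1, 0.0, b"x"))]).encode()
    pkg = zlib.compress(payload)
    journal, target = [], {}
    receive_package(pkg, hashlib.md5(pkg).hexdigest(), journal, target)
    print(target)  # {100: b'v2', 200: b'x'}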
