This article summarizes the 2012 Chinese ATE (Ask the Expert) activity: “Integration of replication software on CLARiiON and VNX integration with host application”. The original thread is https://community.emc.com/thread/163697.
Can we replicate data between VNX and the third party SAN storage using SAN Copy? Does it support data migration from iSCSI storage to FC storage?
Yes, data can be replicated between VNX and third party storage using SAN Copy. Please refer to the <EMC Support Matrix>. Go to https://elabnavigator.emc.com/ and select PDFs and Guides -> Information Management Software -> SAN Copy. You can download the support matrix from that page.
In order to work with VNX SAN Copy, the third party storage must identify LUNs by WWN. As far as I know, some third party storage uses the iSCSI name (IQN or EUI) as the LUN UID. Such storage devices cannot replicate data with CLARiiON or VNX using SAN Copy.
SAN Copy sessions can run over either FC or iSCSI connections. However, when one storage device uses an FC connection while the other uses iSCSI, an FC-iSCSI gateway must be employed. This should be a rare case because almost all storage devices nowadays support both iSCSI and FC connections.
What is the functionality of the Write Intent Log? How does it differ from the fracture log?
The Write Intent Log (WIL) is a bitmap used in MirrorView/S. It consists of two 128 MB LUNs and tracks changes on the primary image. The use of the WIL is optional; it is enabled by default.
The fracture log tracks changes on the primary image when the MirrorView session becomes fractured. When the mirror is synchronized again, incremental synchronization based on the fracture log reduces the synchronization time.
The main differences:
• The WIL starts working as soon as it is enabled, while the fracture log starts working only when the MirrorView session is fractured.
• The WIL is stored on disk, while the fracture log is stored in SP memory. If an SP reboots unexpectedly, the fracture log may be lost. If the primary LUN is trespassed to the other working SP, the fracture log is trespassed along with it.
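The idea behind both logs can be sketched as a dirty bitmap over fixed-size extents of a LUN. This is a hypothetical simplification (real extent sizes, persistence, and on-disk layout differ), but it shows why an incremental resynchronization copies only changed regions:

```python
class ChangeBitmap:
    """Tracks which fixed-size extents of a LUN have changed.

    Both the Write Intent Log (persisted to disk) and the fracture log
    (kept in SP memory) follow this idea: mark dirty regions so that
    resynchronization copies only the changed extents, not the full LUN.
    """

    def __init__(self, lun_size, extent_size):
        self.extent_size = extent_size
        n = (lun_size + extent_size - 1) // extent_size
        self.dirty = [False] * n

    def record_write(self, offset, length):
        # Mark every extent touched by this write as dirty.
        first = offset // self.extent_size
        last = (offset + length - 1) // self.extent_size
        for i in range(first, last + 1):
            self.dirty[i] = True

    def extents_to_resync(self):
        # Only these extents need to be copied during incremental sync.
        return [i for i, d in enumerate(self.dirty) if d]


log = ChangeBitmap(lun_size=1024, extent_size=128)  # toy sizes
log.record_write(offset=200, length=100)   # touches extents 1 and 2
log.record_write(offset=900, length=50)    # touches extent 7
print(log.extents_to_resync())  # → [1, 2, 7]
```

A full synchronization would copy all eight extents; with the bitmap, only three are copied, which is the time saving the fracture log provides after a fracture.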
We are doing reverse synchronization using SnapView/clone. After it is completed, will the clone status become fractured automatically if the protected restore option is enabled? Will the status be the same if the protected restore option is disabled?
Reverse synchronization has a function called instant restore. As soon as the restore process starts, the image presented by the source LUN becomes exactly the same as the clone LUN. The synchronization runs in the background so that applications on the source LUN can remain available.
The protected restore option protects data on the clone LUN while reverse synchronization runs. With protected restore enabled, data written to the source LUN is not synchronized to the clone LUN. Once reverse synchronization is completed, data on the source LUN differs from that on the clone LUN, so the clone becomes fractured.
If protected restore is not selected, all data written to the source LUN continues to be synchronized to the clone LUN. Once reverse synchronization is completed, data on the source LUN is the same as that on the clone LUN, so the clone becomes synchronized.
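The two outcomes above can be sketched with a toy model (hypothetical; real arrays work at the block level with change bitmaps, not Python dicts):

```python
def reverse_sync(source, clone, host_writes, protected_restore):
    """Sketch of SnapView clone reverse synchronization.

    Instant restore: the source immediately presents the clone's
    point-in-time image. Host writes arriving afterwards go to the
    source and are mirrored to the clone only when protected restore
    is disabled.
    """
    source.clear()
    source.update(clone)              # source now matches the clone image
    for block, data in host_writes:   # writes during the restore
        source[block] = data
        if not protected_restore:
            clone[block] = data       # keep the clone in step
    return "synchronized" if source == clone else "fractured"


clone_image = {0: "a", 1: "b"}
writes = [(1, "x")]
print(reverse_sync({}, dict(clone_image), writes, protected_restore=True))   # → fractured
print(reverse_sync({}, dict(clone_image), writes, protected_restore=False))  # → synchronized
```

With protected restore the clone keeps the point-in-time image, so the two copies diverge and end up fractured; without it, mirroring continues and they end up synchronized.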
Please give a brief introduction to Celerra Replicator: how does this technology apply to EMC products, and what are its limitations?
Celerra Replicator is a replication technology that operates at the file-system level. The source and destination file systems must be the same size. They can reside on the same Data Mover, on two Data Movers within one Celerra, or on two different Celerra systems. If the source and destination file systems run on two different Celerra systems, the physical network connections and trusted authentication must be established first.
How to enable the replicator function on Celerra/VNX? Is it enough just to install an enabler?
Replicator can function as long as the ReplicatorV2 licensed option is selected. Use the following navigation path to enable the replicator function.
For VNX series, go to VNX>Settings and select Manage License for File in the lower right corner.
For Celerra series, go to Celerra>System>System Information and select Manage License for File in the upper left corner.
Does Replicator V2 affect performance a lot? For example, I tested Replicator V2 on VNX for File. Before replication started, there were 1000 IOPS on the host side. When the first replication completed, IOPS dropped to 500. Could you share some experience on that?
VNX Replicator V2 is based on snapshot technology and uses copy on first write. This adds some latency to write operations but has no impact on read operations. If the performance requirement is high, check the number of running snapshot sessions.
In the initial replication, a large amount of data is transmitted which occupies the system resources. It is recommended to start initial replication when the workload is not high. If the initial replication has been completed, then only the snapshot sessions are left to affect the performance.
Another factor that affects performance is the type of application running on the hosts. In your example, IOPS dropped from 1000 to 500 during the replication. A possible cause may be that the I/O workload from the applications dropped suddenly. If the only thing running is this replication session, the difference should not be that large. I recommend checking whether other snapshot sessions are running.
One other thing: antivirus scanning on file systems can also affect file-system performance.
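The copy-on-first-write mechanism behind the write latency discussed above can be sketched as follows (a conceptual model; real implementations track blocks in a dedicated save area or SavVol):

```python
class CofwSnapshot:
    """Copy-on-first-write snapshot (conceptual sketch).

    The snapshot initially shares all blocks with the source. The first
    write to a source block copies the original data to a save area;
    that extra copy is the write latency mentioned above. Reads of the
    source are unaffected.
    """

    def __init__(self, source):
        self.source = source          # live, mutable block map
        self.save_area = {}           # original blocks, copied on first write

    def write(self, block, data):
        if block not in self.save_area:
            # First write since the snapshot: preserve the old data
            # (this is the extra I/O that slows writes).
            self.save_area[block] = self.source[block]
        self.source[block] = data

    def read_snapshot(self, block):
        # Snapshot view: saved copy if the block changed, else the
        # shared live block.
        return self.save_area.get(block, self.source[block])


source = {0: "a", 1: "b", 2: "c"}
snap = CofwSnapshot(source)
snap.write(1, "B")       # triggers one copy into the save area
snap.write(1, "BB")      # no further copy for the same block
print(snap.read_snapshot(1), source[1])  # → b BB
```

Note that only the first write to each block pays the copy penalty, which is why the impact depends on the write pattern and on how many snapshot sessions are active at once.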
[Q&A for solutions]
The labs of my company are located at three different places, two in Shanghai and one in Beijing. The two labs in Shanghai are 30 kilometers apart and connected over DWDM. All these sites are open to external users. Northern users use China Unicom while Southern users use China Telecom. The company is planning to replace the current IT system with a new one. The hosts will be from HP and the network from Cisco. Local and remote data replication is always a challenge for us, so I would like to ask a few questions:
1. Which technology on EMC VNX can provide a replication solution that works with the HP host?
2. Which technology on EMC VNX can provide the smallest RPO and shortest RTO? Could you share some documents?
3. Is it possible to work out a solution to replicate data between two VNXs over 800 kilometers apart? How can we ensure that this solution works? I would appreciate it if you could provide some documents for reference.
1. On CLARiiON or VNX, only the SAN Copy software can replicate data with third party storage. There are two types of SAN Copy: full SAN Copy and incremental SAN Copy. Full SAN Copy is used for quick data migration, while the typical application of incremental SAN Copy is data distribution. To guarantee data consistency and validity, scripts must be written manually to coordinate with the running applications, so the configuration may be complex. MirrorView/A uses incremental SAN Copy technology but works only between CLARiiON or VNX systems.
SAN Copy turns the front-end ports of the CLARiiON or VNX SPs into initiators. To third party storage devices, these look like host HBAs or iSCSI initiators. LUN masking on the third party storage must be configured so that the CLARiiON/VNX SP can control the remote LUNs like local LUNs. That is why the replication is fast.
2. I would like to introduce the local replication technology SnapView Clone. Data on the production LUN is synchronized to the clone LUN continuously, so physical faults will not cause data loss and RPO is zero; data recovery, however, must be done manually. When the production LUN becomes unavailable, the clone LUN can be presented to the host directly, so RTO is reduced to the minimum.
3. For remote replication, the main concerns are bandwidth and response time. An FC network can be extended only with DWDM, which stretches the distance to about 200 kilometers. The bandwidth of an Ethernet network cannot meet the performance requirements of synchronous replication, so there too the distance is limited to within 200 kilometers. Since deploying a DWDM network is very expensive, the distance for synchronous replication is normally around 10 to 40 kilometers.
For longer distances, we recommend asynchronous replication for better response time.
Unlike high-end storage arrays, CLARiiON and VNX do not offer combined synchronous and asynchronous replication. If zero RPO is required over a long distance of up to 800 kilometers, a three-site recovery solution is recommended.
We know zero RPO can be achieved in VNX at one site. With the technology we have now for databases, networks and hosts, the secondary object can be promoted to primary seamlessly. Can storage follow that concept, so that we can seamlessly promote the secondary storage to primary?
Can the secondary storage be promoted seamlessly between two VNX systems at two sites about 30 kilometers apart, connected by a 1 Gb DWDM channel?
To back up data across three sites, we should use both synchronous and asynchronous replication. Synchronous replication works for the short distance between site 1 and site 2; asynchronous replication works over unlimited distances. For example, when the data at site 1 becomes unavailable, the synchronized data at site 2 is transmitted to site 3 over the long distance, ensuring that RPO is zero. Of course, many other models can be applied with Symmetrix.
When MirrorView/S is used between two CLARiiON or VNX devices, only the source LUN is accessible to the production server. Without other software, MirrorView/S alone requires a manual promotion; the storage must be combined with software on the hosts to achieve what we expect. When the mirrors are synchronized, the secondary image can be promoted to the primary image, and the secondary host can then take over the LUN.
Since data is replicated from the primary image to the secondary image, we can deploy a dual-active mode across two storage systems. For example: application A's primary image resides on storage system A and its secondary image on storage system B, while application B's primary image resides on storage system B and its secondary image on storage system A. The secondary image is not directly accessible to the servers. Many customers run applications on the primary image while backing up data from the secondary image.
When the storage product is combined with a virtualization product, EMC VPLEX, access without interruption is possible. LUNs can be replicated synchronously or asynchronously, and both RPO and RTO are optimized. For a 30-kilometer distance, when cost is a concern, DWDM is not necessary: two FC switches plus an extender acting as an amplifier should work up to 40 kilometers.
Real experience sharing
How do Snapshot/Clone work with other backup software? Why is snapshot required when there is already plenty of backup software?
The key consideration when choosing backup software is the impact on system performance. If the business workload is low at night and heavy only during working hours, we can perform the backup at night. In this case performance is not impacted and the backup software can be used directly for online backup.
If instead the business workload is heavy around the clock and the performance requirement is extremely high, a replication technology such as snapshot/clone is the better choice. Clone has the minimum impact on performance.
Now let me explain how a backup solution consisting of clone, backup software and Replication Manager works. First, Replication Manager determines the best time to perform the backup by interacting with the host applications. Then a clone generates a full copy of the data. This copy is presented to the backup server by the backup software.
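The orchestration described above can be sketched as an ordered sequence of calls. All object and method names here are hypothetical stand-ins for the real application, array, and backup-server interfaces; only the ordering reflects the flow in the text:

```python
class Recorder:
    """Stub that records method calls, standing in for the real app,
    storage, and backup-server interfaces (all names hypothetical)."""

    def __init__(self, log, name):
        self.log, self.name = log, name

    def __getattr__(self, method):
        # Any method call just records "<name>.<method>" in the log.
        return lambda: self.log.append(f"{self.name}.{method}")


log = []
app, storage, backup = (Recorder(log, n) for n in ("app", "storage", "backup"))

# The cycle from the text: quiesce the application so the copy is
# application-consistent, fracture the clone to freeze a point in time,
# resume the application immediately, then back up from the clone so
# the production host carries none of the backup load.
app.quiesce()
storage.fracture_clone()
app.resume()
backup.mount_clone()
backup.run_backup()
print(log)
```

The point of the design is the third step: the production application is held only for the brief quiesce/fracture window, while the long-running backup reads from the clone.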
As discussed above, the key consideration is the performance impact. Backing up directly on the production server has the largest performance impact and the lowest speed. A clone running on the storage itself has only about a 10% performance impact.
A. Data replication based on LUN
1. Snapshot and Clone
Snapshot and clone are both local replication technologies: they replicate data within one storage system. A snapshot is a point-in-time, pointer-based view of the source LUN. A typical scenario is to create snapshots of the source LUN and back up the data from the snapshots.
A clone is a full-volume mirror of the source LUN. The clone LUN is the same size as the source LUN and synchronizes with it continuously. A typical scenario is to create clones of the source LUN and then run tests on the clones.
2. SAN Copy
There are two types of SAN Copy: full SAN Copy and incremental SAN Copy. SAN Copy replicates data between two storage systems, and also supports replication between CLARiiON or VNX and a third party storage system. Data can be migrated from a smaller LUN to a larger LUN.
Full copy copies the entire original data in the LUN. A typical scenario is quick data migration between different storage systems.
Incremental copy copies data incrementally, based on snapshots. A typical scenario is data distribution, where up to 100 copies can be created quickly and synchronized incrementally.
3. MirrorView
There are two types, MirrorView/A and MirrorView/S. MirrorView replicates data between two different CLARiiON or VNX systems. The secondary image can be accessed only when MirrorView is combined with snapshot or clone.
MirrorView/S is synchronous replication. Data is written to the source and destination storage synchronously. High bandwidth is required for low response time. The applicable distance is short, usually within one city.
MirrorView/A is asynchronous replication. RPO is reduced to a few minutes or hours. Incremental SAN Copy and snapshot run in the background. The applicable distance is long, usually across two different cities.
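The difference in write acknowledgment between the two modes can be sketched with a toy model. This is hypothetical; `link_delay` simply stands in for the round-trip link latency that grows with distance:

```python
import time

pending = []   # updates queued for the next asynchronous cycle


def replicated_write(data, primary, secondary, mode, link_delay=0.01):
    """Toy model of MirrorView/S vs MirrorView/A acknowledgment.

    Synchronous: the host acknowledgment waits for the remote write,
    so response time grows with distance, hence the short-distance
    limit. Asynchronous: the host is acknowledged after the local
    write, and remote updates ship later, trading latency for a
    non-zero RPO of minutes to hours.
    """
    primary.append(data)
    if mode == "sync":
        time.sleep(link_delay)        # wait for the remote write and ack
        secondary.append(data)
        return "ack after both writes"
    pending.append(data)              # deferred to the next update cycle
    return "ack after local write"


primary, secondary = [], []
print(replicated_write("block-1", primary, secondary, mode="sync"))
print(replicated_write("block-2", primary, secondary, mode="async"))
print(pending)   # → ['block-2'] still waiting for the async cycle
```

Note that after the two writes, the secondary holds only the synchronously written block; the asynchronous block sits in the pending queue, which is exactly the window that makes MirrorView/A's RPO non-zero.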
B. Data replication based on file system
1. SnapSure
SnapSure uses snapshots, also called checkpoints, to replicate data on Celerra or VNX file systems. It supports up to 96 read-only snapshots and 16 readable and writeable snapshots. It uses copy on first write technology and is typically used for data backup.
2. Replicator V2
Replicator V2 uses asynchronous replication to replicate data on Celerra or VNX file systems. SnapSure runs in the background for incremental updates. The source and destination file systems must be the same size. It is typically used for disaster recovery and data migration.
There are three types of replicator:
• Loopback replicator - data replication between two file systems within one Data Mover
• Local replicator - data replication between file systems on two different Data Movers within one Celerra or VNX
• Remote replicator - data replication between two different Celerra or VNX systems
C. Other replication software
1. RecoverPoint
RecoverPoint supports local and remote replication in either synchronous or asynchronous mode. It is typically used for disaster recovery and supports consistent data recovery across multiple point-in-time images.
2. Replication Manager
Replication Manager interacts intelligently with the host applications and enables the host application to trigger the replication or update by itself.
Author: Nancy Qian
Please click here for all content shared by us.