
June 11th, 2012 06:00

Question regarding Remote Mirror setup

Hi,

I am in the process of setting up DR for one server using remote mirroring with MirrorView. I have set up the remote mirrors in async mode. Two LUNs are Consistent now and two are still Synchronizing.

Now I have to present the DR LUNs to the ESX servers. But the ESX admin says that when they create a datastore, the LUN will get formatted. So I want to know how I can proceed from here.

My thought is:

1. Fracture the secondary mirror.

2. Present the LUN to the DR server.

3. Once the datastore has been created, synchronize it again.

Can somebody tell me if this is the correct procedure? I haven't done this before. Also, if there is any update to the DR mirror (secondary), will that get copied to the primary mirror?

Thanks in advance

It's a CX4 120.


June 12th, 2012 05:00

Yes, in that case you would promote the secondary image and add it to the storage group with the DR host. Then rescan and follow ksp's procedure:

ksp wrote:

3. ESX still won't automatically mount this new primary volume right away, because it is on a different array and has a different address, so it gets detected as a snapshot. Now you have to:

a) Make sure that the original Vol1 isn't, and won't be, accessible to ESX during the DR. It's OK if it is the secondary now; otherwise you have to disable access to it by other means.

b) Mount the new primary volume Vol2 using either resignature or a temporary mount -- choose whichever is more appropriate for your situation:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011387

We use a temporary mount with esxcfg-volume -m in our tests, because we put everything back shortly after. (Caution: we had a bad experience with the persistent mount esxcfg-volume -M; we had to reboot all ESX hosts to get the volume back. Just a heads-up for you.)
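The promote-and-present step above can be sketched with the Navisphere CLI. The SP hostnames, mirror name, storage group name, and LUN/HLU numbers below are placeholders, and the exact `mirror -async` flags should be verified against the CLI reference for your FLARE revision:

```shell
# Promote the secondary image so the DR array owns a writable primary
# (run against an SP on the DR array; all names/numbers are examples).
naviseccli -h SPA_DR mirror -async -promoteimage -name DR_Mirror_01

# Present the now-primary LUN to the DR host's storage group:
# -alu is the array LUN number, -hlu is the LUN number the host sees.
naviseccli -h SPA_DR storagegroup -addhlu -gname DR_ESX_Group -hlu 10 -alu 25

# Rescan on the ESX host so it detects the new device.
esxcfg-rescan vmhba1
```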


June 11th, 2012 09:00

Hi,

My ESX admin says that whenever he tries to create a datastore using the replicated LUN, it shows up as "The hard disk is blank".

I am not familiar with these steps. Can you tell me the steps one by one? That would be really helpful for me.

June 11th, 2012 09:00

Hi Anoop,

As far as I know, once you fracture the mirror, make some changes on the secondary mirror (LUN), and then re-sync the mirror, the changes you made on the secondary mirror (LUN) will not have any impact on the primary mirror (LUN). The sync only copies data from the primary mirror (LUN) to the secondary mirror (LUN).

Any inputs here?

Regards,

Suman Pinnamaneni


June 11th, 2012 09:00

Correction: it's a CX4 120. Typo.


June 11th, 2012 09:00

When you present the target LUN to the ESX server(s), you will need to click the Add Storage button in ESX. You will be asked whether you want to re-signature the LUN. Select Yes; the data on the LUN will not be erased.

June 11th, 2012 09:00

Hi Anoop,

May I know if both arrays are CX4 120s? Also, please let us know the FLARE release on both arrays.

Regards,

Suman Pinnamaneni


June 11th, 2012 09:00

Hi Suman,

Yes, both arrays are CX4 120s. The FLARE revision is 4.29.000.5.014 on both sides.


June 11th, 2012 10:00

That's because he didn't select the re-signature option when adding the datastore. When he goes to add the VMFS datastore, tell him to choose the re-signature option.


June 12th, 2012 04:00

Hi All,

The ESX admin says that we only get the re-signature option if the LUN already carries a VMFS label. I am not sure. Here is the screenshot that he provided.

$B72C69C5CD015AD.bmp

I just want to clarify a few things here.

I am basically from a Symmetrix background. On Symmetrix we use SRDF to set up DR, and we can use the target LUN during a disaster to resume the application. But on CLARiiON, is it possible to use the remote mirror LUN the same way? I read a few docs on MirrorView that describe taking a snapshot of the remote mirror and presenting it to the host during a disaster. Does that mean we can't present the target LUN to the host directly?


June 12th, 2012 05:00

A replicated (secondary) volume is NOT accessible to a host even if it is fractured. It may show up as a disk, but it is not accessible; you have to promote it to primary or take it out of the mirror to access it again. There is a nice explanation here (particularly pages 19 and 56): https://powerlink.emc.com/nsepn/webapps/btg548664833igtcuup4826/km/live1/en_US/Offering_Technical/White_Paper/h2417-mirrorview_knowledgebook-flare-wp.pdf

Here's the procedure we use for DR with MV/S and VMware. I'm not familiar with MV/A, but I'm sure it's pretty much the same.

1. Make a mirror pair and let it synchronize. Let's call the volumes "Vol1" and "Vol2".

2. Present both Vol1 and Vol2 to the ESX servers -- we don't use a dedicated server for DR; everything is active-active. Presenting the secondary LUN may not be an officially recommended practice, but it saves a couple of steps in a DR situation. We have seen no ill effects after many years and countless DR tests.

3. Make VMFS datastore on the primary volume Vol1.

4. Done and ready for use. Nothing has to be done to the secondary Vol2.

In a DR situation where you have lost the primary volume Vol1:

1. Shut down all affected guests.

2. Promote the secondary copy Vol2 to primary -- now it becomes available to ESX.

3. ESX still won't automatically mount this new primary volume right away, because it is on a different array and has a different address, so it gets detected as a snapshot. Now you have to:

a) Make sure that the original Vol1 isn't, and won't be, accessible to ESX during the DR. It's OK if it is the secondary now; otherwise you have to disable access to it by other means.

b) Mount the new primary volume Vol2 using either resignature or a temporary mount -- choose whichever is more appropriate for your situation:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011387

We use a temporary mount with esxcfg-volume -m in our tests, because we put everything back shortly after. (Caution: we had a bad experience with the persistent mount esxcfg-volume -M; we had to reboot all ESX hosts to get the volume back. Just a heads-up for you.)

4. If the array that hosted the primary Vol1 was not available during promotion, you have to clean up the old primary volume Vol1 by force-destroying the orphaned mirror, adding Vol1 back as the secondary, and resyncing it. Refer to the MV documentation for the steps.
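The mount in step 3b looks roughly like this on the DR host's service console (ESX 3.5/4.x syntax; the datastore label "Datastore01" is a placeholder):

```shell
# List VMFS volumes that were detected as snapshots/replicas after a rescan.
esxcfg-volume -l

# Non-persistent mount: keeps the original signature; the volume is
# dropped again on reboot (handy for DR tests that will be rolled back).
esxcfg-volume -m "Datastore01"

# Alternative: resignature the copy so it mounts permanently under a
# new identity (it will show up with a snap-xxxxxxxx prefix).
# esxcfg-volume -r "Datastore01"
```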

This procedure seems complicated, but it really isn't. Things get a bit ugly only when you have to manage mirrored RDMs. Just test it and keep the procedure readily available so you won't have to search the web for it. You can write some scripts, and there are software suites that do all of this pretty much automagically for you -- if you can afford them, that is.


June 12th, 2012 05:00

Anoop,

What version of ESX are you running? I recall you had to change an advanced setting in ESX 3.5; I'm not sure whether this is still necessary on ESX 4/5.

The full procedure to build a DR environment is as follows. You've already done some of the steps, so skip those where appropriate:

Set up the environment:

- Create two identical LUNs, one on the primary system, one on the secondary.

- Make sure MirrorView is correctly configured (MirrorView Connections established etc).

- Create a mirror on the primary system.

- Add the secondary image using the LUN on the secondary array. Make sure you keep the "Initial Sync" option checked, otherwise you'll have garbage on the target LUN.

- Wait for the state of the secondary image to change from "Synchronizing" to either "Synchronized" or "Consistent" (this depends on whether there is still I/O going to the primary LUN; if the LUN is still in use, Consistent is the best you'll get).

Once this is in place, you can be pretty certain that your data is mirrored.
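The setup steps above can be sketched with the Navisphere CLI, roughly as follows. All names and LUN numbers are placeholders, and the `mirror -async` flags are from memory of that era's CLI, so verify them against the CLI guide for your FLARE release (they differ slightly between MV/S and MV/A):

```shell
# On the primary array: create the async mirror on source LUN 20.
naviseccli -h SPA_PROD mirror -async -create -name DR_Mirror_01 -lun 20

# Add the secondary image, i.e. LUN 25 on the remote (DR) array.
# The initial sync is enabled by default -- do not disable it.
naviseccli -h SPA_PROD mirror -async -addimage -name DR_Mirror_01 \
    -arrayhost SPA_DR -lun 25

# Watch the image state until it reaches Synchronized or Consistent.
naviseccli -h SPA_PROD mirror -async -list -name DR_Mirror_01
```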

Now, to do a DR test or a failover, the following steps have to be taken:

- Fracture the mirror. This stops the sync.

- Promote the secondary image. This enables it for host I/O.

- Add the secondary image to a storage group. Make sure your primary image is NOT allocated to the same ESX host though, or your ESX host will get very confused.

- Rescan. Previously, if you had the resignature setting ENABLED, you would see your datastores pop up again with a "snap" prefix in the name.

- Or, as per Ernes a couple of posts above, use the "Add Storage" button in later versions of VMware.

The above steps are for an actual failover. The downside is that, after you promote your secondary image, it's now your production LUN. So to get back to your initial situation, you now have to:

- Reverse the mirror (basically, recreate it).

- Stop host I/O for a graceful fallback.

- Fracture and promote again -> production is now back at its old location.

- Recreate your mirror again.

So this takes a lot of time and pushes gigantic amounts of data across your links, since each time you have to do an initial sync (you lose your logs with each promote).

For a simple DR test, you can also fracture the mirror (to stop the I/O to the secondary image) but NOT promote the secondary image. Instead, use SnapView to create a snap of the secondary image. This snap is accessible for host I/O and can be added to a storage group. Do your testing. Afterwards, remove the snap and restart your mirror: it should pick up where it left off.

(Or you can even NOT fracture it, but then you need quite a large RLP depending on the mirrored workload).
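The snap-based DR test can be sketched like this. The session and snapshot names and LUN numbers are placeholders, and the SnapView flags here are from memory -- check them against the SnapView CLI reference before relying on them:

```shell
# On the DR array: create a snapshot object for secondary image LUN 25.
naviseccli -h SPA_DR snapview -createsnapshot 25 -snapshotname DRTest_Snap

# Start a SnapView session on that LUN (this is when reserved LUN pool
# space starts being consumed), then activate the snapshot against it.
naviseccli -h SPA_DR snapview -startsession DRTest_Session -lun 25
naviseccli -h SPA_DR snapview -activatesnapshot DRTest_Session \
    -snapshotname DRTest_Snap

# Present the snapshot to the test host's storage group, run the test,
# then deactivate the snapshot, stop the session, and remove the snap.
```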

Hope this makes sense.. if you have any questions, don't hesitate!


June 12th, 2012 05:00

But here we are going to build a new server in the DR site, so the primary and secondary hosts will be different. In this case, how do we proceed?

Promote the remote LUN and assign it to the DR host? (Here the servers are VMs, and we are unable to create a datastore using the DR LUN.)
