9 Legend


20.4K Posts

March 31st, 2009 09:00

Simply add another target LUN to your clone group:

naviseccli -user Administrator -password password -scope 0 -address 192.168.1.10 snapview -addclone -name OracleGroup -luns 363

You might want to look at setting the "-SyncRate" value to "low" to reduce the impact on the source LUN during the initial sync.
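As a sketch, the same addclone command with the sync rate set explicitly might look like this (the SP address, credentials, group name and LUN number are placeholders; the script only prints the command rather than running it against an array):

```shell
# Hypothetical values -- substitute your own SP address, credentials,
# clone group name and target LUN number.
SP="192.168.1.10"
GROUP="OracleGroup"
TARGET_LUN=363

# Build the addclone command with -SyncRate low to throttle the initial sync.
CMD="naviseccli -user Administrator -password password -scope 0 -address $SP \
snapview -addclone -name $GROUP -luns $TARGET_LUN -SyncRate low"

# Print instead of executing, since this is only a sketch.
echo "$CMD"
```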

9 Legend


20.4K Posts

March 21st, 2009 19:00

If you have the SnapView feature you could explore cloning your LUNs; that will give you a completely separate copy of your data. Unless your ESX boots from SAN, you also need to think about how you are going to recover the ESX system itself. As for cloning from FC LUNs to SATA: it will work, but you have to realize it will only go as fast as the SATA drives can handle it, and in the meantime your FC drives will have to service host requests as well as the SnapView operations. The very first time you clone them it will be a long process, but subsequent syncs are incremental, so SATA may suffice.
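A rough sketch of that clone workflow on the CLI might look like the following (the group name, LUN numbers, SP address and clone ID are made-up examples, the exact SnapView flags should be checked against your CLI reference, and `run()` only echoes each command instead of executing it):

```shell
# run() prints the full naviseccli command rather than running it --
# this is a dry-run sketch, not something to paste against a live array.
run() { echo "naviseccli -address 192.168.1.10 $*"; }

# 1. Create a clone group around the source (FC) LUN.
run snapview -createclonegroup -name ESX_DR -luns 100

# 2. Add a target LUN (which could live on SATA disks). This kicks off
#    the long initial full sync; later syncs are incremental.
run snapview -addclone -name ESX_DR -luns 363

# 3. Once synchronized, fracture the clone to get an independent
#    point-in-time copy of the data.
run snapview -fractureclone -name ESX_DR -cloneid 0100000000000000
```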

39 Posts

March 22nd, 2009 19:00

There are a couple of questions I have for you. For DR, how quickly do you need the data to be available? How old can that data be once it is available? Keep in mind that with synchronous replication, if the host writes corrupted data, the corruption is copied to the replicated LUN as well.

2 Intern


234 Posts

March 22nd, 2009 23:00

Hi,

We need a full copy of the LUN assigned to the VMware ESX server to be available immediately, so we plan to achieve this with a SnapView clone.


Is there anything different about how I would need to configure the clones so that the destination (clone) LUN resides on SATA disks instead of FC disks?

regards,
Samir

39 Posts

March 23rd, 2009 07:00

I'm not sure if you can make a clone of FC disks using SATAs, but if you did, you would probably significantly decrease the performance of your source LUNs. If you really are considering DR and you're forced to use SATA drives, it sounds like budget is a significant issue. Perhaps, for your purposes, it may be worth considering something host based, if your hosts can handle the extra load.

9 Legend


20.4K Posts

March 23rd, 2009 07:00

I'm not sure if you can make a clone of FC disks using SATAs, but if you did, you would probably significantly decrease the performance of your source LUNs.


I am assuming you mean a decrease in performance during the SnapView session? Regardless of whether the target is FC or SATA, there will be a decrease in performance during the session. I can see it lasting a longer period of time with SATA, but because SATA is so much slower, it can't bottleneck the source LUN that badly. That's my opinion, of course :)

9 Legend


20.4K Posts

March 23rd, 2009 08:00

Ah, I see your point. If you leave the clones synchronized, that would hurt performance, but for DR I would assume he would fracture them and re-sync every so often. If you leave them in synchronized mode, then as soon as there is corruption on the source LUN, there goes your clone copy as well.
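That fracture-and-re-sync cycle could be sketched roughly like this (the group name and clone ID are made up, the exact flag spellings should be checked against the SnapView CLI docs, and `echo_cmd()` only prints each command rather than running it):

```shell
# Dry-run wrapper: print the naviseccli command instead of executing it.
echo_cmd() { echo "naviseccli snapview $*"; }

# Re-sync the clone: copies only the extents changed since the last fracture.
echo_cmd -syncclone -name ESX_DR -cloneid 0100000000000000

# (wait here for the sync to finish, e.g. by polling the clone state
#  with a listclone-style command)

# Fracture again: the clone is once more an independent point-in-time
# copy, so corruption on the source can no longer propagate to it.
echo_cmd -fractureclone -name ESX_DR -cloneid 0100000000000000
```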

39 Posts

March 23rd, 2009 08:00

That was why I asked him how old the data could be. His reply made it sound like he needs real-time replication. For that reason, a host-based product may be worth considering... if the server can handle the load. DR isn't cheap, and I cannot think of a real solution for him using just the Clariion layered apps without some significant drawbacks and risk. If someone else can, I would LOVE to hear it. ;)

9 Legend


20.4K Posts

March 23rd, 2009 08:00

I am talking about SnapView Clone, not SnapView Snapshot, so there should be no COFW (copy on first write) penalty.

39 Posts

March 23rd, 2009 08:00

I honestly don't know how much the SATA drives would slow things down. I expect on sequential-type workloads it wouldn't be too bad, but for random I/O I've been told that SATA drives simply can't keep up under heavy load. I have not personally done any performance analysis on SATA other than synchronous stuff, so I cannot tell you from experience.

Snap sessions can really hurt too, though... depending on how the app handles latency.

39 Posts

March 23rd, 2009 08:00

Also, the way it sounds like he'll be using it, the clone will usually be in sync. That in and of itself will hurt, but I think the SATA drives could potentially exacerbate that significantly. Again, I have no data to back this up; I'm just thinking logically... or at least I'm trying to ;)

4 Operator


5.7K Posts

March 23rd, 2009 08:00

During the resync there is a COFW-style penalty, since syncing is a background process: if a write occurs on the source, the data is actually copied to the target first. It's just like SnapView snapshotting... it's just that in the end the target is 100% equal to the source.

9 Legend


20.4K Posts

March 23rd, 2009 09:00

Rob, where is this described? I just looked over the "SnapView for Navisphere Administrator's Guide" and it only mentions COFW for snapshots.

39 Posts

March 23rd, 2009 09:00

It doesn't talk about COFW that I noticed, but it does talk about how the I/O is handled when a clone is synced: the ack isn't sent until the data is written to both the source and the target. It also talks about the performance of SATA drives as clones... but I believe this assumes that the clone is fractured once it is synced. This definitely poses some interesting options that I wasn't aware of, but I believe the OP is looking for real-time replication... which implies being in sync.