
Unsolved



5 Practitioner • 274.2K Posts • January 15th, 2018 08:00

Cinder with VNX

Hi all!

I have performance problems with OpenStack (Cinder) using a VNX.

The source LUN is on SPA (SPA is both its default and current owner, and its allocation owner).

I see that when we do a clone, it creates a snap, then creates a new LUN on SPB, trespasses it to SPA, and copies all the data from the snap to the new LUN.

The full process takes 28 minutes for a 100 GB LUN, which is a very low copy rate for a purely internal operation.

There is no other activity on the VNX, and the source LUN is not assigned to any host.

Is it possible that the trespass affects performance? And is this rate normal for the operation? It works out to roughly 61 MB/s (100 GB ≈ 102,400 MB over 1,680 s) on a pool of 50 SAS 15k disks.

Thanks and regards!

Dani

5 Practitioner • 274.2K Posts • January 24th, 2018 18:00

Hi Danifont

The clone is actually implemented as a LUN migration on the VNX, and the driver uses the "high" rate for the data migration.

We did not make the "ASAP" rate the default, as it impacts host-side I/O according to the related VNX documentation.

Can you provide more details of how you use the cloned volume?

BTW, we have asynchronous migration support, which can greatly accelerate your use case:

OpenStack Docs: Dell EMC VNX driver

Note: the doc above is for the Pike release; you may need to refer to the version that matches your deployment.

Thanks

Peter

5 Practitioner • 274.2K Posts • January 25th, 2018 01:00

Can you share the OpenStack/driver versions for your deployment?

We removed the configurable migration rate in the Newton release; unfortunately, the documentation was not updated accordingly.


For your use case, you can simply create a batch of volumes from the template and attach them to the VMs, for example as sketched below.
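A minimal sketch using the standard Cinder and OpenStack CLIs (the loop, the vm-disk-* names, and the <template-volume-id> and <server> placeholders are mine, not from this thread):

$ for i in 1 2 3; do
>   cinder create --source-volid <template-volume-id> --name vm-disk-$i
> done
$ openstack server add volume <server> vm-disk-1

The size argument can be omitted here because a volume created with --source-volid inherits the source volume's size.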


Thanks

Peter

5 Practitioner • 274.2K Posts • January 25th, 2018 01:00

Hi gouzai!

Thanks for your answer!

I see in the documentation that we can use the ASAP parameter (not recommended):

-----------------------------------------

Configurable migration rate

VNX cinder driver is leveraging the LUN migration from the VNX. LUN migration is involved in cloning, migrating, retyping, and creating volume from snapshot. When admin set migrate_rate in volume’s metadata, VNX driver can start migration with specified rate. The available values for the migrate_rate are high, asap, low and medium.

The following is an example to set migrate_rate to asap:

$ cinder metadata <volume_id> set migrate_rate=asap

------------------------------------------
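For reference, once the metadata is set it can be double-checked with the standard metadata query (a sketch; <volume_id> is the same volume as in the quoted example):

$ cinder metadata-show <volume_id>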

We use the cloned volume to provision new volumes from a template disk, so it's no problem to use the ASAP parameter; the source volume is only a template.

We tested it, and whatever value we set for migrate_rate, the speed is always 65 MB/s at most.

Waiting for your answer. Regards!

5 Practitioner • 274.2K Posts • January 25th, 2018 02:00

Hi Peter ...

What you propose might be an option in a "normal" environment, but this is an automated environment and the customer needs to do everything through OpenStack; using NaviCLI or Unisphere is not an option in this case. Also, the source volumes live in a pool with many different LUNs, so creating pools of pools is not an option (it would require far too much space).

We also have a similar performance problem when we create backups using Cinder: the maximum speed is always around 30 to 60 MB/s. For an internal operation that is too slow for us; we expect at least 100 to 200 MB/s. (The backup path we use is sketched below.)
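For context, the backups are driven through the normal Cinder backup path, roughly like this (a sketch; the --name value and <volume> placeholder are mine):

$ cinder backup-create --name template-backup <volume>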

Thanks and regards!

5 Practitioner • 274.2K Posts • January 25th, 2018 02:00

@(#)Navisphere ./naviseccli Revision 7.33.9.1.55

cinder.conf:

san_ip = "192.168.48.16"
san_secondary_ip = "192.168.48.17"
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
destroy_empty_storage_group = False
initiator_auto_registration = True
storage_vnx_pool_names = PoolProd
io_port_list = A-1-0,A-3-0,B-1-0,B-3-0
iscsi_initiators = { "hudcct11":["192.168.128.11"],"hudcct12":["192.168.128.12"],"hudcct13":["192.168.128.13"],"hudccm11":["192.168.128.111"],"hudccm12":["192.168.128.112"],"hudccm13":["192.168.128.113"],"hudccm14":["192.168.128.114"],"hudccm15":["192.168.128.115"] }
force_delete_lun_in_storagegroup = True
max_luns_per_storage_group = 1100
storage_protocol = iscsi
use_multipath_for_image_xfer = True

The OpenStack version is RHEL OSP 10, which is based on the Newton release of OpenStack.
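As a side note, one way to confirm which rate the array actually uses during a clone is to list the active migration session with naviseccli while the copy is running (a sketch that reuses the san_ip and secfile path from the config above):

$ /opt/Navisphere/bin/naviseccli -secfilepath /etc/secfile/array1 -h 192.168.48.16 migrate -list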

Thanks for your interest!

5 Practitioner • 274.2K Posts • January 25th, 2018 06:00

What I meant was to use the OpenStack CLI/API to provision new cloned volumes and then attach them to the OpenStack instances.

For example, create a cloned volume from the template volume with:

$ cinder create --source-volid <template-volume-id> --name vol1 --metadata async_migrate=True

Instead of waiting for the migration to complete, vol1 becomes available for use as soon as the migration starts on the VNX.
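As a quick check (a sketch; vol1 is the name from the example above), you can poll the new volume right after issuing the create:

$ cinder show vol1

Its status should turn to "available" shortly after the migration starts on the array, instead of after the full copy finishes.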

Note: the feature above is only available in the downstream VNX Newton Cinder driver: GitHub - emc-openstack/vnx-direct-driver: VNX Direct Driver

Thanks

Peter

5 Practitioner • 274.2K Posts • January 25th, 2018 08:00

Hi Peter ...

I will talk with the customer to evaluate this option.

I will keep in touch and let you know the results.

Thank you and regards!
