
Volume mappings to initiators

June 18th, 2015 21:00

Hi, I'm having a problem exposing ScaleIO volumes to our ESX hosts.

I have created a volume called "ScaleIO-Dev".

I have checked our VMware host configuration and enabled the iSCSI initiator, just as we do when hooking up to our existing ZFS-based SANs.

The name of the internal server itself is dnzakesx-backup01.dsldev.local, so logically the IQN, according to the naming standards, should be something like:

iqn.2015-06.dsldev.local:dnzakesx-backup01

When I try to map the volume to a SCSI initiator:

scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name iqn.2015-06.dsldev.local:dnzakesx-backup01 --mdm_ip a.b.c.d

I get this message back:

Error: initiator_name too long: 'iqn.2015-06.dsldev.local:dnzakesx-backup01'

When I shorten the iSCSI initiator name to something like iqn.2015-06:backup01, I get:

Error: MDM failed command.  Status: Could not find the SCSI Initiator

Why am I getting "initiator_name too long" on a perfectly valid DNS-based name?

Are there any DNS lookups/reverse DNS lookups taking place as part of the connection to validate the IQN?

Why am I not able to map the volume to the initiator?

If it helps, I'm running 3x CentOS 7 VMs with 3 NICs each: one management NIC and two for iSCSI traffic (on different VLANs).

cheers

Ashley

June 22nd, 2015 23:00

Hello Ashley, you can still present ScaleIO storage built on CentOS hosts to VMware host(s) without iSCSI or NFS. ScaleIO has a native SDC (Storage Data Client) for ESXi, implemented as a VIB extension, which is effectively a storage initiator (client) that communicates with the storage targets (servers). It acts as the front-end storage protocol and multipathing driver in place of iSCSI/NFS and NMP.


While the standard installation is converged, where everything is installed symmetrically using scripted wizards, you can decouple the system in many ways...


June 19th, 2015 05:00

I'll take a stab at this, since I read it as you trying to map an SIO volume from your SDS pool through native iSCSI to your ESX host. If that is the case, I tested this with the 1.30 version of SIO and it worked fine, but when I queried about its support, I found it to be unsupported (even though it works). If this is what you are trying to do, and I remember correctly, I had to use the command to add the new iSCSI initiator (add_iscsi_initiator) and then map the volume to the newly added iSCSI initiator. In the 1.32 user guide, I see references to some iSCSI support being deprecated, and the section on iSCSI initiators (which you might be reading in your 1.30 guide) appears to have been removed. I inquired about multipathing support when I was testing it a while back, and that is when I was told it wasn't supported, which I thought was too bad.
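
From memory, the sequence looked something like this (command and flag names as I recall them from 1.30, with a short alias for the initiator; double-check against scli --help on your build):

# register the ESX software iSCSI initiator with the MDM under a short alias
scli --add_scsi_initiator --initiator_name esx-backup01 --iqn iqn.2015-06:backup01 --mdm_ip a.b.c.d
# then map the volume to that alias rather than to the raw IQN string
scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name esx-backup01 --mdm_ip a.b.c.d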

Not sure if anything has changed with formal support for the provisioning above, but I will let product management chime in in case I am incorrect. If I am reading your question wrong, feel free to let me know.


June 21st, 2015 01:00

Hi,

Why are you even using iSCSI? iSCSI is deprecated and support for it will be removed soon (if it isn't removed already).

To expose devices to ESX, install our SDC VIB in the ESX host itself. This solution is more robust and gives much better performance.
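
Very roughly, the install looks like this (the bundle path/filename below is just a placeholder for whatever SDC package you download for your ESX build):

# copy the SDC offline bundle to the host, then install it
esxcli software vib install -d /vmfs/volumes/datastore1/scaleio-sdc-bundle.zip
# reboot the host afterwards so the scini kernel module loads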

Thanks,

Eran


June 21st, 2015 17:00

Thanks guys for your answers.

Just to be clear, we were hoping to trial an instance of ScaleIO on one of our whitebox SuperMicro-based converged units running vSphere 6, which currently runs OmniOS as a VM presenting storage back to the host itself so it acts as a converged storage unit (currently used as a backup target).

As ScaleIO requires a minimum of 3 nodes, I cannot make use of the ScaleIO plugin at this stage on that single host, so I deployed 3x CentOS VMs onto which I then deployed the ScaleIO components. But it appears that, regardless of what I do, this functionality no longer works under 1.32 against a vSphere 6 host (even in a single-NIC configuration). One of the issues seems to be that dynamic discovery of the iSCSI targets doesn't work as expected.

I tried to manually create the initiator representing the client:

scli --add_scsi_initiator --initiator_name backup01 --iqn iqn.2015-06:backup01

and then map that with:

scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name backup01 --mdm_ip a.b.c.d

Successfully mapped volume ScaleIO-Dev to SCSI Initiator backup01 with LUN number 0

But then, on the ESX host with its IQN set to iqn.2015-06:backup01, I'm unable to scan the storage, even though the iSCSI configuration is bound to that VMkernel interface.
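
For completeness, on the ESX side I was doing essentially the following (vmhba33 and the portal address are placeholders for our actual software iSCSI adapter and SDS/target IP):

# point dynamic discovery at the ScaleIO iSCSI target portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.10:3260
# then rescan the adapter for new devices
esxcli storage core adapter rescan --adapter=vmhba33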

If there is no multipathing support at all for the iSCSI target, that severely limits its usefulness, to be honest, compared to all the other iSCSI targets I have worked with.

It quickly gets to the point where we might as well either stick with ZFS-based converged storage units, or run a manual configuration of the Linux IO target (LIO) on top of an open distributed file system.

I don't understand the logic in removing iSCSI presentation to VMware hosts - surely software-defined storage should have the flexibility to be deployed in whatever way the client wishes, even if the performance is obviously not optimal.


June 22nd, 2015 00:00

Hi,

-I understand your frustration about the lack of iSCSI support. I agree that for several use cases, using ZFS storage with iSCSI makes sense. Nevertheless, ScaleIO offers capabilities that give you robustness, elasticity, high availability/redundancy, scalability and performance that cannot be easily achieved with ZFS storage.

-Now, having finished my marketing pitch, I would like to offer a short history lesson about ScaleIO and ESX:

  In the beginning of time, when ScaleIO was a startup, there were only two possible ways to expose storage to ESX:

      1. iSCSI
      2. NFS

  iSCSI was chosen as it was block oriented and had some support for multipathing, which alleviated the inherent bottleneck of running an NFS server on one node. I then wrote a multi-pathed iSCSI target implementation. The multipathing ability could theoretically scale to hundreds of paths; in reality, we usually pointed each ESX at two paths only (the local VM, and one remote). While this solution worked, it had many downsides:

   -Configuration: Configuring it was a pain. Some of the configuration pains were removed with our vSphere plugin, but not all of them.

   -Performance: The performance of the iSCSI stack wasn't good (both initiator and target are to blame). In addition, the architecture forced an additional network hop (iSCSI initiator -> iSCSI target) that increased latency compared to the native ScaleIO I/O path.

   -Robustness: While the ESX multipathing stack works, it doesn't like paths disappearing. With ScaleIO, every reboot caused a path to disappear, and it was sometimes a pain to convince ESX to reclaim failed paths.

At the beginning of 2014, we endeavored to write our own native SDC for ESX to remove the iSCSI dependency, as we believed iSCSI was only a temporary solution with many shortcomings (most of them described above). As of version 1.30, we have a native SDC solution, and as of 1.32, iSCSI isn't supported anymore.

-The reason you don't see our iSCSI target is that you don't have the iSCSI target rpm, which is responsible for exposing the targets. This rpm is no longer supported and isn't available.

I do encourage you to try installing the native SDC and see how it works for you. I believe you will be surprised by it and will wonder how you stayed with iSCSI for so many years.

One last comment: ScaleIO is inherently multi-pathed (in fact, it uses many more paths to the storage than anything else in existence today). The SDC talks to all the relevant SDS nodes directly, all the time, and you can configure more than one IP per SDS, thus achieving multipathing on the path between the SDC and each SDS as well.

Thanks,

Eran


June 22nd, 2015 02:00

Thanks Eran,

What you say makes perfect sense and I do believe what you and your team have created has the potential for massive disruption in the storage space.

However, in terms of iSCSI, the reason we have chosen to use it up to now has always been its lack of vendor lock-in, its flexibility, and the fact that it is OS agnostic. That flexibility is important to us in any product we choose to deploy in our environment, particularly in the cloud era. At the moment we are a VMware shop, but even that could potentially change going forward.

Our tier 1 storage up to now has always been supplied by FC-connected SANs, but I have been looking at technology to help us reduce our dependency on Fibre Channel and give us greater flexibility and scale-out capability, as well as reduce costs.

While we'd love to try ScaleIO the way it's designed to be deployed in a VMware environment, we cannot work on a business case until we understand more about the quirks of the product, and I was hoping to become familiar with the management and operational framework without having to run the product inside nested vSphere 6 hosts.

I don't fully understand how the ScaleIO product can provide back-end storage to other deployment types like Xen/Hyper-V/etc. without the storage being presented as iSCSI (unless you are running similar native drivers on those platforms as well?) - and if that is the case, I don't understand how iSCSI support can be removed from the product.

Another big problem is that most of the information on the net refers to versions prior to 1.32, so a lot of people are going to hit the same sort of frustration as we have. I may have missed something, but I can't recall seeing any information about the iSCSI target functionality being removed in 1.32. It would be easier if the iSCSI functionality could be made available in 1.32 with a warning that this style of deployment is not supported or recommended, particularly with reference to VMware.

I'd love to be able to run my own benchmarking tests comparing the native SDC solution to the same product except running as an iSCSI target - but this is not possible under 1.32.

cheers

Ashley


June 22nd, 2015 04:00

Hi,

We have clients for Linux and for Windows (NT), which should cover all your interop concerns (unless you are aiming at HP-UX/Solaris or other esoteric Unixens).

I understand the confusion about iSCSI and can only apologize for that. I do urge you to try the native solution.

BTW, in your 3x CentOS config, you can install an SDC in one of those Linux VMs, expose a volume to it, and run against it directly.
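
Something along these lines (the rpm filename and IPs below are placeholders; adjust them to the package you downloaded and your MDM address):

# on the CentOS VM: install the SDC rpm, pointing it at the MDM
MDM_IP=a.b.c.d rpm -i EMC-ScaleIO-sdc-1.32-0.el7.x86_64.rpm
# on the MDM: map the volume to that SDC by its IP
scli --map_volume_to_sdc --volume_name ScaleIO-Dev --sdc_ip <sdc_ip> --mdm_ip a.b.c.d
# the volume then appears on the CentOS VM as a /dev/sciniX block device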

Thanks,

Eran


June 22nd, 2015 20:00

Thanks Eran. The lack of iSCSI surely means that although the storage can be presented by the CentOS hosts, it can't be presented to VMware hosts, as that would need to be via iSCSI or NFS (in the case where the SDC can't be run under VMware - our use case, as we only have a single host for the POC) - unless we run standard LIO over a ScaleIO-mounted volume.
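
(By "standard LIO" I mean roughly the following targetcli sketch on one of the CentOS VMs, after mapping the ScaleIO volume to that VM's SDC; the backing device, target IQN and ACL entry are placeholders for our environment.)

# export the ScaleIO-backed block device over iSCSI via LIO
targetcli /backstores/block create name=sio-dev dev=/dev/scinia
targetcli /iscsi create iqn.2015-06.local.dsldev:sio-gw
targetcli /iscsi/iqn.2015-06.local.dsldev:sio-gw/tpg1/luns create /backstores/block/sio-dev
targetcli /iscsi/iqn.2015-06.local.dsldev:sio-gw/tpg1/acls create iqn.2015-06:backup01
targetcli saveconfig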

Are there any technical reasons why iSCSI couldn't be re-introduced into the product architecture as an option - or is the decision linked to strategy?

We'll have a discussion on our side to see if there is a way we can proceed, but I suspect it's going to be a long process.

cheers

Ashley


June 23rd, 2015 02:00

Thanks guys, this is great and gives a variety of deployment options that should help everyone.

Bearing in mind most VMware shops will be like us and use FC-attached storage and/or iSCSI or NFS storage, it would be great if there were a post on the net outlining the process to stand up 3x CentOS 7 VMs and configure the SDC VIB extension on VMware to connect to them (I've put a rough sketch of my understanding of the ESXi side at the end of this post). Many people, including ourselves, will not be familiar with the deployment flexibility of ScaleIO, and this is one of its strong points IMHO.

This would be particularly useful while the net is flooded with setup guides for old releases of ScaleIO referring specifically to the iSCSI drivers (which, as we know, don't exist in 1.32).
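
For anyone who does write that up, here is my rough understanding of the ESXi side (the bundle path is a placeholder, and the scini module parameter names are just what I've seen referenced for the ESX SDC - please verify them against the current deployment guide):

# install the SDC offline bundle on each ESXi host
esxcli software vib install -d /vmfs/volumes/datastore1/scaleio-sdc-bundle.zip
# point the scini module at the MDM(s) and give this host a unique GUID
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<generated-uuid> IoctlMdmIPStr=<mdm_ip_1>,<mdm_ip_2>"
# reboot, then map volumes to this host from the MDM with scli as usual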
