
October 27th, 2011 03:00

iSCSI and Storage Pool Setup

Hi!

We got our new VNX 5300 Unified (file and block) system in house a couple of weeks ago.

I have two problems with the setup and hope someone here can answer them.

iSCSI setup through Datamover to VMware

We have a vSphere 5 system we want to connect to the VNX. I have 4 x 1GbE interfaces in each Datamover (DM). These are configured with one IP address.

On the ESX servers I have configured the VMkernel network and am trying to discover the iSCSI targets on the VNX, but nothing appears.

What am I missing here? I have block LUNs available, but they do not appear in the iSCSI software storage adapter on the ESX servers.
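In case it helps, this is roughly what I have tried on one of the ESX hosts (the adapter name vmhba33 and the target address are from my setup):

    # enable the software iSCSI initiator (ESXi 5)
    esxcli iscsi software set --enabled=true
    # point dynamic discovery at the Datamover interface IP
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.1.1.50:3260
    # rescan for new devices - nothing shows up
    esxcli storage core adapter rescan --adapter=vmhba33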

Vault Drives and Storage Pool

How do I add the first four drives (the drives the OS is installed on) to a Storage Pool?

I can only create a normal RAID group with these drives, but I want to add them to a Storage Pool.

Thanks in advance for all the help I can get on this.

115 Posts

October 27th, 2011 03:00

The Datamover is for CIFS and NFS traffic; for iSCSI you would set up the iSCSI connections on the block part of the VNX.

There should be iSCSI connections on the two SPs if you ordered them with the iSCSI modules.
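If the modules are there, you assign each SP iSCSI port an IP address, either in Unisphere or with naviseccli. A rough sketch, where the SP address, port ID and subnet are placeholders:

    # assign an IP to iSCSI port 0 on SP A (repeat for the other ports and SP B)
    naviseccli -h <SP_A_IP> connection -setport -sp a -portid 0 -address 192.168.50.10 -subnetmask 255.255.255.0 -gateway 192.168.50.1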

P

115 Posts

October 27th, 2011 04:00

Not sure about the vault drives; I wouldn't recommend putting them in a pool anyway. Each vault drive gives up a portion of its space to the internal system LUN where the OE is installed.

It's possible that because they are vault drives and don't have the same usable capacity as the other pool drives, the system won't allow them in.

I would usually leave the vault drives in a RAID group on their own and NOT use them for any production LUNs.
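For reference, a minimal sketch of that with naviseccli; the RAID group ID is just an example, and 0_0_0 through 0_0_3 are the vault drive positions on a VNX:

    # create a RAID group containing only the four vault drives (bus 0, enclosure 0, disks 0-3)
    naviseccli -h <SP_A_IP> createrg 0 0_0_0 0_0_1 0_0_2 0_0_3
    # the RAID type is set later, when the first LUN is bound on this group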

115 Posts

October 27th, 2011 04:00

Everything is done through Unisphere (mostly).

Create your pool and LUNs, and create a storage group for your VMware farm.

You should be able to see the SP iSCSI interfaces and give each one an IP address. Your VMware servers connect to these, and then you add each VMware server into the storage group with all your LUNs.
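Roughly, the block-side and ESX-side steps would look something like this; the storage group name, LUN numbers, host name and addresses are just examples:

    # create a storage group for the VMware farm, add a LUN, connect a registered host
    naviseccli -h <SP_A_IP> storagegroup -create -gname ESX_Farm
    naviseccli -h <SP_A_IP> storagegroup -addhlu -gname ESX_Farm -hlu 0 -alu 6
    naviseccli -h <SP_A_IP> storagegroup -connecthost -host esx01 -gname ESX_Farm

    # on each ESX host, point dynamic discovery at an SP iSCSI port IP and rescan
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.50.10:3260
    esxcli storage core adapter rescan --adapter=vmhba33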

For CIFS and NFS you would usually connect the GbE ports to your switches using an LACP group; you can configure virtual interfaces on this LACP group and then configure file systems for your CIFS and NFS shares.
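A rough sketch of that from the Control Station; the device names cge0-cge3, the trunk name and the IP details are assumptions for a typical setup:

    # create an LACP trunk over the four GbE ports on Datamover server_2
    server_sysconfig server_2 -virtual -name lacp0 -create trk -option "device=cge0,cge1,cge2,cge3 protocol=lacp"
    # put an IP interface on the trunk for CIFS/NFS traffic (address, mask, broadcast)
    server_ifconfig server_2 -create -Device lacp0 -name nas1 -protocol IP 10.1.1.60 255.255.255.0 10.1.1.255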

Have you worked on CLARiiON or Celerra before?

4 Posts

October 27th, 2011 04:00

I will assume our sales guy has misconfigured the system.

I am missing the GbE modules in both SPs.

And I assume I can still use regular Ethernet equipment with the iSCSI modules in the SPs?

Would the configuration steps be the same as for the NICs on the Datamover?

4 Posts

October 27th, 2011 04:00

And the vault thing sounds correct since the drives are "smaller" due to the OE.

Just annoying that the system has been configured with 2TB drives in the vault.

They will be more or less "useless" in our environment with Fast Cache and Auto Tiering.

Does anyone have a second opinion on this?

4 Posts

October 27th, 2011 04:00

Nope.

First EMC system.

The NFS and CIFS part is already working like a charm.

I have problems with the block part, but no wonder, since I have been trying to initiate iSCSI connections through the DM.

20.4K Posts

October 27th, 2011 05:00

If you must use iSCSI through the Datamovers, you can request an RPQ through your EMC team and iSCSI can be re-enabled (it used to be available on older Celerra models). The goal is to get away from using Datamovers for iSCSI because of the limitations (the 2TB max LUN size is one of them).
