We got our new VNX 5300 Unified (file and block) system in house a couple of weeks ago.
I have two problems with the setup and hope someone here can answer them.
iSCSI setup through Datamover to VMware
We have a vSphere 5 system we want to connect to the VNX. Each Data Mover has 4 x 1 GbE interfaces, and these are configured with one IP address.
In the ESX servers I have configured the VMkernel network and am trying to discover the iSCSI targets on the VNX, but nothing is appearing.
What am I missing here? I have block LUNs available, but they do not appear in the iSCSI software storage adapter on the ESX servers.
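For anyone else chasing the same symptom, a quick way to see what the ESXi side thinks is going on is to check the software initiator from the ESXi shell. This is just a sketch; the adapter name `vmhba33` is an example and will differ on your hosts:

```shell
# Is the software iSCSI initiator enabled at all?
esxcli iscsi software get

# Find the software iSCSI adapter name (look for "iscsi_vmk"):
esxcli iscsi adapter list

# List the send-target discovery addresses configured on that adapter:
esxcli iscsi adapter discovery sendtarget list --adapter=vmhba33
```

If the send-target list is empty, discovery was never configured; if it points at the Data Mover IPs, that explains why nothing shows up (see the answers below about where iSCSI actually lives on a VNX).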
Vault Drives and Storage Pool
How do I add the first four drives (the drives the OS is installed on) to a Storage Pool?
I can only create a normal RAID group with these drives, but I want to add them to a Storage Pool.
Thanks in advance for all the help I can get on this.
The Data Mover is for CIFS and NFS traffic; for iSCSI you would set up the iSCSI connections on the block side of the VNX.
There should be iSCSI connections on the two SPs if you ordered them with the iSCSI modules.
Not sure about the vault drives, but I wouldn't recommend putting them in a pool anyway; each vault drive gives up a portion of its space to the internal system LUNs where the OE is installed.
It's possible that, because they are vault drives and don't have the same usable capacity as the other pool drives, the system won't allow them in.
I would usually leave the vault drives in a RAID group on their own and NOT use them for any production LUNs.
I will assume our sales guy has misconfigured the system.
I am missing the GbE modules in both SP's.
And I assume I can still use regular Ethernet equipment with the iSCSI modules in the SPs?
Would the configuration steps be the same as for the NICs on the Data Mover?
Everything is done through Unisphere (mostly).
Create your pool and LUNs, and create a storage group for your VMware farm.
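The storage group part can also be done from the Navisphere/Unisphere CLI if you prefer scripting it. A minimal sketch, assuming naviseccli is installed on a management host; the SP address `10.0.0.10`, group name `VMware_Farm`, host name `esx01`, and LUN numbers are all placeholders:

```shell
# Create a storage group for the ESX hosts:
naviseccli -h 10.0.0.10 storagegroup -create -gname VMware_Farm

# Present array LUN 12 to the group as host LUN 0:
naviseccli -h 10.0.0.10 storagegroup -addhlu -gname VMware_Farm -hlu 0 -alu 12

# Connect an already-registered ESX host to the group:
naviseccli -h 10.0.0.10 storagegroup -connecthost -host esx01 -gname VMware_Farm
```

The host has to have logged in and registered with the array (or been registered manually) before -connecthost will find it.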
You should be able to see the SP iSCSI interfaces; give each iSCSI interface an IP address, have your VMware servers connect to these, and then add the VMware servers into the storage group with all your LUNs.
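Roughly, the two halves of that look like this on the command line. This is a sketch from memory, not a definitive procedure; the SP management address, port IDs, iSCSI IPs, and the adapter name `vmhba33` are all example values:

```shell
# Array side: give SP A iSCSI port 0 an address (naviseccli connection -setport):
naviseccli -h 10.0.0.10 connection -setport -sp a -portid 0 \
    -address 10.0.1.50 -subnetmask 255.255.255.0 -gateway 10.0.1.1

# ESXi side: point send-target discovery at that SP port and rescan:
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.1.50:3260
esxcli storage core adapter rescan --adapter=vmhba33
```

After the rescan the VNX target should show up under static discovery, and once the host is in the storage group the LUNs appear as devices.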
For CIFS and NFS you would usually connect the GbE connections to your switches using an LACP group; you can configure virtual interfaces on this LACP group and then configure file systems for your CIFS and NFS shares.
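On the file side that is done from the Control Station. A rough sketch, assuming the classic Celerra-style commands carried over to VNX File; the device names (`cge0`..`cge3`), trunk name `lacp0`, interface name, and addresses are examples:

```shell
# Create an LACP trunk over the Data Mover's four GbE ports:
server_sysconfig server_2 -virtual -name lacp0 -create trunk \
    -option "device=cge0,cge1,cge2,cge3 protocol=lacp"

# Put an IP interface on the trunk for CIFS/NFS traffic:
server_ifconfig server_2 -create -Device lacp0 -name cifs1 \
    -protocol IP 10.0.2.60 255.255.255.0 10.0.2.255
```

The switch ports on the other end need to be in a matching LACP port channel, or the trunk will not come up.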
Have you worked on CLARiiON or Celerra before?
And the vault thing sounds correct, since the drives are "smaller" due to the OE.
It's just annoying that the system has been configured with 2 TB drives in the vault.
They will be more or less "useless" in our environment with Fast Cache and Auto Tiering.
Does anyone have a second opinion on this?
First EMC system.
The NFS and CIFS part is already working like a charm.
I have problems with the block part, but no wonder, since I was trying to initiate iSCSI connections through the DM.
If you must use iSCSI through the Data Movers, you can request an RPQ through your EMC team and iSCSI can be re-enabled (it used to be available on older Celerra models). The goal is to move away from using Data Movers for iSCSI because of its limitations (a 2 TB max LUN size is one of them).