As per the title, really. I posted a recent item about the CX300 which received some helpful replies. I have spoken to the powers that be, and we have decided to stick with the CX300 for a little while longer. Since we want to stay on the HCL for VMware, we have agreed to use ESX 3.5i U5 hosts until we upgrade the SAN...
As mentioned in the other post, I have now applied for a Powerlink account. I asked for it to be upgraded and received an email from EMC to say it's been done, but I still cannot see anything above a limited account?? My issue now, of course, is that I have no manuals whatsoever at the moment...
From the main Navisphere console I can see various headings... it looks like you create a RAID group, then create a LUN within that RAID group, then assign that LUN to a storage group??
As far as I understand it, for vSphere (4.x) and above, all I need to do is install the host and set up switch zoning, and the host will then automatically register its initiator records with the array. At that point I simply put it into a storage group and rescan?
The issue I have, of course, is that I am running 3.5i... I have seen mentions of a Navisphere agent, manual registration, etc. etc. ... I have no idea...
Also my next question is regarding multipathing...
I believe that because I am using 3.5i I am limited to the three built-in VMware multipathing policies: Fixed, MRU, and RR... I believe MRU is the one I should be using...?
And again, once I move to vSphere... with a newer array I could also use third-party path-selection packages such as PowerPath/VE?
As you can see... lots of questions... appreciate any help!
I have two old CX300s in my lab environment running against 8 ESX/ESXi hosts (3.5 to 4.1U1).
If you have access to the graphical Navisphere interface via a web browser, the setup is pretty easy.
You need to:
1) Define a Raid Group
2) Define a Storage Group
3) Bind a LUN on that RAID group and add it to the Storage Group
4) Connect the hosts to the Storage Group
Rescan devices, and if all is fine you should see your new LUN. Then, in ESX/vSphere, click Add Storage and create a new VMFS datastore on the LUN.
You should use Fixed or MRU with a CX300; RR is not necessarily faster on the CX, depending of course on your LUN/VM setup.
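The ordering of those four steps matters: a LUN has to live on an existing RAID group before it can join a storage group, and a host only sees LUNs in groups it is connected to. As a rough sketch of those dependencies (a toy Python model, not an EMC API):

```python
# Toy model of the Navisphere provisioning order described above.
# Illustrative only -- class and method names are made up, not EMC's.

class Array:
    def __init__(self):
        self.raid_groups = set()
        self.luns = {}            # lun_id -> raid_group_id
        self.storage_groups = {}  # name -> {"luns": set(), "hosts": set()}

    def create_raid_group(self, rg_id):
        self.raid_groups.add(rg_id)

    def bind_lun(self, lun_id, rg_id):
        if rg_id not in self.raid_groups:
            raise ValueError("create the RAID group before binding a LUN on it")
        self.luns[lun_id] = rg_id

    def create_storage_group(self, name):
        self.storage_groups[name] = {"luns": set(), "hosts": set()}

    def add_lun_to_group(self, name, lun_id):
        if lun_id not in self.luns:
            raise ValueError("bind the LUN before adding it to a storage group")
        self.storage_groups[name]["luns"].add(lun_id)

    def connect_host(self, name, host):
        self.storage_groups[name]["hosts"].add(host)

    def visible_luns(self, host):
        # roughly what an ESX rescan would surface for this host
        return sorted(
            lun
            for sg in self.storage_groups.values()
            if host in sg["hosts"]
            for lun in sg["luns"]
        )

array = Array()
array.create_raid_group(0)             # 1) define a RAID group
array.create_storage_group("ESX_SG")   # 2) define a storage group
array.bind_lun(10, 0)                  # 3) bind a LUN, add it to the group
array.add_lun_to_group("ESX_SG", 10)
array.connect_host("ESX_SG", "esx01")  # 4) connect the host
print(array.visible_luns("esx01"))     # [10]
```

Running the steps out of order (e.g. binding a LUN on a RAID group that does not exist yet) raises an error in this sketch, which mirrors what the Navisphere GUI prevents you from doing.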
Hope this helps
>>You should use Fixed or MRU with a CX300; RR is not necessarily faster on the CX, depending of course on your LUN/VM setup.
Not quite true.
EMC only recommends Fixed and Round Robin (Fixed preferred) for the CX4.
MRU and Round Robin are preferred for the CX3 and older.
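The recommendations in this thread can be captured as a simple lookup (my reading of the posts above, not an official EMC support matrix -- check the array's documentation before relying on it):

```python
# Path-policy recommendations as stated in this thread (not an official matrix).
# First entry in each list is the preferred policy.
RECOMMENDED_POLICIES = {
    "CX4":   ["Fixed", "RoundRobin"],  # Fixed preferred
    "CX3":   ["MRU", "RoundRobin"],    # MRU preferred
    "CX300": ["MRU", "Fixed"],         # RR not necessarily faster here
}

def preferred_policy(array_family):
    return RECOMMENDED_POLICIES[array_family][0]

print(preferred_policy("CX4"))    # Fixed
print(preferred_policy("CX300"))  # MRU
```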
Okay Geraold, point taken...
It's been a while, and I thought Round Robin was supported from the CX3 onwards, but not for the CX300. Also, my tests with Round Robin on the CX300 showed worse rather than better performance compared to Fixed/MRU.
I don't really get why Fixed shouldn't be used over MRU; from a VMware perspective they basically do the same thing in a failover scenario. The big difference is that when you start an ESX host with MRU selected, it might pick a different path.
If you use MRU, you might end up (if Murphy's Law hits you) with a scenario where all traffic goes down one path, as the same path might (again, this is a random/worst-case point of view) be the most recently used one for every host.
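The key behavioural difference being described is that Fixed fails back to its preferred path when it returns, while MRU stays on whatever worked last. A small simulation (simplified path-selection logic I wrote to illustrate the point, not VMware's actual PSP code) shows two hosts converging onto one path under MRU after a failure and recovery:

```python
# Sketch of the "all traffic down one path" MRU scenario described above.
# Simplified path-selection logic, not VMware's actual multipathing code.

class FixedHost:
    def __init__(self, preferred, paths):
        self.preferred = preferred
        self.paths = paths

    def active_path(self, up):
        # Fixed fails back to the preferred path as soon as it returns
        if self.preferred in up:
            return self.preferred
        return next(p for p in self.paths if p in up)

class MRUHost:
    def __init__(self, start, paths):
        self.current = start
        self.paths = paths

    def active_path(self, up):
        # MRU keeps the most recently used working path -- no failback
        if self.current not in up:
            self.current = next(p for p in self.paths if p in up)
        return self.current

paths = ["SPA", "SPB"]
fixed = [FixedHost("SPA", paths), FixedHost("SPB", paths)]
mru = [MRUHost("SPA", paths), MRUHost("SPB", paths)]

# SPA fails, then recovers
for up in (["SPA", "SPB"], ["SPB"], ["SPA", "SPB"]):
    f = [h.active_path(up) for h in fixed]
    m = [h.active_path(up) for h in mru]

print(f)  # ['SPA', 'SPB'] -- Fixed hosts are balanced again after failback
print(m)  # ['SPB', 'SPB'] -- both MRU hosts stayed on the surviving path
```

After the failed path comes back, the Fixed hosts are spread across both paths again, while both MRU hosts are still hammering the one path that survived -- exactly the worst case described above.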
Anyway, the CX300 works fine with ESX, but it is a bit old-fashioned and I would not necessarily use it for production.
On the other hand, I have had people trying to use ReadyNAS devices in production, and even though they are supported, I would prefer a CX300 over a ReadyNAS, as it still clearly gives you more performance and scalability (though it may no longer be supported).
But that's only my two cents....
Not trying to hijack the thread but I have a question. Comdivisionys, you mentioned that you were running a CX-300 in an environment that had 3.5 through 4.1U1. How was that working out? I'm guessing you were running the different versions of ESX on the different hosts but had the VMs that were on the SAN on the lowest version (3.5)?
I have a CX-300 that I was going to use in a 4.1U1 environment after an upgrade from 3.5U5, but after checking the VMware HCL it sounds like it might not work. I'm going to see what the latest FLARE version I can put on my CX-300 is, in case that is the limiting factor for the 4.1U1 upgrade, but I would like to hear more about your setup.
The fact that a system is not shown on the HCL does NOT mean it doesn't work anymore; it means it is NOT supported. We have run them in a lab environment for quite a few years without any problems, however this is a lab environment. In a full production environment I would be careful: NOT supported means you may be on your own if something goes wrong.
At the moment I have 8 ESX hosts; 2 are on 3.5 and the remaining 6 are all on 4.1U1, all connected through 2 Brocade switches to two different CX300s with the latest FLARE (the last one that was available)...
So far I have never had any issues with the CX300, and neither have any customers I know of who still run those models in their environment. However, in many cases people took the CX300 for their lab/test environments, which means they do not expect production-level uptime.
I hope this helps, if you have further questions, feel free to contact me...