We have an older AX4-5i iSCSI SAN that used to be part of our VMware 3.5 production environment. We have recently decommissioned/moved all of the servers from the old environment and are left with this piece of hardware. I have been able to get the username/password/IP address to log into the device.
My plan was to use this AX4 as a storage device for our new ESXi 5 lab. So far everything I have seen on it looks OK, except that upon logging in I get the message "Disk Processor Enclosure - Faulted". Everything else checks out as normal.
I will admit I'm pretty new to EMC and SANs in general, but this seems like a good way to start familiarizing myself with them. My first question is: what exactly does this error message mean, and is it a showstopper?
I have disconnected all of the hosts from this SAN; could this be the reason that it is showing the faulted state?
Any help or guidance is appreciated.
The two entries at the bottom of your screen cap show Standby Power Supplies B and A as "Empty". These are the battery backup units that take the input power and provide output to the Storage Processors. They provide protection in case of power loss and keep the system cache from losing data when you lose power. It appears that these have been removed and the power plugged directly into the Storage Processors.
The array will work this way, but the errors appear because you have no Standby Power Supplies connected to the array. If you run in this mode and lose power, all writes currently in system cache will be lost, and data loss may occur.
The following link provides all the documentation for the AX4:
So basically that error message is just stating that this AX is running in a non-approved configuration. I don't think that the backup battery/power-loss risk will be a major issue, as this will only be used for testing/learning the new ESXi 5. I just wanted to make sure that there weren't any serious issues with the device.
I will have to check next time I'm down in the server room whether we actually don't have the backup units installed or whether they were just bypassed.
Just to add a bit to what Glen mentioned:
The array does "work", but performance will likely be suboptimal in this configuration. With no SPS installed, the AX4-5 will not enable write cache, meaning every write from your hosts to the AX4-5 must be completed to the disks before it is acknowledged back to the host. With RAID and no write cache you will generally see a performance penalty vs. DAS. The persistent and proven-safe write cache of the EMC midrange storage products is a key base feature. Do yourself a favor and install an SPS; you will be pleased with the difference between having one and not.
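To see why the write-cache difference is so dramatic, here is a rough sketch of the acknowledgment latency. With write-back caching the array acknowledges a write as soon as it lands in protected cache; with write-through, every write waits for the disks. All latency and I/O-size figures below are illustrative assumptions, not measured AX4 numbers:

```python
# Rough model of serialized, synchronous write throughput.
# Assumed figures (not measured on any AX4):
CACHE_ACK_MS = 0.5   # ack latency when write cache is enabled (write-back)
DISK_WRITE_MS = 15.0 # ack latency when every write must reach the disks
IO_SIZE_KB = 64      # assumed host I/O size

def throughput_mb_s(ack_latency_ms: float, io_size_kb: int) -> float:
    """Throughput of a single stream of one-at-a-time synchronous writes."""
    ios_per_second = 1000.0 / ack_latency_ms
    return ios_per_second * io_size_kb / 1024.0

cached = throughput_mb_s(CACHE_ACK_MS, IO_SIZE_KB)
uncached = throughput_mb_s(DISK_WRITE_MS, IO_SIZE_KB)
print(f"write-back:    {cached:6.1f} MB/s")
print(f"write-through: {uncached:6.1f} MB/s")
```

Under these assumed numbers a single write stream drops from roughly 125 MB/s to about 4 MB/s, which is why the write cache is such a key feature.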
You probably have only one SPS, and I would guess it is not cabled up to the storage processor if unfamiliar folks did the move.
Installation details are on Powerlink:
Home > Support > Product and Diagnostic Tools > CLARiiON Tools > CLARiiON AX Series
"Installation, Maintenance, and Support for your AX4-5 System"
Hopefully you are right and I have the SPS and it's just not connected right. I will have to check it out, because a quick Google search suggests a replacement SPS might run anywhere from 500 up. I think it may be tough getting my manager to approve that for a test environment.
Totally forgot that without the SPS the system write cache will be disabled. With write cache on the array disabled, bandwidth will be around 3-5 MB/s.
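As a rough sanity check on that 3-5 MB/s figure: with write cache off, a small write to a RAID 5 group turns into four back-end disk I/Os (read old data, read old parity, write new data, write new parity), and a single-stream workload pays that full penalty on every write before issuing the next one. The disk latency and I/O size below are assumptions for illustration, not AX4 specifications:

```python
# Back-of-the-envelope check of the 3-5 MB/s write-through figure.
# Assumed figures (illustrative only):
DISK_IO_MS = 5.0     # assumed single-disk I/O latency
RAID5_WRITE_IOS = 4  # read-modify-write penalty for a small RAID-5 write
IO_SIZE_KB = 64      # assumed host write size

# Worst case: the four back-end I/Os are effectively serialized.
latency_ms = DISK_IO_MS * RAID5_WRITE_IOS
throughput = (1000.0 / latency_ms) * IO_SIZE_KB / 1024.0
print(f"serialized write-through throughput ~ {throughput:.1f} MB/s")
```

Under these assumptions the model lands around 3 MB/s, in the same ballpark as the figure above; in practice some of those back-end I/Os overlap, which is why the real number can creep a bit higher.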
First of all, sorry for reviving an old thread, but I am currently struggling with an old AX4-5F FC array that used to be part of a customer's vSphere 5.0 production environment until the customer (Company A) filed for bankruptcy and, as a result, all service and maintenance contracts were terminated.
Another company (Company B) has recently purchased all ESXi hosts and the old AX4-5F FC array from Company A, so we have been asked to move them from Company A to Company B because the latter is planning to put them back into production.
When the AX4-5F array was still in production at Company A, I was aware that one of the SPS units was already faulted. After moving it to Company B and powering it back up, both SPS units are marked faulted, while all other components look fine.
We are working with the customer and EMC in order to purchase two EMC-certified replacement SPS and renew all expired maintenance contracts, but in the meantime we need to perform some virtual machine backup operations as well as put some of them back into production as soon as possible.
With particular regard to virtual machine backup operations, we are seeing backup throughput of around 3-5 MB/s, and all jobs report that the AX4-5F array is the bottleneck.
I completely understand that when all SPS units are faulted the array write cache is disabled, and that with write cache disabled there will be a performance decrease; however, I never expected such a huge hit, with bandwidth around 3-5 MB/s. Is this by design? Can someone please explain the reasons for such slow array performance when the array's write cache is disabled, compared to a standalone ESXi host with local disks and write cache turned off?
Any help would be greatly appreciated.
Thanks and Regards,
I can't explain why it would be THAT slow... especially on an FC model. That's crazy slow, slower than you should even get on a 10/100 iSCSI network. Are you able to see which component of the AX4 is the bottleneck? If there are processes like array initialization happening, that could slow things down a good deal, but even so it shouldn't be THAT slow 😕 Someone with more knowledge can hopefully answer that question. All I know is that it will run *some amount* slower without cached I/O (which, as you know, is disabled without at least one working SPS).
I would like to add that you can find perfectly good refurbished SPS modules from third-party vendors, for MUCH cheaper than you'd get them from EMC.