February 15th, 2012 10:00

AX4-5i Disk Processor Enclosure Faulted

We have an older AX4-5i iSCSI SAN that used to be part of our VMware 3.5 production environment. We have recently decommissioned/moved all of the servers from the old environment and are left with this piece of hardware. I have been able to get the username/password/IP address to log into the device.

My plan was to use this AX4 as a storage device for our new ESXi 5 lab. So far everything I have seen looks OK, except that upon logging into it I get the message "Disk Processor Enclosure - Faulted". Everything else checks out as normal.

I will admit I'm pretty new to EMC and SANs in general, but this seems like a good way to start familiarizing myself with them. My first question is: what exactly does this error message mean, and is it a showstopper?

I have disconnected all of the hosts from this SAN; could this be the reason it is showing the faulted state?

Any help or guidance is appreciated.

1 Attachment

4.5K Posts

February 15th, 2012 12:00

The two entries at the bottom of your screen cap show Standby Power Supplies B and A as "Empty". These are the battery backup units that take the input power and provide output power to the Storage Processors. They provide protection in case of power loss and keep the system cache from losing data when power is lost. It appears that these have been removed and the power plugged directly into the Storage Processors.

The array will work this way, but the errors appear because you have no Standby Power Supplies connected to the array. If you run in this mode and lose power, all writes currently in system cache will be lost and data loss may occur.

The following link provides all the documentation for the AX4:

https://support.emc.com/products/CLARiiONAX4-5

glen

61 Posts

February 15th, 2012 13:00

Just to add a bit to what Glen mentioned:

The array does "work", but performance will likely be suboptimal in this configuration. With no SPS installed, the AX4-5 will not enable write cache, meaning every write from your hosts to the AX4-5 must be completed to the disks before it is acknowledged back to the host. With RAID, you will generally see a performance penalty vs. DAS without write cache. The persistent and proven-safe write cache of the EMC midrange storage products is a key base feature. Do yourself a favor and install an SPS; you will be pleased with the difference between having one vs. not.
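
To make the penalty concrete, here is a rough back-of-the-envelope model of what the host sees with write cache on vs. off. The latency figures below are my own assumptions for illustration, not measured AX4-5 numbers:

```python
# Rough model of write acknowledgment latency, write-back vs. write-through.
# All figures are illustrative assumptions, not AX4-5 specs.

CACHE_ACK_MS = 0.5    # assumed: ack as soon as the write lands in SP cache
DISK_IO_MS = 7.0      # assumed: one 15K-rpm disk operation (seek + rotation)
RAID5_IOS = 4         # small RAID 5 write: read data, read parity, write both

def write_latency_ms(write_cache_enabled):
    """Time until the host gets its ack for a single small write."""
    if write_cache_enabled:
        return CACHE_ACK_MS
    return DISK_IO_MS * RAID5_IOS  # host waits on the full read-modify-write

for enabled in (True, False):
    lat = write_latency_ms(enabled)
    mb_per_s = (64 / 1024) / (lat / 1000)  # one outstanding 64 KB write stream
    print(f"write cache {'on ' if enabled else 'off'}: "
          f"{lat:4.1f} ms/write, ~{mb_per_s:6.1f} MB/s per stream")
```

With cache off, a single write stream drops into the low single digits of MB/s, which is exactly the range people report on these arrays.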

You probably have only one SPS, and I would guess it is not cabled up to the storage processor if unfamiliar folks did the move.

Installation details are on Powerlink:

Home > Support > Product and Diagnostic Tools > CLARiiON Tools > CLARiiON AX Series

"Installation, Maintenance, and Support for your AX4-5 System"

4 Posts

February 15th, 2012 13:00

Hopefully you are right and I have the SPS and it's just not connected correctly. I will have to check it out, because a quick Google search suggests a replacement SPS might be anywhere from $500 up. I think it may be tough getting my manager to approve that for a test environment.

4.5K Posts

February 15th, 2012 13:00

Totally forgot that without the SPS the system write cache will be disabled. With write cache on the array disabled, bandwidth will be around 3-5 MB/s.

glen

4 Posts

February 15th, 2012 13:00

So basically that error message is just stating that this AX is running in a non-approved configuration. I don't think the backup battery/power-loss issue will be a major concern, as this will only be used for testing/learning the new ESXi 5. I just wanted to make sure there weren't any serious issues with the device.

I will have to check next time I'm down in the server room whether we actually don't have the backups installed or they were just bypassed.

1 Rookie • 204 Posts

November 6th, 2013 12:00

The link to the AX4-5i documentation above appears to be broken; can anyone re-post a link to that manual?

4.5K Posts

November 6th, 2013 13:00

Try this one:

EMC CLARiiON AX4-5 support

glen

30 Posts

March 29th, 2014 10:00

Hello Glen,


First of all, sorry for reviving an old thread, but I am currently struggling with an old AX4-5F FC array that used to be part of a customer's vSphere 5.0 production environment until the customer (Company A) filed for bankruptcy and, as a result, all service and maintenance contracts lapsed.


Another company (Company B) has recently purchased all the ESXi hosts and the old AX4-5F FC array from Company A, so we have been asked to move them from Company A to Company B because the latter is planning to put them back into production.


When the AX4-5F array was still in production, before moving it from Company A to Company B, I was aware that one of the SPS was already faulted. After moving it to Company B and firing it back up, both SPS are marked as faulted, while all other components look fine.


We are working with the customer and EMC to purchase two EMC-certified replacement SPS and renew all expired maintenance contracts, but in the meantime we need to perform some virtual machine backup operations and put some of the VMs back into production as soon as possible.


With particular regard to virtual machine backup operations, we are actually seeing backup throughput of around 3-5 MB/s, and all jobs report that the AX4-5F array is the bottleneck.


I completely understand that when all SPS are faulted the array write cache will be disabled, and that with write cache disabled there will be a performance decrease; however, I never expected such a huge performance hit, with bandwidth around 3-5 MB/s. Is this by design? Can someone please explain the reasons for such slow array performance when write cache is disabled, compared to a standalone ESXi host equipped with local disks and no write cache?


Any help would be greatly appreciated.


Thanks and Regards,


Massimiliano

1 Rookie • 204 Posts

March 31st, 2014 06:00

Massimiliano,

I can't explain why it would be THAT slow... especially on an FC model. That's crazy slow, slower than you should even get on a 10/100 iSCSI network. Are you able to see which component of the AX4 is the bottleneck? If there are processes like array initialization happening, that could slow things down a good deal, but even so it shouldn't be THAT slow :/ Someone with more knowledge can hopefully answer that question. All I know is that it will run *some amount* slower without cached I/O (which, as you know, is disabled without at least one working SPS).

I would like to add that you can find perfectly good refurbished SPS modules from third-party vendors for MUCH less than you'd pay EMC.

Good luck!

30 Posts

April 19th, 2014 09:00

Hello mtexter,


First of all, thank you for taking the time to reply to my question. I have been very busy lately and, as a result, unable to follow up on this issue.


After completing all virtual machine backup operations, I went to the customer yesterday to destroy the old disk pools and virtual disks, create new ones, and install ESXi 5.5 on all hosts. I created a new RAID 5 (6+1) disk pool using Drive Slots 6 to 12, leaving the Vault Drives (the first 4 drives) unconfigured and assigning Drive Slot 5 as a hot spare (the DPE has 12x 300 GB 15K SAS drives). Then I created a new 800 GB LUN using the vSphere Client and formatted it as a VMFS datastore.
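
As a quick sanity check on the layout above (the ~268 GB formatted capacity per 300 GB drive is my assumption, not a number from EMC documentation):

```python
# Quick sanity check of the new pool layout described above.
drives      = 12   # DPE: 12x 300 GB 15K SAS
vault       = 4    # first four drives left unconfigured (vault)
hot_spare   = 1    # drive slot 5
raid_group  = drives - vault - hot_spare   # 7 drives -> RAID 5 (6+1)
data_drives = raid_group - 1               # one drive's worth of parity
USABLE_GB_PER_DRIVE = 268                  # assumed formatted capacity
pool_gb = data_drives * USABLE_GB_PER_DRIVE
print(f"RAID 5 ({data_drives}+1): ~{pool_gb} GB usable")  # ~1608 GB
# plenty of room for the 800 GB LUN carved out as the VMFS datastore
```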


I performed a test restore of a VM previously backed up using Veeam Backup & Replication v7, and the restore speed was around 11-12 MB/s. Then I performed a test backup of the restored VM, and the backup speed was a steady 56 MB/s. During both the test restore and the test backup, Veeam Backup & Replication v7 used Network Mode on a Gbit link.


My feeling is that only write performance is crazy slow compared to read performance; write performance seems capped somewhere. I created an SR yesterday and had SP Collects from both SPs analyzed by EMC Technical Support, and they only found that both SPS are faulted.


I am not writing this to complain about the write performance impact when the system write cache is disabled (I fully understand that when both SPS are faulted the array is not operating under normal conditions); I would only like to know how bad the impact is expected to be. Basically, is the 11-12 MB/s write speed I am getting normal under these conditions?


Can someone with more knowledge, or from EMC, please chime in? We need to put some virtual machines back into production as soon as possible, and unfortunately we have not yet received the quote from EMC for the two EMC-certified replacement SPS and the renewal of all expired maintenance contracts.


Any help would be greatly appreciated.


Thanks and Regards,


Massimiliano

4.5K Posts

April 21st, 2014 07:00

When the SP write cache is disabled due to a fault on the array (SPS faulted), write performance will be very low - the 11-12 MB/s you're seeing sounds correct. It could be a bit higher or lower depending on the load on the disks - without write cache, the drives are the limiting factor in both read and write performance.
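
As a rough illustration (the disk service time and I/O sizes below are assumptions for the sake of the example, not AX4 specs), the per-stream number scales with I/O size once every write has to wait for the disks:

```python
# Rough estimate of uncached, single-stream write throughput, where every
# write must complete its RAID 5 read-modify-write before being acked.
# disk_ms and the I/O sizes are illustrative assumptions.

def uncached_write_mb_s(io_kb, disk_ms=7.0, raid5_ios=4):
    latency_s = (disk_ms * raid5_ios) / 1000.0   # time per acked write
    return (io_kb / 1024.0) / latency_s

for io_kb in (64, 128, 256):
    print(f"{io_kb:>3} KB writes -> ~{uncached_write_mb_s(io_kb):4.1f} MB/s")
# 64 KB -> ~2.2, 128 KB -> ~4.5, 256 KB -> ~8.9 MB/s; with slightly faster
# disk service times, a 256 KB restore stream lands right around 11-12 MB/s
```

With write cache enabled, the array can coalesce incoming writes into full-stripe writes and hide that latency from the host entirely, which is why the difference is so dramatic.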

glen

2 Posts

December 11th, 2016 03:00

Restarting the faulted Storage Processor resolved my problem.

Caution: before restarting, verify PowerPath connectivity from the servers.

