March 20th, 2014 13:00

Round Robin Upgrade

Hello,

I am going to be upgrading some PS6510s, three to be exact. I have an upgrade pool that one of them sits in now, and the other two are sitting in the standard pool. I need to upgrade the one in the temp pool, then move the volumes off of one of the other arrays and onto the one that is in temp, then move the newly vacated array into temp for its upgrade, and then repeat that with the last remaining array.

My volumes are distributed across the two arrays in standard, and load balancing is enabled. Can I not pick and choose what I want to send where? If not, how invasive is the automatic load balancing to end users? Can I turn off the load balancing? These are View VMs and some other workloads. If I cannot pick and choose, I'm going to have to update, move an array into the standard pool, let it auto-balance, move it out to the upgrade pool, and then do that for the last one. Does this make sense? Does anyone have better experience with this upgrade path?

5 Practitioner

 • 

274.2K Posts

March 20th, 2014 18:00

Hello,

This is known as a rolling upgrade. It's not my preferred method of upgrading firmware, especially with the 65xx arrays, since they are so large and the data movement takes so long. Plus, IMHO, it just adds wear on the drives.

Instead of moving volumes, after the upgrade/restart, move the temp pool member back into the production pool. Let it start balancing; you will see it in the volume status as data starts moving to the new member. Then you can remove another member and move it to the temp pool.

Then repeat the process.   What versions are you starting with and what version are you going to stop at?

You could technically bind a volume to a single member, but that will cause a lot of data movement, and you'll have to unbind it later anyway. No matter what, data will be shifted around.

Here's my approach to firmware upgrades; it's how I have done them for the last 10 years.

1.) Make sure that all the servers and switches are configured to best practices: disk timeout values, login_timeout, portfast on the iSCSI switch ports, etc. This will allow you to ride out not only a firmware upgrade but also a controller failure followed by failover. (See the first sketch after this list.)

2.) Verify that all ports on both controllers are cabled and in the correct switch ports. Many failed-upgrade support calls are caused by that: the array reboots to install the new firmware and goes "dark." (Which leads to #3.)

3.) I prefer to use the serial port CLI method to perform all upgrades. In fact, I use a terminal server so that all the control modules are connected, which lets me monitor the entire process very closely. The side benefit is that if there is a problem, you are already connected and ready to triage with support; things like a bad flash card or missing cables show up right away, and one command shows you that there are no active ports. I go one step further and log the upgrade sessions to a file, so if something happens you can hand that log to support to speed up troubleshooting. (See the second sketch after this list.)

4.) To minimize risk, I like to do upgrades during low-I/O periods whenever possible. If the arrays are very busy, the failover process will take longer and could result in server errors, especially when the disk timeouts are left at their defaults.
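Since item 1 above is easy to overlook, here is a minimal sketch of checking (and optionally raising) the Windows disk I/O timeout that iSCSI hosts rely on while a controller fails over. This is only an illustration: the 60-second value is the commonly recommended target (normally set for you by the Host Integration Tools), so confirm the right value for your environment before changing anything.

# Sketch: check / set the Windows disk I/O timeout used during controller failover.
# Assumption: 60 seconds is the desired value; confirm against current best practices.
# Run from an elevated (administrator) Python session on each Windows host.
import winreg

DISK_KEY = r"SYSTEM\CurrentControlSet\Services\Disk"
DESIRED_TIMEOUT = 60  # seconds (assumed target value)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DISK_KEY, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key, "TimeOutValue")
    except FileNotFoundError:
        current = None
    print("Current TimeOutValue:", current)

    if current is None or current < DESIRED_TIMEOUT:
        winreg.SetValueEx(key, "TimeOutValue", 0, winreg.REG_DWORD, DESIRED_TIMEOUT)
        print("TimeOutValue set to", DESIRED_TIMEOUT)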
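And for item 3, a minimal sketch of logging a serial console session to a file so the whole upgrade is on record if support needs it. It assumes the pyserial package (pip install pyserial), a locally attached console rather than a terminal server, and the typical EqualLogic console settings of 9600 baud, 8-N-1; the device path and log file name are placeholders for your setup.

# Sketch: capture an array's serial console to a log file during a firmware upgrade.
# Assumptions: pyserial is installed, console settings are 9600 baud 8-N-1,
# and the port/file names below are placeholders. Stop logging with Ctrl-C.
import datetime
import serial  # pyserial

PORT = "/dev/ttyUSB0"        # placeholder; e.g. "COM3" on Windows
LOGFILE = "eql_upgrade.log"  # placeholder log file name

with serial.Serial(PORT, baudrate=9600, bytesize=8, parity="N",
                   stopbits=1, timeout=1) as console, \
     open(LOGFILE, "a", encoding="utf-8", errors="replace") as log:
    log.write("--- session started %s ---\n" % datetime.datetime.now().isoformat())
    while True:
        data = console.read(4096)   # returns b"" after the 1-second timeout if idle
        if data:
            text = data.decode("utf-8", errors="replace")
            print(text, end="")     # mirror to the screen so you can watch the upgrade
            log.write(text)
            log.flush()             # keep the file current in case something crashes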

Regards,


37 Posts

March 21st, 2014 15:00

Will this cause issues with two arrays being at 6.0.1 and one at 7.0.1? At one point two arrays will be at 7.0.1 while one is still at 6.0.1, before its upgrade. I was told that 6.0.8 is required before going to 7.0.1. Will this put the cluster at risk? If you have any supporting documentation for this, that would be great!

37 Posts

March 21st, 2014 15:00

Thank you for the update. Looks like we're going to create another pool and shift a member down into it. We will then be able to move the volumes around the way we wanted to with the members. We will also be going from 6.0.5/6.0.1 to 6.0.8 and then to 7.0.1.

5 Practitioner

 • 

274.2K Posts

March 21st, 2014 15:00

Hello,

If you are already at 6.0.x you don't need to go to 6.0.8 first.   Any revision of 6.0.x can go directly to 7.0.2.

37 Posts

March 21st, 2014 16:00

I have been working with a Dell representative.

5 Practitioner

 • 

274.2K Posts

March 21st, 2014 16:00

You don't want to go to 7.0.1; go to 7.0.2. It contains a fix for potential problems with 6.0.x and 7.0.x members in the same group when offload functions like VAAI in ESX and ODX in Windows 2012 are in use.

EQL firmware has what are known as compatibility levels. Any 6.0.x group will allow a 7.0.x member to be joined to it; it doesn't look at specific maintenance revisions, since maintenance revisions don't change the firmware compatibility level number.

Where did you hear you had to go to 6.0.8 first?  

For clusters, there's a note in the FW upgrade documentation strongly suggesting putting the cluster into maintenance mode before firmware updates.

37 Posts

March 21st, 2014 17:00

Also, on a side note, I will have to do a fresh SAN HQ build-out, as one of my PS100s is stuck at the 5.0.4 revision. It was also not communicated to me that part of the upgrade would require SAN HQ 3.0 for 7.0.2 to work.

4 Operator

 • 

9.3K Posts

March 22nd, 2014 18:00

Your PS100 should be old enough that it is completely out of warranty. It shouldn't really be used in a production (critical) environment.

Systems like that are great for test environments, backup staging (disk to disk to tape) and things like that.

37 Posts

March 24th, 2014 12:00

Thanks everyone!
