March 7th, 2013 18:00

Dell PowerEdge 1955 and 8GB DIMMs of RAM...

Has anyone tested an 8GB DIMM of DDR2 RAM in a Dell PowerEdge 1955?  I get that it isn't officially supported.  However, much of the documentation appears to have been written against the 1955's initial build levels.  Although there was no official version II or version III release of the 1955, I know there were no fewer than three main system board iterations released during the 1955's lifecycle.  After reading about how the Intel 5300 series quad-core CPUs behaved differently in the later motherboard iterations, I had to wonder whether those later revisions had also opened up the ability for the boards to accept the larger 8GB DIMMs.


We would welcome the opportunity to hear about others' experiences in pushing the Dell 1955 blades beyond their documented bounds.

 

Regards,

NPA.


March 8th, 2013 05:00

Got your message NPA.

I haven't tried 8GB DIMMs in any of my 1955's... mainly because my blade chassis is a home use unit and I couldn't see needing 64GB on my Hosts, plus the 8GB DIMMs are insanely expensive. I currently have 8 blades with dual Quad Core CPUs and 32GB (8 x 4GB), the last two are dual Dual Core I use as standalone servers.

Maybe buy a couple DIMMs and try it out. Let us know.

March 8th, 2013 12:00

We have some 8GB DIMMs on hand that we intend to start testing soon.  The PE1955 blade center is actually a recent acquisition for us.  We are still in the capacity-planning stage and building it up.

Prior to purchasing the PE1955 blade center we used a mix of Dell 1U and 2U standalone servers.  We do a lot with Microsoft Hyper-V virtualization and leverage failover clustering, cluster shared volumes, and Hyper-V's Live Migration.  Our data and product implementations reside inside the virtual machines.  Physical servers to us are just virtualization hosts that we see as pools of RAM and CPU.  As such, we started phasing out some of our older PE 860s in order to achieve greater resource density per U of server/rack space.  To get there we initially built up a few PE 1950s and PE 2950s.  Later, when Microsoft introduced dynamic memory, it opened up an avenue to increase our VM density per host.  We seized the opportunity to take it a step further and reduce off-peak running costs: by overbuilding one of our hosts with extra RAM (and maximum redundancy), we were able to live migrate all the VMs onto it during off-peak hours and shut down the unneeded nodes.  That overbuilt server was a PowerEdge 1950 G III with dual quad-core E5345s and 64GB of RAM ((8)x 8GB DIMMs).
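For anyone weighing the same off-peak consolidation trick, here is a rough back-of-envelope sketch (plain Python) of the kind of check we run before shedding nodes. Every number in it is a made-up placeholder for illustration, not a measurement from our environment:

# Rough capacity check for the off-peak consolidation described above.
# The VM working-set sizes and the 4GB parent-partition overhead are
# illustrative assumptions only.

OVERHEAD_GB = 4            # assumed RAM kept back for the host OS / hypervisor
big_host_ram_gb = 64       # the overbuilt PE 1950 G III mentioned above

# hypothetical off-peak working sets (GB) after dynamic memory has trimmed them
vm_demand_gb = [6, 4, 4, 3, 2, 2, 2, 1.5, 1, 1]

usable = big_host_ram_gb - OVERHEAD_GB
total_demand = sum(vm_demand_gb)

print(f"Off-peak VM demand: {total_demand} GB, usable on big host: {usable} GB")
if total_demand <= usable:
    print("All VMs fit on the overbuilt host; the remaining nodes can be shut down.")
else:
    print(f"Short by {total_demand - usable} GB; keep at least one extra node up.")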

Currently our PE1955 blade center has (10)x PE 1955 blades.  

Each blade is populated with (8)x 2GB DIMMs for 16GB total per blade.

Between what's in the PE1955's now and what we can commandeer from the existing PE 1950's / PE 2950's we have the below RAM inventory on hand:

DIMM Size | # of DIMMs in Inventory
----------|------------------------
1 GB      | 0
2 GB      | 80
4 GB      | 8
8 GB      | 10

Our hopes were to upgrade (5)x blades to 32GB using the below configuration.

Blade Slot # | DIMM Size
-------------|----------
1            | 8 GB
2            | 4 GB
3            | 2 GB
4            | 2 GB
5            | 8 GB
6            | 4 GB
7            | 2 GB
8            | 2 GB

And then keep the other (5)x blades as currently configured with 16GB ((8)x 2GB each).

Not sure how plausible it is; however, it would be optimal, since under that scenario we'd only need to buy (2)x additional 4GB DIMMs (quick arithmetic check below).
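Here is a quick Python sanity check of that plan, using the inventory and slot layout above; the shortfall line is the part that matters:

# Check how many DIMMs of each size the plan needs versus what is on hand.
from collections import Counter

inventory = Counter({8: 10, 4: 8, 2: 80})      # GB size -> DIMMs on hand
upgraded_blade = [8, 4, 2, 2, 8, 4, 2, 2]      # slots 1-8, 32 GB total
stock_blade = [2] * 8                          # slots 1-8, 16 GB total

needed = Counter()
for layout, blade_count in ((upgraded_blade, 5), (stock_blade, 5)):
    for size in layout:
        needed[size] += blade_count

print("Per-blade total on upgraded blades:", sum(upgraded_blade), "GB")
for size in sorted(needed, reverse=True):
    shortfall = max(0, needed[size] - inventory[size])
    print(f"{size} GB DIMMs: need {needed[size]}, have {inventory[size]}, buy {shortfall}")
# The only shortfall reported is (2)x 4GB DIMMs, matching the plan above.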

June 10th, 2014 15:00

I know this is an old thread, but I just wanted to write back that I was able to put 2x 8GB DIMMs in both a Gen 1 and a Gen 2 1955 blade without issue.

The Gen 1 has two dual-core CPUs; the Gen 2 has the later quad-core CPUs.

I put two 8GB DIMMs in each, and both booted fine showing 16GB of RAM.

After some further testing we'll probably try 8 of them and make sure we can get 64GB rather than the 32GB you get with 4GB sticks. Now I'm wondering if there is any chance they can take 16GB DDR2 DIMMs.  Getting those old blades to 128GB with some SAS SSDs or 15Ks would make for some reasonable VM hosting.


June 11th, 2014 09:00

Threads on the Internet are never "old" or "outdated". The information is always useful, and your input is very much appreciated.

With the 8GB DIMMs being recognized (at least one pair of them), that really does breathe new life into the 1955 series blades. With a couple of InfiniBand uplinks to core switches, the 1955 still has plenty of life left.

My next step will probably be RAM. All my blades are at 8 x 4GB now and I'd like to see 8 x 8GB. Although your test showed 2 x 8GB giving you 16GB, I still wonder if a total memory size limitation exists. Is 32GB the maximum regardless of individual DIMM size?

After RAM, I'll be moving away from individual 1Gb pass-through NICs and into InfiniBand trunk links.

As far as local storage, I use 146GB-15K SAS drives in RAID1, but they don't really do anything but boot the VMware ESXi Host. Any performance gains there would be worthless to me.

 

June 25th, 2014 02:00

I'm actually really glad you replied to this old thread. I also had a chance to do some additional testing with 8GB DIMMs of RAM, and I had inadvertently failed to update this post with my findings. My testing was fruitful, and I had some degree of success with a caveat. I was able to use 8GB DIMMs in my Gen 2 blades (motherboard part number YW433). The caveat, though, is that the YW433 boards (the newest / last released revision) were ONLY compatible with dual-rank DIMMs.

My Dell 1950 G III is maxed out with 8x 8GB DIMMs, but unfortunately most of the DDR2 DIMMs in that server are quad-rank. Since the Dell 1955 (YW433) would not accept quad-rank DIMMs, I was not able to test pushing beyond the 32GB memory limit. My expectation is that I would be able to push beyond the 32GB limit to 64GB so long as the DIMMs are all dual-rank. I will be buying additional dual-rank DIMMs as opportunities present themselves, and I will report back my additional findings.

As to whether you can use 16GB DIMMs, my expectation is no, as I believe that to achieve 16GB memory density on a single DDR2 DIMM the RAM would have to be quad-rank (however, I am not infallible and would welcome being wrong in this case). Hopefully the continued insight here will benefit others who happen to stumble across this.
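For what it's worth, here is the back-of-envelope rank math behind my doubt about 16GB DIMMs, as a small Python sketch. The assumed chip density (2Gbit) and x4 organization are my own assumptions rather than verified specs for any particular module, so take it as a rough guess:

# Estimate how many ranks a DDR2 DIMM of a given size would need.
CHIP_GBIT = 2              # assumed max commodity DDR2 chip density
CHIPS_PER_RANK = 64 // 4   # 64-bit wide rank built from x4 chips (ECC chips ignored)

gb_per_rank = CHIP_GBIT * CHIPS_PER_RANK / 8   # gigabits -> gigabytes
for dimm_gb in (8, 16):
    ranks = dimm_gb / gb_per_rank
    print(f"{dimm_gb} GB DIMM -> roughly {ranks:.0f} ranks under these assumptions")
# Roughly 2 ranks (dual rank) for 8 GB and 4 ranks (quad rank) for 16 GB,
# which is why I suspect 16 GB DDR2 parts would be quad rank.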


February 9th, 2015 10:00

Been a while, but now that 8GB DIMMs are "reasonable", I was able to procure 8 x 8GB and perform some testing. All testing was done with the two newest 1955 motherboard revisions (as defined in another forum post around here).

First test was to populate the blade with 8 x 8GB:

Powered up normally and started the BIOS memory check...

I waited on this screen (with no movement on the progress bar) for a few minutes and decided I'd come back later. I returned about 45 minutes later and there was no change. I figured that was the end of that test, but just out of curiosity, I hit the space bar (to abort the memory test)...

It actually aborted and continued the boot process. I went into the BIOS and it showed (oddly enough) 63.25GB...

I had high hopes at this point, so I let it boot normally into ESXi 5.5. Unfortunately, it stopped at "Loading /sb.v00...". I figured I'd give it some time and left.

I came back about an hour later and it was still hung. I considered it a test failure. Since I occasionally reboot my Hosts from remote, I really couldn't consider 64GB an option given it didn't even make it past the BIOS memory test, but I just wanted to check.


February 9th, 2015 15:00

I figured, since I was in the mood for testing, what about 48GB? I pulled the last two DIMMs out (leaving 6 x 8GB) and booted it up. It completed the BIOS memory test, but then complained...

I have always assumed the memory was addressed in pairs, but it must only do "binary" pairs... 2, 4, 8, 16, etc. Out of curiosity, I let it boot up. ESXi 5.5 came up fine and seemed to run normally.

I didn't like the unoptimized memory configuration, so I shut down the Host and swapped the last 4 DIMMs for 4GB DIMMs (4 x 8GB and 4 x 4GB). It booted normally, the BIOS memory check completed normally, and ESXi 5.5 came up normally. It has been running in this configuration for a few days now.

I guess I'll just run with 48GB Hosts. No big deal... just a home test environment anyway.

Hope this information helps someone.

July 22nd, 2015 19:00

I thank you for your update. I was wondering the same thing just now.

October 21st, 2015 14:00

Hi there. Thanks for the post, photos, and testing.

Looking at your BIOS screen showing just under 64GB of RAM available, I wondered if the 1955 would work fine with anything under 64GB. I put in 6x 8GB and 2x 4GB and have had no problems booting up, memory tests, etc. No boot message about an incorrect memory configuration. 56GB of RAM will work pretty well. I think we'll extend the life of our 1955s by filling them with 2TB SSDs and 56GB of RAM. Still good performance and plenty of space for VMs on Server 2012 R2 Hyper-V. I'm guessing VMware would work too. The blades are long in the tooth, but for some tasks they still seem fine.

It looks like 56GB of RAM is the max you can get to boot in the 1955s. This is on the Gen 2 dual quad-core, but I think the Gen 1 works the same. I'll post back if I get a chance to test it.


October 22nd, 2015 15:00

Thanks for the follow-up tests, ScottWizard. I didn't think to test 6x one size and 2x another. Good idea, and thanks for posting the results.

I've been wanting to move to an m1000e chassis, but was unable to get one within my budget until recently. I found a chassis with 8 blades for a reasonable cost. All 8 blades are M600s in the same configuration... dual 3.0GHz quad-core CPUs, 32GB RAM, two dual-port 1Gb mezz cards (6 NICs total), and dual SSD boot drives. I thought for SURE these would support 64+GB, but surprisingly, they behave the exact same way as the 1955s! They do pass the BIOS RAM test without interaction, but show 63.25GB. ESXi boots and runs normally, also without interaction. This was all on the newest BIOS (2.4.0).

What's odd is that I installed an m1000e chassis with a couple of M905 blades for a company I used to consult for, and those blades had 8 x 16GB DIMMs (128GB) and worked like a champ. They showed all the RAM as available and had no issues. Maybe it has something to do with dual vs. quad rank, as mentioned above.

Since there wasn't much change in the behavior, I only took one screen shot.

Since Dell has all but broken the SUU ISO as a self-booting image, I had to install Windows Server 2008 Standard so I could install the SUU and OpenManage. I only installed it once and moved the drive from blade to blade to update the various firmware components on each blade.

As seen in the screenshot, the RAM is correctly shown as 65536MB (64GB), usable is correctly shown as 32768MB (32GB, the Standard Edition limit), but the "maximum capacity" is oddly shown as 65280MB (64GB minus 256MB). Why would Dell chop 256MB off the top if you install 64GB? Your inclination would be to expect the same loss at any installed capacity. Maybe it uses the 256MB for shared video RAM or something. But why only with 64GB?

I have to update the CMC firmware on the m1000e, then add it to the ESXi cluster. I think the vMotion compatibility will be the same, so I can just hot vMotion all my Guests over and prepare my 1955 environment for sale to the next owner.

October 27th, 2015 11:00

Thanks for all the information. The fact that the M1000e chassis and blades can handle 10Gb connections for a cluster makes me think we may go the same way in the future.

I ran a few more tests and found out that you can install all 64GB of RAM if you enable the redundant memory options.  Both the spare and mirrored RAM settings in the BIOS allow the server to boot just fine.  Spare mode takes one bank of 2 sticks out of production and sets it aside; if the production RAM starts throwing too many errors, it writes the contents of the failing RAM bank out to the redundant bank and lets you keep running until you can replace the bad RAM.  It can't survive uncorrectable errors or outright failed RAM, but you are only down 2 sticks out of 8.  So I saw 48GB of RAM when using the spare memory option, with 16GB set aside for redundancy.

The other option I tried was mirroring. That is pretty much like RAID1 for your memory: you lose half your RAM to redundancy, but you can survive a complete failure of the RAM on either side of the mirror.  So 64GB becomes 32GB of fully redundant RAM.

I don't really need that redundancy, so I'll be sticking with my 56GB of RAM, which seems like the most you can fit into the 1955 blades.

To recap, with 64GB of RAM installed in a 1955 you can run it two ways:

32GB of usable RAM in mirrored mode

48GB of usable RAM in spare mode

And the maximum RAM it looks like you can boot a 1955 into an OS with is 56GB, using 6x 8GB and 2x 4GB sticks. (There's a quick sketch of the redundancy-mode arithmetic at the end of this post.)

I'll be running my 1955s with 56GB RAM and 2TB SSDs. I can get quite a few really good performing VMs onto that hardware.
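To put that recap in numbers, here is a tiny Python sketch of the usable-RAM math for 8x 8GB under each redundancy mode. The "one bank of 2 sticks set aside" behavior in spare mode is simply what I saw in the BIOS, so treat it as observed behavior rather than a spec:

# Usable RAM under the 1955's redundancy modes with 8 x 8GB installed.
dimms = [8] * 8                      # GB per DIMM
installed = sum(dimms)               # 64 GB

mirrored = installed // 2            # RAID1-style: half the RAM is the mirror copy
spare = installed - 2 * dimms[0]     # one bank of two DIMMs held back as the spare

print(f"Installed:            {installed} GB")
print(f"Mirrored mode usable: {mirrored} GB")   # 32 GB
print(f"Spare mode usable:    {spare} GB")      # 48 GB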

June 17th, 2016 07:00

Hi.

Can you please share the models and specs of the memory you used? (Voltage, brand, and model numbers, perhaps.)
