I am a newbie to the world of blades, and as we look ahead to our next build-out I want to see if they are worth exploring. Our environment runs a lot of application servers on PE1950s - the faster the chipset, the better. We are seeing the best utilization from the quad-core CPU:
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
cpu MHz    : 2660.005
Memory on a typical server is 8GB. From looking at the M1000e, it seems to offer a compelling value proposition, fitting 16 servers in a 10U profile:
1. Is the power consumption proportionately lower than deploying 16 x PE1950?
2. Which of the I/O subsystems will be shared among the guest nodes on the chassis? These are web servers, so network is a valued resource.
Recommendations, insights, and experiences would be great to hear.
The power consumption of a blade chassis with 16 blades is lower than 16 1950s - assuming you compare comparable configurations. I haven't found a single whitepaper or study that compares the M1000e chassis and M600 blades with a 1950, but by looking at the numbers from two different studies I think you can get a general idea.
In this study on blade efficiency - http://www.dell.com/downloads/global/products/pedge/en/pe_blades_specjbb2005.pdf - a full chassis under peak load consumed 3524 watts, or approximately 220 watts per blade.
In a study that Scott did on virtualization workload consolidation and the power savings you can achieve, he measured the power consumption of a 2950 under high load at 440 watts.
So even though these two numbers come from different workloads, I think you can draw the general conclusion that blades are more efficient.
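Putting those two studies' figures side by side, a quick back-of-the-envelope comparison looks like this (a rough sketch only - the 440 W figure is the 2950 measurement above, used as a stand-in for a comparably configured rack server, since the workloads differ):

```python
# Back-of-the-envelope power comparison using the figures quoted above.
# CHASSIS_WATTS comes from the Dell SPECjbb2005 blade study; the 440 W
# number is the 2950 measurement from Scott's virtualization study,
# used here as a rough proxy for one rack server under high load.

CHASSIS_WATTS = 3524       # full M1000e chassis, 16 blades, peak load
BLADES_PER_CHASSIS = 16
RACK_SERVER_WATTS = 440    # one 2950 under high load

watts_per_blade = CHASSIS_WATTS / BLADES_PER_CHASSIS
rack_total = RACK_SERVER_WATTS * BLADES_PER_CHASSIS

print(f"per blade:       {watts_per_blade:.0f} W")    # ~220 W
print(f"16 rack servers: {rack_total} W")             # 7040 W
print(f"chassis saving:  {rack_total - CHASSIS_WATTS} W "
      f"({1 - CHASSIS_WATTS / rack_total:.0%})")
```

Even allowing for the different workloads, the gap is large enough (roughly half the power for the chassis) that the general direction of the conclusion holds.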
In regards to your other question - the I/O modules that you put in the back of the chassis have a great deal to do with it. There are options for PowerConnect and Cisco switches, as well as Ethernet and Fibre Channel pass-thrus. So you do have options, including no sharing at all with the pass-thrus, although most people want to use the switches to reduce cable and port costs.
Another resource that can be used to estimate power consumption is the Dell datacenter capacity planner - www.dell.com/calc. You can go to this tool and do a configuration of a full blade chassis vs. 16 1950s with a similar config. I've done it a few times and got numbers similar to what I had in my initial reply.
All of the energy consumption numbers are highly dependent on configuration. This tool was created to help everybody understand how differences in configuration and form factor can affect things like power consumption.
1. Thanks for the link to the Dell power consumption calculator - that matches what I have observed on our 1950s used as web servers - 400 watts.
2. Since virtualization is not my target right now, maybe the M1000e blade solution isn't worth exploring.
On the application server using the PE1950 - E5430 chipset with 16GB RAM and 2 x 1TB SATA drives - the system is already being driven to 80+% CPU utilization with a noticeable io_wait. I was looking at the M1000e purely as a way to get space and power optimization with the same throughput as the 1U PE1950; with shared I/O channels [which make sense for virtualization], it might actually be sub-optimal for running my current apps.
Is that a correct interpretation of your responses?
I only used virtualization studies to show the power consumption numbers - because that's what is out there. The power consumption for any workload should be similar at similar CPU utilization numbers.
The I/O options for the blades should not be a limiting factor. If you use the pass-thru modules, the network connections go straight through to your network switch, just as they would from a 1950. There are no shared channels with the pass-thrus, and the throughput should be the same as a 1950.
The max internal disk size for blades is currently 300GB per disk - so if you really need 1TB disks - the blades won't work.
I think the real question is why you are seeing IO wait. Is it because the 2 local SATA disks are overloaded? Or is it network related? Often with disks, you need more of them to keep up with IO demands, even though you have enough free disk space. There are a few options for more disks, including using 2950s for more internal drives or an external iSCSI array.
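One quick way to answer the "disk or network?" question is to look at the per-device `%util` column in `iostat -x` output. Below is a minimal sketch of pulling that out programmatically; the sample output and the device names in it are made up for illustration - run `iostat -x 5` on the actual server and read the live numbers instead:

```python
# A minimal sketch of spotting saturated disks from captured
# "iostat -x" output. The SAMPLE_IOSTAT text below is invented for
# illustration -- the numbers are not from the poster's server.

SAMPLE_IOSTAT = """\
Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm  %util
sda        0.10  12.40  5.20 88.10  180.00 3200.00    33.50     4.80 51.40 10.50  98.20
sdb        0.00   0.20  0.10  1.30    4.00   48.00    37.10     0.01  3.90  2.10   0.30
"""

def saturated_disks(iostat_text, util_threshold=90.0):
    """Return device names whose %util exceeds the threshold."""
    lines = iostat_text.strip().splitlines()
    header = lines[0].split()
    util_col = header.index("%util")
    hot = []
    for line in lines[1:]:
        fields = line.split()
        if float(fields[util_col]) > util_threshold:
            hot.append(fields[0])
    return hot

print(saturated_disks(SAMPLE_IOSTAT))  # -> ['sda']: that disk is nearly 100% busy
```

A device pegged near 100% `%util` with a high `await` means the spindles can't keep up, which is exactly the "you need more disks" situation; if the disks look idle while io_wait is high, look at the network or NFS mounts instead.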
Prasana - As the Dell Blade Product Manager I thought I would add a little to Todd's comments.
The throughput on the M600/M605 will be the same as a 1950 - you should see very similar performance. Part of our goal in the design was to make sure you didn't have to sacrifice functionality or performance when choosing the best form factor for you. So no matter which form factor you choose, 1U or blade, you'll get similar functionality and performance.
We'll never say that blades are for everyone but they do have some unique advantages. I tried to write this paper to show the advantages of blades: http://www.dell.com/downloads/global/products/pedge/en/pe_m1000e_next_gen.pdf
Todd: The io_wait is coming from the local disk, and your point about needing more spindles is valid. Here's what I have tried:
1. Swapped the SATA drives out for SAS, and the io_wait went down to nil - in percentages, io_wait dropped from 30 to 7 [max].
2. And I'd like to throw into this mix a PE2950 with 6 x SATA spindles - I haven't had a spare yet to throw that workload onto a PE2950.