
Article Number: 000138566

Memory Information for 11th Generation AMD Processor Servers

Article Content


Article Summary: This article provides information on the memory types and error correction supported in Dell 11th Generation PowerEdge servers with AMD Processors.


The 11th generation of Dell PowerEdge servers uses DDR3 memory.  The dual AMD processor servers (PowerEdge R415 and R515) employ the AMD 4100 processor with the SR5670 chipset; these chips determine the speed and configuration of DDR3 DIMMs that can be used.  In 2011 the servers were updated to the AMD 4200 processors and in early 2013 to the AMD 4300 processors; these models are designated with a Roman numeral II after their names.  The PowerEdge R415II and R515II added a new type of DDR3 memory, the Load Reduced DIMM (LRDIMM), and support for DDR3 1600 MHz DIMMs.
The PowerEdge R715, R815, and M915 use DDR3 memory with the AMD 6100 processor and SR5670 chipset.  These servers can run in dual and quad processor configurations (R815 and M915) and hold more memory.  They were updated to the AMD 6200 processors in 2011 and to the AMD 6300 processors in early 2013.  Like the 4200/4300 series, they can use LRDIMMs and added support for DDR3 1600 MHz DIMMs.  The motherboard G34 processor sockets are compatible with the 6200 and 6300 processors, so the PowerEdge R715, R815, and M915 can be upgraded as long as the BIOS is updated, but they will not support all the features of the newest memory types (LRDIMMs and 1600 MHz RDIMMs); that would require a motherboard replacement.
The DDR3 memory interface consists of two channels to each CPU socket on the R415 and R515 and four channels with the R715, R815, and M915.  
Either Registered DIMMs (RDIMM), Unbuffered DIMMs (UDIMM), or LRDIMMs may be installed, but the types cannot be mixed.
Only single and dual rank UDIMMs are supported, with a maximum of two UDIMMs per memory channel.
The systems support single, dual, or quad rank DIMMs, with a maximum of two quad rank RDIMMs per channel or three quad rank LRDIMMs per channel.
The interface uses 2 GB, 4 GB, 8 GB, and 16 GB RDIMMs or 1 GB and 2 GB UDIMMs.  32 GB RDIMMs and LRDIMMs were added with the AMD 4200 and 6200 releases.  For higher memory capacity and speed, quad rank LRDIMMs of 16 GB and 32 GB are supported and run at 1.5 V.
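The per-DIMM capacities and per-channel limits above determine a system's maximum memory as simple multiplication. A minimal sketch of that arithmetic (the example socket and channel counts below are illustrative assumptions; consult the server owner's manual for exact slot counts and supported totals):

```python
# Rough maximum-capacity arithmetic from the DIMM limits described above.
# Socket/channel/slot counts here are assumptions for illustration only.

def max_capacity_gb(sockets, channels_per_socket, dimms_per_channel, dimm_gb):
    """Total memory if every slot holds the same-size DIMM."""
    return sockets * channels_per_socket * dimms_per_channel * dimm_gb

# Hypothetical dual-socket, four-channel system with three 32 GB LRDIMMs
# per channel:
print(max_capacity_gb(2, 4, 3, 32))  # 768
```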

Note: The five AMD servers have many RAS (Reliability, Availability, and Serviceability) features to detect and correct memory errors. These use standard ECC technologies supported by the memory controllers on the processors.


Registered DDR3 Memory

Registered memory has a register on the DIMM that buffers the control and address lines only.  The data is not buffered; ECC technologies are used to check data integrity.  The register chip on the DIMM reduces electrical loading on the processor's memory controllers, which see only the register and do not have to address the memory chips directly.  This means you can have more ranks per channel for RDIMMs, up to eight total.  With DDR3 architecture you can use four dual rank RDIMMs or two quad rank RDIMMs per channel.  None of the AMD servers exceed this rank limit; however, with quad rank RDIMMs the maximum speed is 1066 MHz.
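The eight-ranks-per-channel ceiling described above can be expressed as a simple population check. A sketch, using the standard single/dual/quad rank counts:

```python
# Sketch of the DDR3 eight-ranks-per-channel rule for RDIMMs.
RANKS = {"single": 1, "dual": 2, "quad": 4}
MAX_RANKS_PER_CHANNEL = 8

def channel_population_ok(dimm_ranks):
    """dimm_ranks: list of rank types for the DIMMs on one channel."""
    return sum(RANKS[r] for r in dimm_ranks) <= MAX_RANKS_PER_CHANNEL

print(channel_population_ok(["dual"] * 4))  # True: 4 dual rank = 8 ranks
print(channel_population_ok(["quad"] * 2))  # True: 2 quad rank = 8 ranks
print(channel_population_ok(["quad"] * 3))  # False for RDIMMs: 12 ranks
```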
Servers using memory interleaving have better performance with a higher number of RDIMMs installed and also have more options for failover, redundancy, and data recovery when corruption occurs.  Interleaving spreads data across banks of memory, so contiguous reads and writes of memory addresses use multiple DRAMs, which can greatly improve performance.  Interleaving can occur within each channel (rank), between memory controllers (channel), or between processors (node), and works best on the 6x00 processor servers with completely and symmetrically populated memory.  NUMA (Non-Uniform Memory Access) uses bank and channel interleaving, but enabling Node Interleaving disables NUMA.  Most software is NUMA aware and can address all the memory (whether local or through the HyperTransport interconnect) instead of treating it as the separate regions used in UMA or traditional SMP configurations.  Node Interleaving is disabled by default in the BIOS.
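The performance effect of channel interleaving comes from consecutive addresses landing on different channels. A toy model of that mapping (the 64-byte block size and simple modulo scheme are illustrative assumptions; the real controller's address hashing is more involved):

```python
# Toy model of channel interleaving: consecutive cache-line-sized blocks
# land on consecutive channels, so a contiguous read touches every channel.
LINE = 64  # assumed cache-line size in bytes

def channel_for(addr, n_channels):
    return (addr // LINE) % n_channels

# Four consecutive lines on a 4-channel system hit channels 0, 1, 2, 3:
print([channel_for(a, 4) for a in range(0, 256, 64)])  # [0, 1, 2, 3]
```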

Unbuffered DDR3 Memory

Unbuffered memory has no buffer built into the DIMMs, so the memory controller has a direct connection to the memory and accesses each memory chip individually in parallel.  This puts more electrical loading on the memory channel, so to maintain signal quality fewer memory chips can be used.  Only two UDIMMs per memory channel can be installed, and the system may not POST with more installed.  UDIMMs draw less power and are slightly faster than RDIMMs in a single DIMM per channel configuration.  Dell systems ship with ECC UDIMMs.

Low Voltage DIMMs

Low Voltage DIMMs can operate at a lower voltage (1.35 V), with possible power savings of up to 15%.  The 4-core 4100 series processors will run LVDIMMs at standard voltage (1.5 V).  There are some limitations imposed by processor and memory configuration: systems with two DIMMs per channel (DPC) will run at a lower speed if 1.35 V is set in the system BIOS.
Supported DDR3L RDIMMs are 2 GB x8, 4 GB x8, 8 GB x8, 16 GB x4, and 32 GB x4 DRAM modules.
The AMD 6x00 systems have four memory controllers on each processor and can run at 1600 megatransfers per second (MT/s) on the DDR3 architecture, so 1600 MHz single and dual rank memory can run at its rated speed, but speed drops to 1066 MHz for quad rank DDR3L RDIMMs.  A fully populated system will also run at reduced speed.

The AMD 4x00 systems have only two DDR3 memory channels and are more limited in speed.  Systems with 1.35 V DDR3L operate at these speeds:

  • 1600 MHz DDR3L RDIMMs operate at a maximum of 1333 MHz at 1.35 V and 1066 MHz with 2 DPC.
  • 1333 MHz DDR3L RDIMMs operate at the rated speed at 1.35 V but at a maximum of 1066 MHz at 1.5 V.
  • 32 GB x4, 4 Gb DRAM DDR3L RDIMMs are rated for 1333 MHz but operate at a maximum of 1066 MHz, and at 800 MHz with 2 DPC.
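The speed rules above can be collapsed into a small lookup function. This sketch encodes only the cases the list states; combinations the article does not cover return `None`, and actual POST behavior also depends on the processor and overall population:

```python
# Sketch of the AMD 4x00 DDR3L RDIMM speed rules listed above.
def ddr3l_speed(rated_mhz, voltage, dpc, is_32gb_x4=False):
    """Operating speed in MHz for the listed cases; None if not covered."""
    if is_32gb_x4:                       # 32 GB x4, 4 Gb DRAM DDR3L
        return 800 if dpc == 2 else 1066
    if rated_mhz == 1600 and voltage == 1.35:
        return 1066 if dpc == 2 else 1333
    if rated_mhz == 1333:
        return 1333 if voltage == 1.35 else 1066
    return None

print(ddr3l_speed(1600, 1.35, 1))  # 1333
print(ddr3l_speed(1600, 1.35, 2))  # 1066
print(ddr3l_speed(1333, 1.5, 1))   # 1066
```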

Systems with 1.35 V DDR3L memory modules operate at 1.5 V if any of the following conditions exist:

  • AMD 4100 series 4-core processors are installed
  • A combination of standard and low voltage memory modules is installed
  • The BIOS is set to 1.5 V


NOTE: The PowerEdge M915 supports only LVDIMMs, for power reasons. UDIMMs and LRDIMMs are not supported.


Load Reduced DIMMs

LRDIMMs have buffer chips for the data lines as well as the control and address lines.  Unlike Registered memory, only a single load per DIMM is presented to the memory controller.  This reduces electrical loading on the memory controller, allows rank multiplication, and increases memory capacity and speed.  The last two generations of AMD processors and chipsets allow three quad rank DIMMs on a memory channel for the first time.  This gives the servers a larger memory footprint than before with no loss of speed, though LRDIMMs draw more power and add some latency to the memory bus.  Registered memory takes a serial memory signal and, using a redriver, replicates the signal, which connects serially to each DRAM; there are limits to increasing the speed of this signal to RDIMMs.  LRDIMMs use an Isolation Memory Buffer (iMB) chip, which takes the serial input and converts it to a parallel signal that fans out to each DRAM on the DIMM.
LRDIMMs allow Rank Multiplication.  The DDR3 memory architecture only allows eight ranks per channel, so adding three quad rank DIMMs would mean 12 ranks to address.  The iMB chip handles this, so the memory controller and BIOS simply see each physical quad rank DIMM as a logical dual rank DIMM.  Because the LRDIMM is presented as a single load, quad rank DIMMs no longer have to drop speed to maintain signal integrity when more than one is added to a channel.  So unlike RDIMMs, LRDIMMs should run at their rated speed as long as a slower DIMM is not mixed in.
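Rank multiplication can be sketched numerically: the iMB halves the rank count the controller sees, which is why three quad rank LRDIMMs fit under the eight-rank addressing limit that would otherwise be exceeded:

```python
# Sketch of LRDIMM rank multiplication: each physical quad rank DIMM is
# presented to the memory controller as a logical dual rank DIMM.
def logical_ranks(ranks_per_dimm, n_dimms, multiplication=2):
    physical = ranks_per_dimm * n_dimms
    return physical, physical // multiplication

phys, logical = logical_ranks(4, 3)  # three quad rank LRDIMMs on a channel
print(phys, logical)                 # 12 physical ranks seen as 6 logical
print(logical <= 8)                  # True: within the DDR3 addressing limit
```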

Note: Even though they buffer the data, address, and control lines, Fully Buffered DIMMs (FBDIMMs) operate differently than RDIMMs and LRDIMMs. FBDIMMs are a DDR2 technology; they are not supported and will not fit in these servers.


Memory RAS Features

All the memory shipped with the AMD servers uses standard Error Correction Code (ECC), in which the memory controller detects and tries to correct Single Bit Errors (SBE) and detects multibit errors.  The chipset also supports Enhanced ECC, which tries to correct control line and addressing errors, and Single Device Data Correction (SDDC).  Chipkill ECC (sometimes called SDDC) runs ECC checking across two ranks (128 data bits, 16 parity bits) instead of one (64/8), allowing recovery from some multibit errors.  The SDDC algorithm can detect and correct multibit errors on x4 DRAMs and can detect x8 DRAM errors, so these systems can recover from the failure of a single 4-bit DRAM chip and detect 8-bit internal data errors on two DRAM chips.  If interleaving is enabled, Enhanced ECC and SDDC with x8 DRAMs are not allowed.
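The single-bit-correction principle behind ECC can be shown with the small Hamming(7,4) code. This is a teaching example only: the DIMMs actually use a 72-bit SECDED code over 64 data bits, and SDDC is a stronger symbol-based scheme, but the locate-and-flip mechanism is the same:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit can be located by the syndrome and corrected.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    """c: 7-bit codeword -> 4 data bits, correcting up to one flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based position of the error
    c = list(c)
    if pos:
        c[pos - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
code[2] ^= 1                        # simulate a single-bit soft error
print(decode(code) == word)         # True: the error was corrected
```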

Note: Memory Mirroring is not supported on the AMD servers, but Memory Sparing is. The system allocates a rank or DIMM in each channel as a spare and, if an error is detected, moves data to it from the failing DIMM in the channel. All memory slots must be populated to use Memory Sparing, and one eighth of the memory is not accessible to the BIOS or operating system.
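The capacity cost of Memory Sparing follows directly from holding one rank in eight as a spare. A sketch of that arithmetic, assuming eight equally sized ranks per channel:

```python
# Sketch: with one of eight ranks per channel reserved as a spare, the
# BIOS and OS see seven eighths of the installed memory.
def usable_with_sparing(installed_gb, ranks_per_channel=8, spare_ranks=1):
    return installed_gb * (ranks_per_channel - spare_ranks) // ranks_per_channel

print(usable_with_sparing(128))  # 112 GB visible, 16 GB held as the spare
```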



The server memory interface supports memory scrubbing (sequential or redirection) and patrol scrubbing for single-bit correction and multi-bit error detection.  The purpose of scrubbing is to stop an accumulation of soft errors, especially multibit errors that can cause data corruption and BSODs.  Sequential scrubbing scans the memory (the cache, actually) and fixes any correctable error it finds; that is, just the data going to the CPU.  With redirection scrubbing, if a correctable error is detected, the scrubber is redirected to the location of the error in physical memory and fixes it there, then returns to sequential scrubbing at the original location in memory.
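The scan-detect-repair-resume loop of redirection scrubbing can be sketched as a toy control flow. Here memory is modeled as a list of words with a `correctable_error` flag standing in for an ECC syndrome (an illustration of the loop only, not the hardware mechanism):

```python
# Toy scrubber: scan sequentially; on a correctable error, redirect to that
# physical location, repair it, then resume the sequential scan.
def scrub(memory):
    """memory: list of dicts {"value": int, "correctable_error": bool}.
    Returns the physical indexes that were repaired."""
    fixed = []
    for addr, word in enumerate(memory):
        if word["correctable_error"]:
            word["correctable_error"] = False   # redirect and repair in place
            fixed.append(addr)
        # sequential scrubbing resumes from the same position
    return fixed

mem = [{"value": v, "correctable_error": (v == 2)} for v in range(4)]
print(scrub(mem))  # [2]
```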


Note: Patrol scrubbing scans the physical memory at idle times for data errors; if a correctable SBE is detected it fixes the bad bit, and if that fails it logs the error. Patrol scrubbing is done periodically on all populated memory locations.


Article Properties

Affected Product

Servers, PowerEdge M915, PowerEdge R415, PowerEdge R515, PowerEdge R715, PowerEdge R815

Last Published Date

21 Feb 2021


