Moderator

 • 

9.7K Posts

July 14th, 2015 05:00

Harshal Patel,

The R720 will support up to four full-length single-wide GPUs or two full-length double-wide GPUs. The GPUs are installed on the PCIe x16 Gen2 interfaces available on Riser 2 and the GPU-optional Riser 3. To install two internal GPUs in the system, the GPU-optional Riser 3 has to be present. You will also need to take the following restrictions into consideration:

  • Requires two CPUs
  • CPUs must be 95W or less
  • Maximum of two double-wide GPUs
  • Maximum of four single-wide GPUs
  • All GPUs must be the same type/model
  • GPUs require redundant 1100W PSUs and the GPU enablement kit
  • Two double-wide GPUs require the optional Riser 3
  • Four single-wide GPUs cannot be used with the optional Riser 3
  • TBU not supported

As you can see, the redundant 1100W PSUs are required because of the high power demand of the GPUs.

Lastly, to monitor power consumption on the server you would need to use OpenManage Server Administrator; the correct package depends on the OS installed. If you let me know which OS you are running, I can get you the proper links. This link here also addresses power consumption monitoring on PowerEdge servers.
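If your OS has generic IPMI tools available, you can also read the power sensors directly from a shell without OMSA. This is a general IPMI sketch, not Dell-specific documentation, and the exact sensor names vary by platform:

```shell
# Read power-related sensors over the local IPMI interface.
# Assumes ipmitool is installed and run as root.
modprobe ipmi_devintf ipmi_si     # load the local IPMI interface drivers
ipmitool sdr type Current         # list current/power-draw sensors
ipmitool sensor | grep -i power   # alternative: filter the full sensor list for power readings
```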

Let me know if this helps answer your questions.

1 Rookie

 • 

5 Posts

July 14th, 2015 09:00

Hi Chris,

Thanks for the information!
I have CentOS 7 installed on the R720. Can you please send me the correct link to download OpenManage Server Administrator for it, along with the user manual?

Thank you,

Harshal Patel
HPC Systems Engineer
Signalogic Inc.

1 Rookie

 • 

5 Posts

July 14th, 2015 10:00

Hi Chris,

Also, if the GPU is 150W or less, are the 1100W supplies still recommended?

Thank you!

Harshal Patel
HPC Systems Engineer
Signalogic Inc.

Moderator

 • 

9.7K Posts

July 14th, 2015 11:00

You will find the download here. To install, log in to the system as root and cd to the directory with the unzipped contents. Then run ./setup.sh and follow the guided installation steps.
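On CentOS the full sequence usually looks like the following sketch; the tarball name below is a placeholder for whatever file the download page actually provides:

```shell
# Unpack and install OpenManage Server Administrator on a Red Hat-family OS; run as root.
# The archive name is a placeholder for the file from the download page.
mkdir -p /opt/omsa && cd /opt/omsa
tar -xzf /tmp/OM-SrvAdmin-Dell-Web-LX.tar.gz   # unpack the downloaded archive here
./setup.sh                                     # guided installer described above
srvadmin-services.sh start                     # start the OMSA services after install
```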

You can find all the OpenManage Server Admin guides here.

To answer your other question. The 1100w is required, regardless of the voltage drawn by the GPU.

Hope this helps.

1 Rookie

 • 

5 Posts

July 14th, 2015 14:00

Hi Chris,

Thank you very much for such quick response! It sure does help.

I have installed a full-length GPU board. I am monitoring the GPU's temperature, and under load it gets very close to its temperature limit at the back end of the board.

Is there any way to manually raise the fan speed, e.g. running the fans at max speed?

NOTE:
I do not have low-profile heat sinks installed.

Thank you,

Harshal Patel
HPC Systems Engineer
Signalogic Inc.

Moderator

 • 

9.7K Posts

July 15th, 2015 07:00

I would verify that the BIOS and iDRAC are up to date. I ask because the fans are controlled by the iDRAC, so updating it will ensure it is adjusting the fans accordingly.

You can find the current versions of the BIOS and iDRAC here.
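In case it helps, here is a quick way to see what versions you currently have before updating, assuming ipmitool and dmidecode are installed on the OS:

```shell
# Check the installed BIOS and iDRAC (BMC) firmware versions from the OS; run as root.
dmidecode -s bios-version                        # system BIOS version
ipmitool mc info | grep -i 'Firmware Revision'   # iDRAC firmware version
```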

Let me know how it goes.

1 Rookie

 • 

5 Posts

July 16th, 2015 08:00

Hi Chris,

Thank you very much for the information. I found that the fan speed can be controlled from the iDRAC. I have increased the fan speed to max, and the temperature now looks to be under control and within a "safe limit".
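For anyone else who finds this thread: the raw IPMI commands commonly shared for forcing fan speed on these PowerEdge generations look like the following. They are community-documented rather than an official Dell interface, so use them with care:

```shell
# Community-documented raw IPMI fan-control commands for 11G/12G PowerEdge servers;
# not an official Dell interface. Run as root with ipmitool installed.
ipmitool raw 0x30 0x30 0x01 0x00        # take the fans out of automatic control
ipmitool raw 0x30 0x30 0x02 0xff 0x64   # set all fans to 100% (0x64 = 100 decimal)
ipmitool raw 0x30 0x30 0x01 0x01        # hand control back to the iDRAC when done
```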

Thank you once again. Your guidance helped a lot.

Thanks!
Harshal Patel
HPC Systems Engineer
Signalogic Inc.

1 Rookie

 • 

2 Posts

April 4th, 2019 09:00

I picked up a second-hand Dell T720, upgraded the CPUs, added 96GB of 1333 ECC RAM, upgraded to a PERC 700 card with a 1GB cache, and added a second 1100W PSU.

I don't see any additional ports/plugs/sockets to add power leads from, and from the motherboard specs I'm only seeing a max of 35W per PCIe slot. How would I go about adding a server/workstation GPU to my server?

1 Message

June 24th, 2019 13:00

Chris,

I found this useful thread and hope you are still available to answer questions regarding R720 GPU support. I really need your help.
We purchased an NVIDIA RTX 6000 graphics card and installed it in our R720 server. The server has dual 1100W power supplies and three riser cards; we used one of them, and it seemed fine at the beginning. Then, after a couple of days, the front warning light started turning amber and reporting a fatal error on slot 6. Rebooting the system clears the message, but it keeps coming back.

I have noticed that our dual processors are the 115W model; do you think this is what is causing the power issue for the GPU? If so, what is your suggestion to fix it?

Intel Xeon E5-2695v2 2.4GHz 3.0M Cache, 8.0GT/s QPI, Turbo HT 12C, 115W Max Mem 1866MHz

Thank you so much!
Helen

2 Posts

October 7th, 2019 15:00

Hey Helen,

did you manage to fix this problem?

I have a dual E5-2697v2 setup and am trying to run an RTX 2080 GPU on it... the card only draws about 200W, but I'm getting the same amber slot error as you...
You mention slot 6, though; have you tried slot 4? That is the only x16 PCIe slot, and slot 6 is an x8.


I even tried splitting the power to the card across two different riser-card power outlets; that didn't work...
The Dell user manual says the CPU can be up to 115W, but support informed me that it used to be 95W...
Someone in the datacenter I'm in has 95W CPUs; I'll test those this week and let you know.

1 Message

June 12th, 2020 04:00

Any luck getting them operational?

I tried the same here, same problems...

2 Posts

June 14th, 2020 23:00

Couldn't get it to work, so I bought an HP ProLiant Gen8, which also had some issues, but I got that one working.
