2 Intern


2.4K Posts

January 11th, 2011 22:00

That's weird, because mine hits 93 all the time and I never shut down or have any issues with them. It sounds like you have either a PSU issue, such as it's wired wrong, or a bad GPU.

8 Wizard


17.4K Posts

January 11th, 2011 22:00

Try running 2 different (colored) video card power cables to the card and see if that helps (might be running out of power).

2 Posts

January 11th, 2011 22:00

I'm using a Cooler Master Silent Pro 1000W PSU. I had the same problem with the original 875W PSU as well, so I don't think it's power. If I run the fans with manual control at 75% or higher, it makes it through the test fine, other than the errors it has. With unstable clocks at stock settings there's no way it can overclock, especially with a 93C temp threshold.

8 Wizard


17.4K Posts

January 11th, 2011 23:00

Single rail I see.

Well, I thought if the machine runs out of video card power, it shuts down (turns off). If it hits TJMax, it just throttles back (and stays on). But with 2 power supplies doing the same thing ... well, it's likely something else.

Be sure the system is not OC'd and everything else is getting plenty of power. Too bad you don't have another nVidia 4xx card you can drop in there and push hard.

34 Posts

January 12th, 2011 05:00

Tesla / Morblore.

I'm experiencing the same problem as Bryce; however, my rig (2 months old) config is similar to morblore's (except the 120mm fans and 1 less Blu-ray, but with 3 x Alienware 23" monitors). Running certain games (Crysis) pushes one of the GTX-480 cards to the 91/92C mark, then black screen of death. I have noticed that the top card in the Area-51 chassis is hotter by 9C compared to the card below, and it's been assumed this is because of the heat generated by the GPU below (using MSI Afterburner to confirm). Note I have not changed any manufacturer components or even opened the chassis at this stage.

Is this normal?

Should the PCI fan % be increased (Auto shows 1%)?

Is this GPU bad?

Your thoughts appreciated…

34 Posts

January 12th, 2011 07:00

I checked this link and it appears the power cables to both GTX-480s are also in question (1 power cable per GPU). What's the correct way, Dell? One for DELL-Chris M:

http://en.community.dell.com/owners-club/alienware/f/3746/t/19360717.aspx

Community Manager


56.9K Posts

January 12th, 2011 08:00

It sounds like this is correct based on the fact that it fixed his issue.

The top video card will be powered by left P24 - yellow/black cable and right P19 - blue/yellow/black cable

The bottom video card will be powered by left P21 - black/white cable and right P18 - blue/yellow/black cable

8 Wizard


17.4K Posts

January 12th, 2011 11:00

It sounds like this is correct based on the fact that it fixed his issue.

The top video card will be powered by left P24 - yellow/black cable and right P19 - blue/yellow/black cable

The bottom video card will be powered by left P21 - black/white cable and right P18 - blue/yellow/black cable

Yes, try that because it worked for that user.

However, no one really knows because Dell won't release the pinouts and 12v rail assignments on the various video card power cables. It's all trial and error.

Basically, find all the video power cables tucked away in the case. Connect as many different ones as possible to the 2 connectors on both cards. If the end of a cable has an 8-pin and a 6-pin, only use one or the other. This should engage as many different rails as possible (instead of them lying in the case unused).

8 Wizard


17.4K Posts

January 12th, 2011 11:00

Tesla / Morblore.

I'm experiencing the same problem as Bryce; however, my rig (2 months old) config is similar to morblore's (except the 120mm fans and 1 less Blu-ray, but with 3 x Alienware 23" monitors). Running certain games (Crysis) pushes one of the GTX-480 cards to the 91/92C mark, then black screen of death. I have noticed that the top card in the Area-51 chassis is hotter by 9C compared to the card below, and it's been assumed this is because of the heat generated by the GPU below (using MSI Afterburner to confirm). Note I have not changed any manufacturer components or even opened the chassis at this stage.

Is this normal?

Should the PCI fan % be increased (Auto shows 1%)?

Is this GPU bad?

Your thoughts appreciated…

Well, for starters, open it and make sure all the fans are even turning.

I never had much luck with the Command Center (Thermal Controller) HDD or PCI-E fan on the Auto setting. Set to Manual, then Curve, then Save/Apply.

As for the video cards themselves, leaving the fans on Auto in the normal nVidia driver control panel should have worked. Using MSI Afterburner might have changed this default operation or made it worse.
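The "Manual, then Curve" profile described above boils down to mapping GPU temperature to a fan duty by interpolating between a few curve points. A minimal sketch (the point values below are purely illustrative, not Dell's or Afterburner's defaults):

```python
# Hypothetical fan curve: (temperature C, fan duty %) points.
# Values are illustrative only, not recommended settings.
CURVE = [(30, 30), (60, 50), (80, 75), (90, 100)]

def fan_percent(temp_c, curve=CURVE):
    """Return a fan duty (%) for a GPU temperature (C) by linear
    interpolation between curve points, clamped at both ends."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, f0), (t1, f1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
```

For example, with the curve above a card sitting at 70C would run its fan at about 62%, while anything past 90C pins the fan at 100%.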

34 Posts

January 13th, 2011 17:00

Hi Tesla / DELL-Chris M

Thanks for your feedback, most appreciated guys.

To confirm, my out-of-the-factory cabling was slightly different. P21 was NOT originally plugged into the GPUs; in fact, I ended up using it as the third cable where P19/P18 are positioned in the link above. P19/P18 were in fact in use.

IT WORKS !!!

Testing with certain games and benchmarks confirmed no power outage; obviously the watts are available for the GPUs now. The temperature at worst hovers around the 91C mark and basically stays there with no blackouts. I also use MSI Afterburner to manage the GPU fans. The nVidia control panel is not great for this.

As a side note: I'm on the 263.11 drivers, the latest Dell-certified version. I realise that nVidia has a beta 266.35 version out, but I'll assess this once it's out of beta.

I will take on your suggestion, Tesla, on the PCI fan. I guess it's one of those "trial and error" approaches to what works best depending on what application or game is running. A liquid nitrogen system would be good.

DELL-Chris M, I believe this is too important not to raise and to sticky somewhere. I would imagine that disk drive failures or corruption of data could easily occur with these blackouts. I had a corruption to the Crysis game during a blackout and needed to remove the //documents/Crysis folder. Fortunately it was only game data.

Cheers guys.

34 Posts

January 13th, 2011 22:00

Hi Tesla

As a side note: I found that keeping the fans in pace with the temp keeps the GPU temp down by 2 - 4 degrees C when stressing the GPUs. MSI's GUI is better designed for this than the NVIDIA control panel (personal opinion). Basically, like the Command Centre manual PCI setting you mentioned, I'm applying the same method with MSI for the GPU fans.

In terms of Command Centre, the GPU temp does rise (4C on the top GPU) with CC on Auto when the GPU is not stressed. Worth another sticky on a recommended config setting.
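"Keeping the fans in pace with the temp" is essentially a small proportional control loop. A rough sketch of the idea (this is not MSI Afterburner's actual algorithm; all names and constants below are mine and purely illustrative):

```python
# Sketch of proportional fan pacing: duty rises with how far the GPU
# sits above a base temperature, clamped between a floor and ceiling.
# On a real system each reading could come from, e.g.:
#   nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
# Constants are illustrative, not recommended values.
def pace_fan(temp_readings, gain=2.0, floor=40, ceiling=100, base_temp=50):
    """Map each temperature reading (C) to a clamped fan duty (%)."""
    duties = []
    for t in temp_readings:
        duty = floor + gain * max(0, t - base_temp)
        duties.append(min(ceiling, max(floor, duty)))
    return duties
```

With these illustrative constants, an idle card at 45C stays at the 40% floor, while a card stressed to 91C is driven to the 100% ceiling.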

Thanks for the advice.
