
September 16th, 2010 09:00

Adding a new video card to PE T610

I've bought a new nVidia Quadro NVS-290 (PCI-E x1) video card.

It fits in any of the x8 slots.
I've tried it in slot 5 as well as slot 1.
Windows Server 2008 R2 sees the card correctly (driver installed and no errors in Device Manager).

But when I connect a monitor to either of the VGA outputs (there are 2 on the NVS 290), whether before boot or after, I get no signal and the monitor always drops into power-saving mode...

I've seen an option in the Unified Server Configurator to disable the on-board Matrox G200 video card.
Is that the solution?

But I'm a bit afraid to do that, because if the nVidia still doesn't work, I won't have the onboard card any more either (so how would I blindly re-enable the default card?)!

What's your opinion ?

Thanks in advance.

2.5K Posts

March 14th, 2012 01:00

Having dealt with this issue on a T410 server, I would like to point out the following: the T410 and T610 are not designed to use add-on video cards, and the systems are not designed to dissipate more than 15 watts of heat per slot for non-Dell cards (25 watts for Dell storage cards). It is all in the manuals.

2 Intern • 548 Posts

March 14th, 2012 09:00

I am aware of the heat limitation Dell has placed on the PCIe slots, which they have (poorly) documented in the Hardware Owner's Manual and (more clearly) in the Technical Guide.

In the case of the T610, the machine has 5 PCIe slots, 2 of which are x8 and 3 of which are x4. Table 3-1 of the HOM states the priority order of expansion cards, how many can be installed and how much power they can consume. The table footnote states that a maximum of 2 cards whose power exceeds 15W can be installed. Above the table itself there is the statement that no more than 2 of the five slots can have a power consumption greater than 15W (25W max). The HOM needs to be corrected, as it's poorly written and easily misinterpreted. The TG states in section 11 that the system supports 25W max from the first two slots but only 15W max from the remaining 3 slots, due to thermal limitations rather than power limitations.

As for the PCIe specification, it defines 2 classes of slots, normal and graphics (though I can't remember the exact terms), which can feed 25W max and 75W max respectively. So any limitation Dell has placed on the system is an issue of heat dissipation and not one of power delivery per se (just as stated in the TG).

So it then seems that we can have a total of 2x25W + 3x15W = 95W of heat produced before the machine will have any issues with non-force-cooled server cards (as they all are). If you put fan-cooled non-server cards in the system, then you could also draw 25W from each slot => 125W in total (but it's up to you to ensure the heat is effectively removed from the case; Dell doesn't guarantee it).
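
To make that arithmetic concrete, here's a minimal Python sketch of the budget; the per-slot numbers are just the figures quoted above from Dell's documents, nothing is read from the hardware:

    # Per-slot heat limits for the T610, as quoted above from the Technical
    # Guide: slots 1-2 allow 25 W each, slots 3-5 allow 15 W each.
    SLOT_LIMITS_W = [25, 25, 15, 15, 15]

    # Budget with passively cooled server cards (system fans do the cooling).
    passive_budget = sum(SLOT_LIMITS_W)            # 2*25 + 3*15 = 95 W

    # If every card brings its own fan, each slot can deliver the PCIe-spec
    # 25 W, but removing that heat from the case is your problem, not Dell's.
    fan_cooled_budget = 25 * len(SLOT_LIMITS_W)    # 5*25 = 125 W

    print(f"Passive server cards: {passive_budget} W total")
    print(f"Fan-cooled cards:     {fan_cooled_budget} W total")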

Why any power differential would be documented as 25W for Dell storage cards and 15W for non-Dell storage cards is also a mystery. Has Dell coded something in the BIOS to differentiate between Dell and non-Dell cards and take action accordingly? I think (hope) not. It's just lame documentation in the HOM that has the (un?)intended consequence of pushing purchases of their 'certified' storage controller cards and HDDs.

Finally, the PCIe spec defines the initial power-up process for cards. If you insert a graphics card with no PEG connectors attached, the initial power draw limit is 25W from the slot. Then, once system reset is released, the system provides the slot power limit to the card, and if it is higher than the initial power draw, the card can then draw up to this new limit (normally 75W for a graphics slot). In the case of non-graphics slots, the slot power limit is always going to be 25W. So as long as you use low-power cards, you can quite safely have your DirectX 11 and enjoy it.

Where the PCIe spec is not clear is how a graphics card will apportion its operational power draw between the PCIe slot and the PEG connectors should the slot power limit remain at 25W (rather than the 75W of a normal graphics slot). Since there is no guidance here, different high-power cards may not behave as expected and may have issues under heavy use when they can't make up the shortage (75W required - 25W available = 50W shortage) from the PEG connectors. And the graphics card manufacturers will not divulge any information.
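
For anyone who wants the bring-up logic spelled out, here's a purely hypothetical Python model of the sequence described above; the function name and staging are my own illustration, since the real negotiation happens in hardware via the Slot Power Limit message, not in software:

    # Illustrative model of the PCIe power bring-up described above.
    # Assumption: a graphics card with no PEG connectors attached, with the
    # per-stage caps as quoted in this thread.
    INITIAL_GRAPHICS_CAP_W = 25  # allowed draw before reset is released

    def allowed_draw_w(slot_power_limit_w: int, reset_released: bool) -> int:
        """Maximum power a card may draw from the slot at each stage."""
        if not reset_released:
            return min(INITIAL_GRAPHICS_CAP_W, slot_power_limit_w)
        # After reset, the card may draw up to the advertised slot limit.
        return slot_power_limit_w

    # A T610 slot advertises 25 W; a normal graphics slot advertises 75 W.
    for limit in (25, 75):
        print(f"slot limit {limit} W -> card may draw "
              f"{allowed_draw_w(limit, True)} W")

    # The shortage a 75 W card faces in a 25 W slot, which it would have to
    # make up from PEG connectors (if its firmware even allows that):
    print("shortage:", 75 - allowed_draw_w(25, True), "W")  # 50 W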

So if you want better graphics and Dell won't help, take a hacksaw and modify a low-power graphics card, and feel assured that all should be OK after doing your homework.

If you had specific issues with your T410, then do tell.

2.5K Posts

March 14th, 2012 13:00

Very interesting, but beside the point. The technical specifications clearly say you can't do it. Why, I have no idea! Clearly, if you have to take a hacksaw to a plug-in card to make it fit, the card was not designed for the slot.

2 Intern • 548 Posts

March 14th, 2012 20:00

ALoubert, the original poster had an issue getting his nVidia Quadro NVS-290 to work, so we helped to resolve his problem. Foveator also had an issue, and he seemed delighted with the resulting video on his 20" monitor after adding a graphics card. Others weren't so lucky and had more difficulty since Dell, by choice, did not provide a BIOS option that allows the owner to disable on-board video. In these instances, some got their video to work, some did not.

The issues above had nothing to do with mechanical fit.

Like others, I could have used a x1 PCIe graphics card in the T610, in which case it would fit in any of the x8 or x4 slots. But I found reuse of an existing x16 card I had lying around to be an interesting project, and at $0 cost. As some may also be interested in how to modify x16 cards, I felt sharing would be in good spirit. Many would not contemplate such a mod due to their lack of understanding of the technology or the loss of (card) warranty that results. If modding is not your thing, that's fine - don't do it, use a x1 PCIe graphics card.

But the issue with heat and power (which I have mentioned in other posts) is a very minor one and easily solved. People just need to be aware of the technical and warranty issues before they start such a venture. Trying to discourage them from using discrete graphics cards because of a misleading statement in Dell documents adds nothing to the discussion. As for Dell documents being clear, maybe you should read the appropriate parts of the HOM and TG again. Hardly the be-all and end-all of clarity.

And so you understand the heat dissipation limits: server cards do not have fans on them, for reliability reasons. These cards rely on the system fans for their cooling airflow. In servers, these system fans can be optioned as redundant fans. Obviously Dell decided that for the T610, slots 3, 4 & 5 can't get as much airflow as is needed for the PCIe-spec slot limit of 25W, so they downgraded the slot power limit to 15W per slot.

The issue about Dell and non-Dell cards is just rubbish and should be given the contempt it deserves.

If everyone listened to what Dell recommends, we would all be forced to use Dell-certified storage cards and certified drives and pay a huge premium for it - or move to another vendor without such lock-in. Luckily, many, many owners were disgusted at such limitations, and to Dell's credit they listened and released new firmware for some Dell cards allowing non-certified drives to be used. Unfortunately, they didn't fix the issue for all Dell storage cards.

I don't know about you, but I despise forced lock-in and such designed limitations.

The real problem is Dell's designed limitations (x8 slots and missing BIOS options) and seemingly indifferent attitude to their customers. If they really cared, they would have resolved the issue by releasing an updated system BIOS that allowed all these old and crippled servers to be repurposed via an expansion graphics card upgrade (and sold some x1 or newly designed x8 PCIe cards along the way - a lost goodwill opportunity).

If people want to blindly follow what's written, that's their choice, but I would hope people want to see for themselves. And information like this can help with the latter.

9 Legend • 16.3K Posts

March 14th, 2012 21:00

sky ... just one note on what you said briefly about the certified drives ... this is NOT specific to Dell. This is standard practice on high-end storage units (SANs, etc.) from nearly every major player in the industry. The new integrated controller for servers began to follow that same path and was not well received at all. As storage becomes more and more specialized, you are not likely to see this situation improve (from the standpoint of one who is looking for hardware openness).

2 Intern • 548 Posts

March 14th, 2012 21:00

theflash, I know all vendors optimize their controllers and drives, so it's not a Dell-specific issue, rather an industry-wide failure to standardize on an auto-optimization protocol. This will likely never improve, as there are too many incumbents that believe a closed shop is the best way forward (while they look at Apple with envy).

As I said, to Dell's credit they did allow non-certified drives on the PERC 6/i, which is a good thing for everybody, especially those that don't need to fully optimize their disk subsystem due to a different view of the cost/performance balancing act. Now Dell only needs to update the PERC 6/i firmware to allow Dell-certified & non-certified SSDs to be used (if not already usable) :)

So, I can only voice my very strong dislike for any closed systems while hoping for hardware openness (though I fear UEFI, signed BIOSes & bootloaders, along with forced TPM, are the biggest risk to truly open hardware).

Being on a Dell forum, our discussions obviously tend to revolve around Dell even though the same issues may be industry-wide, so I'm not trying to be anti-Dell. As a matter of fact, I like Dell and hope to see them improve, especially in areas where they have some shortcomings.

9 Legend • 16.3K Posts

March 14th, 2012 22:00

Ok ... just wanted to make a note of that for those who don't regularly participate in other relevant discussions :)

Also, it was the new PERC H700 that initially blocked non-certified drives, not the PERC 6/i :)

1 Message

July 24th, 2012 17:00

Considering Hyper-V and the need to assign video cards to virtual machines for performance apps, a video card is important, and it seems we have no real options on a T610 (much less on my other servers, like an R710 and an R910). I want a video card to run CAD on shared virtual machines.

4 Operator • 9.3K Posts

July 25th, 2012 11:00

I'd suggest looking into the newer Dell 12th-generation servers like the T620. These systems offer support for PCIe x16 video cards.

Note that due to Intel's processor and chipset design (for the Xeon E5 series), some PCIe slots in these new systems may only work if 2 CPUs are installed in the server.

The owner's manual (available from support.dell.com/.../index.htm) has a section on installing a GPU card.

2.5K Posts

July 25th, 2012 18:00

Being an owner of both a PWS T7400 workstation and a PE T410 server, there is no way I would try to run graphics-intensive applications on the T410; it was not designed for it. On the other hand, the Precision workstation can support most high-performance graphics cards, at least through the NVidia Quadro 6000.

1 Message

June 19th, 2014 13:00

Skylarking,

Were you able (or did you try) to use this with RemoteFX?

Thanx,

RJD

2 Intern • 548 Posts

June 24th, 2014 01:00

Daniluk, I haven't used RemoteFX.

From the very little I have read, it requires a server OS (though not all server OSes provide the feature) and a client OS (though not all client OSes provide the feature). It also needs a graphics card that can provide DX11 to accelerate the graphics encoding, while your server requires Second Level Address Translation (SLAT) capability, which must be enabled in the BIOS.
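
If you want to check the SLAT part quickly, here's a small sketch assuming a Windows host with Python installed. It queries kernel32's IsProcessorFeaturePresent (feature code 20 is the documented SLAT flag), and it only confirms CPU/OS support, not that the option is enabled in the BIOS:

    # Ask Windows whether Second Level Address Translation is present.
    # Windows-only: uses the Win32 API via ctypes.
    import ctypes

    PF_SECOND_LEVEL_ADDRESS_TRANSLATION = 20  # documented feature code

    slat = ctypes.windll.kernel32.IsProcessorFeaturePresent(
        PF_SECOND_LEVEL_ADDRESS_TRANSLATION
    )
    print("SLAT supported:", bool(slat))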

There are other requirements, but the RemoteFX wiki explains a little, as do this TechNet article and MSDN blog. Google will no doubt find more for you to read.

2 Posts

February 4th, 2018 12:00

Hi all,

I know that this is a very old thread, but I just wanted to let you know that I was able to get a GeForce GT 710 1GB (PCIe x1) running on my PE T610, and now I can play Call of Duty MW at 70 FPS with a GPU overclock (and with the metal slot covers in the back panel removed, hehe).

38 Posts

February 13th, 2018 09:00

Thank you for the update. It's always good to know which cards work successfully in these servers.

1 Message

June 17th, 2018 14:00

Just successfully added an Nvidia GT 710 2GB card to my PowerEdge T610. Thank you for the tips on how to do so. It makes a great workstation that can now drive a larger monitor with decent video performance.

 

 
