theflash1932
October 30th, 2012 16:00
So, out of curiosity, if you knew you were going to be doing this prior to ordering the server, why did you order an xd and not just order the one on which it is supported?
Also, tech support (regardless of the level) is not always privy to the design details, and may - or may not even - have the same level of tech experience as any of us, and so are left to guess, 'well, I don't see why not', if engineering did not make such details readily available to them (which they often do not do).
Another thought ... take the 2950, for example ... the 2950 I and the 2950 III use the same BIOS code, but only the 2950 III will support 5400-series processors and 8GB DIMM's, so it is something hard-wired in the board, and not even a function of the BIOS. We obviously don't know if this is the case, but another possibility as to why one does and one does not support GPU's.
DELL-Chris H
October 30th, 2012 16:00
Rjtmerrett,
For GPGPU use, the ATI FirePro V4900 will not work. The best bets (still not supported by Dell) are:
nVidia M2070Q
nVidia M2075
nVidia M2090
Once a working card is installed, the BIOS option to disable the embedded video is no longer greyed out.
The only ATI card I see that would work is the 7800P, which doesn't support GPGPU; it only supports RemoteFX.
rjtmerrett
October 30th, 2012 17:00
Agreed, it's quite possible that the tech guys involved did not know something about a hard-coded restriction. However, there were many debates with various people so I was led to believe that there was nothing actually preventing it. I'm not saying the information I was given was 100% accurate, just that's what I was told in a series of conversations leading up to my purchase.
The primary reason for going with the R720xd was disk space... there's quite a difference between 16 2.5" drives and 25 2.5" drives, so I opted for the higher disk capacity. I also prefer the look of the xd version, with a full set of drive bays at the front.
My primary reason for purchase was to run a number of VMs on the physical server, and also for backup storage. At the moment I have eight SSDs running the OS and VM disks (RAID 1 for the OS, plus one RAID 1 and one RAID 10 set for VMs), and then two eight-drive RAID sets, one RAID 50 and one RAID 6, for backup and low-performance storage. Finally, I have a two-drive RAID 0 in the flex bays for general temporary files where I need mid-range performance. Having a GPU installed for RemoteFX in the VMs was a nice-to-have if I could fit it into the 2U chassis (no other server I found fits GPUs into 2U of rackspace, so the Dell was a good option). The other consideration is that with the 'supported' R720 GPU solution, Dell only supports processors up to 115W TDP, whereas I have E5-2690s installed!
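As an aside, here is a quick sketch of the usable capacity those RAID levels give. The 900 GB drive size, span count, and the helper function itself are illustrative assumptions, not details from the build above:

```python
# Rough usable-capacity arithmetic for the RAID levels described above.
# The 900 GB drive size and the span layout are illustrative assumptions.

def usable(level: str, drives: int, size_gb: int, spans: int = 1) -> int:
    """Return approximate usable capacity in GB for common RAID levels."""
    if level == "RAID0":
        return drives * size_gb             # striping only, no redundancy
    if level == "RAID1":
        return size_gb                      # simple two-drive mirror
    if level == "RAID10":
        return (drives // 2) * size_gb      # half the drives hold mirror copies
    if level == "RAID50":
        per_span = drives // spans
        return spans * (per_span - 1) * size_gb  # one parity drive per span
    if level == "RAID6":
        return (drives - 2) * size_gb       # two drives' worth of parity
    raise ValueError(level)

# The 8-drive RAID 50 (as two 4-drive spans) vs. the 8-drive RAID 6:
print(usable("RAID50", 8, 900, spans=2))  # 5400
print(usable("RAID6", 8, 900))            # 5400
```

Notably, with eight drives the RAID 50 and RAID 6 sets come out at the same usable capacity; the difference is in rebuild behaviour and which combinations of failures each can survive.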
I think it's quite possible I am asking too much of a fully loaded R720xd plus two E5-2690 processors plus a Tesla GPU, all in 2U... however, I am slightly annoyed that it all works fine except when running Windows Server 2012 with the Tesla card, which is when I get blue screen crashes. I would almost have been happier if the card had simply refused to work when I plugged it in; getting this far and then hitting occasional crashes is mildly annoying :-)
Worst-case scenario for me is that it doesn't all work together and I end up with my standard R720xd fully loaded but with no GPU. In that configuration, Windows Server 2012 can still provide RemoteFX with a software-emulated GPU; whilst not ideal, I am not running a tier 1 datacentre, and this is really a testing rig for a load of test VMs, so my E5-2690 processors should be able to handle the load.
You never know, I could always order an R720 with a single GPU and then move my existing GPU over, giving me two GPUs in a fully supported R720 machine, but then I've used 4U of rackspace, which is exactly what I was trying to avoid.
theflash1932
October 30th, 2012 17:00
Agreed ... the xd looks pretty mean :)
Good luck.
DELL-Chris H
October 30th, 2012 17:00
Sorry, I wasn't 100% clear on what you meant until now.
If you are doing this for GPGPU use, then you won't connect to it as a video card; you will continue using the embedded video for display and use the nVidia cards for the compute work.
If your goal is to use it as a video card, and not for GPGPU, then you would need to try the ATI 7800P.
rjtmerrett
October 30th, 2012 17:00
Hi Chris,
My goal is actually to use it as a RemoteFX card for my VMs, hence why I am using the M2070Q and not the M2075 or M2090, which are geared more towards GPGPU.
However, since I am getting blue screen crashes, I have tried switching it to GPGPU mode (nVidia TCC mode) just to see if that resolves the issue, but it does not and I get the same error.
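For anyone following the same path: on Windows, `nvidia-smi -q` reports the current and pending driver model (TCC vs. WDDM). A minimal sketch of pulling those two values out of the query output — the sample text below is an illustrative assumption of the format, not a capture from the machine in question:

```python
# Parse the "Driver Model" section of `nvidia-smi -q` output to see
# whether a GPU is in TCC or WDDM mode. SAMPLE is illustrative only.

SAMPLE = """\
Driver Model
    Current                           : TCC
    Pending                           : WDDM
"""

def driver_model(nvidia_smi_q: str) -> dict:
    """Return {'Current': ..., 'Pending': ...} from nvidia-smi -q text."""
    modes = {}
    for line in nvidia_smi_q.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key in ("Current", "Pending"):
            modes[key] = value.strip()
    return modes

print(driver_model(SAMPLE))  # {'Current': 'TCC', 'Pending': 'WDDM'}
```

A mode change requested with `nvidia-smi -dm` only takes effect after a reboot, which is why the 'Pending' value can differ from 'Current'.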
I am not worried about connecting a physical monitor to the card, so as far as I can see I have no need to disable the onboard graphics.
The issue I am facing at the moment is that when running Windows Server 2012 with the nVidia Tesla drivers, I get blue screen crashes. nVidia have suggested this is due to a conflict between the Matrox graphics and the Tesla card under Server 2012. However, my argument is that even if I were running an R720 rather than an R720xd, the onboard graphics would still need to be enabled to plug a monitor into. Therefore, as far as I can see, either the nVidia driver or the Dell BIOS is causing a conflict.
I believe that this conflict will also occur on the R720 when using Windows Server 2012 with nVidia driver 306.79 (certified for Windows Server 2012) but unfortunately I do not have a R720 to try it on as I only have an R720xd. I am told that Dell do not have any known issues in this area, but since both R720 and R720xd use the same Matrox embedded graphics, it seems possible that the same issue could occur for both, if the true cause is indeed a conflict between the two.
Of course, it could also be a number of other things, such as a BIOS issue with Windows Server 2012 in general, or some other device conflict with the new OS. However, I would have expected that to come up in Dell's testing...
Can you let me know what you think? I am interested in your opinion on this. Assuming R720 does leave embedded graphics enabled even with Tesla cards installed, then there does seem to be an issue here somewhere.
rjtmerrett
October 30th, 2012 17:00
The other thing, of course, is that even when using an R720 with a Tesla GPU, it would not be possible to disable the onboard graphics, because you would need them enabled to plug a monitor in (the Tesla has no monitor connection). By all accounts the integrated Matrox graphics in the R720xd are the same as in the R720 and would have to stay enabled in the R720 anyway, so the two must be able to work together.
Interestingly, on the Dell website, even under the R720, only driver version 276.14 is available for the Tesla, which nVidia say is not compatible with Windows Server 2012. From the nVidia site, we can download 306.79, which does support Server 2012, but it has not been officially verified by Dell.
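With two driver versions in play (Dell's 276.14 vs. nVidia's 306.79), a quick sanity check is to compare them numerically rather than as strings. The 306.79 threshold below comes from the discussion above; the helper itself is just an illustrative sketch:

```python
# Compare nVidia driver version strings numerically rather than as text,
# so e.g. "306.79" correctly sorts above "276.14".

def version_tuple(v: str) -> tuple:
    """Split '306.79' into (306, 79) for numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def supports_server_2012(installed: str, minimum: str = "306.79") -> bool:
    """True if `installed` is at or above the Server-2012-certified build."""
    return version_tuple(installed) >= version_tuple(minimum)

print(supports_server_2012("276.14"))  # False
print(supports_server_2012("306.79"))  # True
```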
So, my belief at the moment (hopeful, I know) is that there is actually a conflict between the Windows Server 2012 drivers (WDDM 1.2, as opposed to WDDM 1.1 in Server 2008) and the Dell BIOS/firmware. I am guessing that, due to the low number of customers using this configuration, it just hasn't been flagged yet. I am therefore hoping a BIOS update will soon be made available to resolve this on the R720, which would then potentially resolve the problem on the R720xd as well.
As I say, this is hopefulness kicking in, but it seems perfectly reasonable at this point based on the testing I have done. It's just a shame I don't have an R720 to test my theory on; if I could prove it didn't work on the R720, I'd have a case to pass over to Dell...
Ibrahim alameri
March 13th, 2023 03:00
Hello dear,
How would I disable the port in the BIOS to enable my GPU, please?
Thank you
DELL-Chris H
March 13th, 2023 05:00
Ibrahim alameri,
Would you clarify your intentions: is it to use the GPU as a video card? That isn't supported on the R720, since the GPU can only be used for additional processing power. So if you are trying to disable the onboard video, that isn't supported.
Let me know.
atafm2
March 13th, 2023 09:00
Go into the BIOS and then, I think, Miscellaneous Settings.
You will have to dig around a bit.
You are looking for 'Enable Integrated Video Controller'.
You need to set that to Disabled.