August 24th, 2020 18:00

Understanding Thunderbolt docks, GPU bandwidth, and GPU interfaces

I wrote this up mostly for fellow tech geeks, but also for anyone who wants to push the boundaries of what display setups are possible and might find this information useful in achieving a workable setup. The short version: a desired display setup can sometimes be made to work if it's cabled a certain way, even when most other cabling combinations won't work.

(Note: My testing was conducted on non-Dell hardware, but I'm posting here because I believe many of the concepts would be applicable to at least certain Dell hardware, specifically systems with discrete GPUs and Thunderbolt 3 ports.)

I have a system that uses the Intel GPU to run the built-in display and gives the NVIDIA GPU direct control of all external display outputs.  This is somewhat rare even among systems that have NVIDIA GPUs.  On many systems, the Intel GPU retains control of all display outputs, with the NVIDIA GPU working only as a rendering device through NVIDIA Optimus (more on Optimus, the various hybrid GPU designs found in laptops, and their pros and cons in this post).  But for example, some Precision 7000 Series models use Intel for the built-in panel and NVIDIA for external displays if the "Graphics special mode" BIOS option is enabled, the 15-16" MacBook Pros work this way, and the XPS 17 9700 has a BIOS option that allows the NVIDIA GPU to take direct control of ALL outputs, including the built-in display.

But I knew that my system's NVIDIA GPU was, at least in theory, capable of running 4 displays on its own, and therefore that, again at least in theory, it should be possible to run 5 total displays, i.e. the built-in display plus 4 external displays.  Spoiler alert: I succeeded.  But some setups that did NOT work gave me insight into what conditions have to be satisfied to achieve this.

To be clear, there is a level of conjecture/theory here, but I believe my conclusions are correct because they account for both my observed successes and my observed failures.

It appears that in order for an external display to be used, two conditions must be satisfied: 1) A GPU interface must be available to run it, and 2) The interface must have enough available bandwidth for the display being connected. Those sound simple enough, but they bear some unpacking, and both have ramifications for using Thunderbolt docks.

I'll address #2 above first.  DisplayPort MST allows multiple displays to be driven from a single GPU interface that is DisplayPort-based (i.e. an actual DP/MiniDP output, or USB-C/TB3, which use DP for video).  In those situations, the only question is whether the bandwidth requirements of your displays can all fit within the bandwidth available from that shared interface.  Of course it's possible to have a display setup that exceeds the total bandwidth available on an interface, and in that case you've run out of bandwidth -- on that particular interface. This can happen even though there may be other interfaces available that could allow the overall desired display setup to work if cabled differently so that your displays were split across interfaces.
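
To make that concrete, here's a rough back-of-the-envelope check (a sketch in Python with approximate numbers; the blanking overhead and usable link rates are estimates for illustration, not exact values) showing how multiple MST displays either fit or don't fit within one DisplayPort interface:

```python
# Approximate usable bandwidth of one DisplayPort interface (4 lanes,
# after 8b/10b encoding overhead). Estimates for illustration only.
LINK_GBPS = {
    "DP 1.2 / HBR2": 4 * 5.4 * 0.8,   # ~17.3 Gbps
    "DP 1.4 / HBR3": 4 * 8.1 * 0.8,   # ~25.9 Gbps
}

def display_gbps(width, height, hz, bpp=24, blanking=1.07):
    """Rough data rate for one display: active pixels plus ~7% blanking
    overhead (reduced-blanking timings), times refresh, times bits/pixel."""
    return width * height * blanking * hz * bpp / 1e9

DISPLAYS = {
    "1440p60": display_gbps(2560, 1440, 60),   # ~5.7 Gbps
    "1080p60": display_gbps(1920, 1080, 60),   # ~3.2 Gbps
}

def fits_on_one_interface(names, link):
    need = sum(DISPLAYS[n] for n in names)
    have = LINK_GBPS[link]
    verdict = "fits" if need <= have else "does NOT fit"
    print(f"{' + '.join(names)} on {link}: ~{need:.1f} of {have:.1f} Gbps -> {verdict}")

# Two 1440p displays plus a 1080p display share one HBR2 interface comfortably...
fits_on_one_interface(["1440p60", "1440p60", "1080p60"], "DP 1.2 / HBR2")
# ...but four 1440p displays would exceed that single interface's bandwidth.
fits_on_one_interface(["1440p60"] * 4, "DP 1.2 / HBR2")
```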

But it turns out it's also possible to run out of GPU interfaces. Interfaces get assigned to ports, and sometimes multiple ports share a single interface back to the GPU.  As a result, if the GPU's interfaces are already allocated to other display output ports on the chassis, then even if those interfaces have leftover bandwidth, it might not be possible to attach a display to a different port that doesn't currently have a GPU interface assigned to it.  So here again, even though the total bandwidth requirements of all displays would fit within what the GPU could handle, you might have to cable things differently.

So when pushing the limits, you essentially have to strike a balance between a) making sure your displays don't load any single interface beyond the bandwidth it can handle, and b) not distributing your displays across outputs in a way that leaves some of them unable to access an interface at all.
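
Putting both constraints together, here's a toy model of that balancing act (purely illustrative; the port names, interface labels, and bandwidth figures are my own assumptions, not anything queried from real hardware):

```python
# Hypothetical wiring: which port maps to which GPU interface right now,
# and roughly how much usable bandwidth each interface offers (Gbps).
PORT_TO_INTERFACE = {
    "dock DP 1": "interface A",
    "dock DP 2": "interface A",
    "dock downstream TB3": "interface B",
    "system USB-C 2": None,   # no interface currently assigned to this port
}
INTERFACE_GBPS = {"interface A": 17.3, "interface B": 17.3}

def check(plan):
    """plan maps a port to the approximate Gbps its display needs."""
    load = {}
    for port, gbps in plan.items():
        iface = PORT_TO_INTERFACE.get(port)
        if iface is None:
            return f"FAIL: {port} has no GPU interface available"
        load[iface] = load.get(iface, 0.0) + gbps
    for iface, need in load.items():
        if need > INTERFACE_GBPS[iface]:
            return f"FAIL: {iface} needs ~{need:.1f} Gbps but only has {INTERFACE_GBPS[iface]:.1f}"
    return "OK: every display found an interface, and no interface is overloaded"

# Condition (a) violated: too much bandwidth piled onto one interface.
print(check({"dock DP 1": 12.8, "dock DP 2": 12.8}))
# Condition (b) violated: a display on a port with no interface behind it.
print(check({"dock DP 1": 5.7, "dock DP 2": 5.7, "system USB-C 2": 3.2}))
# Both conditions satisfied.
print(check({"dock DP 1": 5.7, "dock DP 2": 5.7, "dock downstream TB3": 3.2}))
```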

So what's the connection to Thunderbolt docks?

Thunderbolt 3 can carry up to two independent GPU interfaces if the GPU has two interfaces wired to the TB3 controller.  And how those interfaces get mapped to displays depends on the design of the Thunderbolt dock, how you've connected your displays to it, and sometimes even the capabilities of the system.  For example, reading between the lines of Dell's documentation for the WD19TB dock, it appears that the "downstream TB3" port gets assigned its own interface, and the remaining display output ports share the remaining interface. (How many HBR2 or HBR3 lanes are available in each "segment" of the dock is another question, and that varies based on whether you're using a DP 1.2/HBR2 system or a DP 1.4/HBR3 system, but that's not directly germane to this discussion.) The dock I was testing had the exact same arrangement: one GPU interface seemingly allocated to the downstream TB3 port, with all other outputs sharing the other.  The takeaway here is that if all of your displays are cabled to outputs other than the downstream TB3 port, the dock overall will only consume one GPU interface. Same if your only display is connected to the downstream TB3 port.  But if your displays are split between those, then your Thunderbolt dock will "claim" two GPU interfaces, regardless of how much total bandwidth is actually required.
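
Here's how I picture that WD19TB-style port grouping (the grouping is inferred from Dell's documentation and my own observations, so treat it as an assumption rather than a published spec):

```python
# Assumed mapping of dock outputs onto the two GPU interfaces that the
# Thunderbolt 3 cable can carry: downstream TB3 gets its own interface,
# everything else shares the other.
DOCK_PORT_GROUPS = {
    "downstream TB3":      "interface 1",
    "DP 1":                "interface 2",
    "DP 2":                "interface 2",
    "HDMI":                "interface 2",
    "USB-C (DP alt mode)": "interface 2",
}

def interfaces_claimed(ports_with_displays):
    """Which GPU interfaces the dock consumes for a given set of ports in use."""
    return {DOCK_PORT_GROUPS[p] for p in ports_with_displays}

print(interfaces_claimed({"DP 1", "DP 2"}))            # one interface claimed
print(interfaces_claimed({"downstream TB3"}))          # one interface claimed
print(interfaces_claimed({"DP 1", "downstream TB3"}))  # both interfaces claimed
```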

This matters for two reasons:

  • Depending on the display setup you want to run, connecting your displays to the dock such that they are split between two interfaces might allow them to work when connecting them all to ports driven by a single interface would not.  For example, when the WD19TB dock is used with a Thunderbolt-capable DP 1.2 system, dual 4K 60 Hz is only possible when one of the displays is connected to the downstream TB3 port, because a single DP 1.2 interface doesn't provide enough bandwidth to run dual 4K 60 Hz, but putting one of them on downstream TB3 splits the bandwidth requirement across two interfaces (worked numbers in the sketch after this list).
  • On at least some systems, it appears that if you cable displays to the dock in a way that causes the dock to "claim" two GPU interfaces, you can LOSE the ability to connect a display to the other USB-C/TB3 ports on the system. For example, your system might have two GPU interfaces feeding a pair of USB-C/TB3 ports.  In a Thunderbolt 3 dock scenario, both of those interfaces might be allocated to the single port the dock is plugged into, and in that case, no interface would be left for the other USB-C port.
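
For the first bullet, the rough arithmetic looks like this (same approximate assumptions as the earlier bandwidth sketch: 8 bpc, reduced-blanking timings, no DSC):

```python
# Approximate data rate of one 4K 60 Hz display and the usable bandwidth
# of one DP 1.2/HBR2 interface (illustrative estimates, not exact figures).
per_4k60  = 3840 * 2160 * 1.07 * 60 * 24 / 1e9   # ~12.8 Gbps per display
hbr2_link = 4 * 5.4 * 0.8                         # ~17.3 Gbps usable per interface

print(f"Both 4K displays on one interface: ~{2 * per_4k60:.1f} vs {hbr2_link:.1f} Gbps -> too much")
print(f"One 4K display per interface:      ~{per_4k60:.1f} vs {hbr2_link:.1f} Gbps -> fine")
```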

So what worked and what didn't?

I was testing with a pair of 1440p displays and a pair of 1080p displays, and also keeping my built-in display active. The 1440p displays were always connected to the DisplayPort outputs of the dock and always active.

When I connected a 1080p display to the downstream TB3 port, it lit up. But when I then tried to connect another 1080p display to a second USB-C port on my system, it would not light up or even be detected -- until I disconnected the display on the downstream TB3 port, at which point it DID light up. This appears to be because using the DP and downstream TB3 ports on the dock together consumes both of the GPU interfaces allocated to the system's USB-C/TB3 ports, leaving none available for the other USB-C port. But if the display on downstream TB3 is disconnected, then the dock only uses one interface, making one available for the other USB-C port.
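
Counting interfaces makes that behavior easier to see. The sketch below assumes the system routes exactly two GPU interfaces to its USB-C/TB3 ports, which is my inference from the observed behavior rather than something I've confirmed in documentation:

```python
# Assumed: the system exposes two GPU interfaces across its USB-C/TB3 ports.
SYSTEM_USBC_INTERFACES = 2

def interfaces_left_for_other_port(dock_ports_with_displays):
    """How many interfaces remain for a display plugged straight into the
    system's other USB-C port, given which dock outputs are in use."""
    uses_tb3_out = "downstream TB3" in dock_ports_with_displays
    uses_other   = any(p != "downstream TB3" for p in dock_ports_with_displays)
    claimed_by_dock = int(uses_tb3_out) + int(uses_other)
    return SYSTEM_USBC_INTERFACES - claimed_by_dock

print(interfaces_left_for_other_port({"DP 1", "DP 2"}))                    # 1 -> second USB-C display works
print(interfaces_left_for_other_port({"DP 1", "DP 2", "downstream TB3"}))  # 0 -> second USB-C display stays dark
```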

NVIDIA Control Panel hinted at this, because in the PhysX/Surround section, the NVIDIA GPU showed as having two USB-C outputs and an HDMI output (more on HDMI in a moment). That diagram was particularly useful because it showed which displays were connected to each output, including differentiating between the USB-C ports. So when I only had my 1440p displays active, they both showed as connected to the same USB-C port. When I added a display on downstream TB3, it showed as connected to the other USB-C port. And when I disconnected the downstream TB3 display and connected a display directly to the system's other USB-C port, that display was shown on the other USB-C port, separate from the one driving my 1440p displays.

It turned out that the HDMI port had a dedicated interface to the GPU -- again, NVIDIA Control Panel helped illustrate this. So the design that finally worked was this:

  • Dual 1440p displays connected to the dock's DP outputs
  • 1080p display connected to the dock's downstream TB3 port or the system's second USB-C port.
  • 1080p display connected to the system's HDMI output

That gave me 5 total displays, without exceeding the bandwidth available on any single interface or connecting any displays to ports that didn't have an interface available.
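
For completeness, here's the final layout expressed against the two conditions (the interface labels are my own shorthand, and the per-display bandwidth figures are the same rough estimates used earlier):

```python
# Final five-display layout: no interface is overloaded, and every display
# sits on a port that has an interface behind it.
final_layout = {
    "USB-C/TB3 interface 1 (dock DP outputs)":         ["1440p60 (~5.7 Gbps)", "1440p60 (~5.7 Gbps)"],
    "USB-C/TB3 interface 2 (downstream TB3 or USB-C)": ["1080p60 (~3.2 Gbps)"],
    "Dedicated HDMI interface":                        ["1080p60 (~3.2 Gbps)"],
    "Intel GPU (internal)":                            ["built-in panel"],
}
for interface, screens in final_layout.items():
    print(f"{interface}: {', '.join(screens)}")
```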
