The problem with using that is the cost of electricity, so I don't think that would be profitable.
You could look into an ASIC miner if you are interested in mining. Some people run dozens of these
in what are called mining farms. There is also what's called cloud mining.
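To make the electricity point concrete, here's a back-of-the-envelope sketch in Python. Every number in it (the payout per MH/s, the power draw, the electricity rate) is a made-up placeholder for illustration, not a real quote:

```python
def daily_profit_usd(hashrate_mhs, usd_per_mhs_per_day, power_w, usd_per_kwh):
    """Rough daily mining profit: revenue minus electricity cost."""
    revenue = hashrate_mhs * usd_per_mhs_per_day
    electricity = power_w / 1000 * 24 * usd_per_kwh  # kWh per day times price
    return revenue - electricity

# Hypothetical numbers: 70 MH/s card, $0.04 per MH/s per day payout,
# 320 W at the wall, $0.40/kWh residential electricity.
print(round(daily_profit_usd(70, 0.04, 320, 0.40), 2))
```

With expensive residential electricity the card can run at a loss before you even count the wear on a GPU worth well over a thousand dollars.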
I got the R11 with an RTX 3080 in January. It was running fine for a couple of weeks, but then I started to mine ETH on it. After a few days the graphics card started to fail, and now I can't play or keep it on for more than 30 minutes before it crashes. It crashes so badly that a simple restart is not enough; I have to unplug the computer from the mains for about 5 minutes and start it again.
The bottom line is that, back when I could still mine, the hash rate was quite high for the first 5-10 minutes, around 65-70 MH/s, then for some reason it dropped to around 40 MH/s, and nothing I did could improve it. I later heard that the VRAM may be getting too hot, causing the GPU to throttle down. I'm still trying to gather more information, as I will receive the replacement for the broken computer in a month and would like to try again.
I would not do it. From my understanding, the methods used to mine cryptocurrency on video cards cause the memory cells to get really hot. You can read out the memory cell temperature with the latest version of a program called HWiNFO; the reading is called GPU Memory Junction Temperature.
When those memory cells get too hot, they downclock to a lower frequency.
That in turn is supposed to prevent permanent damage to the memory cells.
Considering there are several posts on this board about failed RTX 3080 cards, I am presuming they might have failed because the memory reached very high temperatures, causing permanent failure. This is just an assumption, not a fact; only Dell would know what actually fails on these cards. But obviously some component failure is happening.
The memory used on the RTX 3080 and RTX 3090 is GDDR6X from the same manufacturer (Micron). Per manufacturer specs, that memory can run at 20 Gbps, but Nvidia runs it at 19 Gbps.
There's a lot of discussion about the memory temperatures on various boards, and the general consensus seems to be that Nvidia had a reason to clock it at only 19 Gbps rather than 20 Gbps: they likely noticed the high temperatures associated with the higher clock rate and decided to play it safe.
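For what that 1 Gbps difference actually buys you, the bandwidth arithmetic is simple (the 320-bit bus figure below is for the 10 GB RTX 3080):

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Memory bandwidth: per-pin data rate times bus width, 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# 10 GB RTX 3080: 320-bit memory bus.
print(bandwidth_gb_s(19, 320))  # 760.0 GB/s at Nvidia's 19 Gbps
print(bandwidth_gb_s(20, 320))  # 800.0 GB/s at the 20 Gbps rated speed
```

So the headroom people chase with a memory overclock is roughly a 5% bandwidth bump, against the thermal risk described below.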
However, it's very easy to overclock the memory on this generation of video cards; it's just a slider away in Afterburner. It's also more forgiving than previous generations, because GDDR6X can detect failed transfers and retry them (often loosely described as ECC). You can therefore push the clocks higher without noticing artifacts on your screen, because the memory quietly recovers from the errors caused by pushing it too far.
Of course, the downside is that if you push it too hard you will not notice anything until it is too late and you have permanently damaged the memory cells.
So my advice for the 3000-series cards is not to overclock the memory, and to be very careful if you do run it overclocked. I would also strongly suggest that if you run games that heavily tax your card, you run HWiNFO in the background and keep track of the memory junction temperature. HWiNFO keeps recording and provides min/max and average values.
I have seen mine go up to 92 Celsius. I have not experienced any issues with my RTX 3080, but searching online shows that crypto mining on these cards pushes temperatures up to 110 Celsius.
My advice is to keep an eye on it and downclock your memory if you get into the upper 90s Celsius, to avoid reaching 100 Celsius. 110 Celsius is the point at which the memory automatically downclocks to protect itself from permanent damage.
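If you'd rather script the watching than stare at HWiNFO, here's a rough sketch. Two caveats: `nvidia-smi` only exposes the core GPU temperature, not the GDDR6X junction temperature (for that reading you still need HWiNFO on Windows), and the 95/110 thresholds are just my reading of the numbers in this thread, not official limits:

```python
import shutil
import subprocess
import time

# Thresholds taken from the advice above (assumptions, not Nvidia specs):
# upper 90s C -> time to downclock; 110 C is where the memory self-throttles.
WARN_C = 95
CRITICAL_C = 110

def advice(temp_c):
    """Map a temperature reading to a suggested action."""
    if temp_c >= CRITICAL_C:
        return "throttling: card is protecting itself, stop the workload"
    if temp_c >= WARN_C:
        return "downclock the memory now"
    return "ok"

def read_core_temp_c():
    """Poll the GPU *core* temperature via nvidia-smi as a rough proxy.
    nvidia-smi does not report the memory junction temperature."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

if __name__ == "__main__" and shutil.which("nvidia-smi"):
    for _ in range(5):
        t = read_core_temp_c()
        print(f"{t} C -> {advice(t)}")
        time.sleep(2)
```

The core temperature usually sits well below the memory junction temperature, so treat this as an early-warning proxy only.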
I should add that the overall consensus seems to be that Nvidia considers up to 100 Celsius acceptable for these modules, with no permanent damage expected. Personally I find that a bit too high for my liking; the low 90s is where I draw the line.
You can read more here: https://www.tomshardware.com/news/hwinfo64-adds-gddr6x-temp-monitoring-rtx30series
I had a refurbished R10 Ryzen Edition with the 3080 in it, and it just didn't like to mine at all at any setting (stock, underclocked, or overclocked); it kept crashing after a few minutes. I'm exchanging it for another one to see if all Dell 3080s are like that.
And even when it was mining, it would run at 85 MH/s. Anyone else have the same problem, or what speed are you guys mining at?
There's a "guide" on Tom's hardware on how to configure these cards for mining.
Personally I would not bother, since in the long run it will more than likely damage the card.
The card is worth far more than any profit you could possibly make mining with it.
Anyone thinking about mining should check out the latest video from Gamer Heaven. He did a repaste and new thermal pads on the 3080 in his R11, and the combination of the two dropped his VRAM temperature by 30 C while mining.