
March 18th, 2020 07:00

Unable to boot from PCIe NVMe device: What does "Unavailable: OS name" mean?

PowerEdge R630

NVMe M.2 drive in PCIe adapter in Riser 3

All firmware and BIOS up to date as of today (March 18th, 2020)

---

I'm trying to install Ubuntu on my M.2 drive stuck in a Delock PCIe adapter. The drive shows up fine during the Ubuntu installation, and after the OS install it does show up in the Boot Settings menu, but it is listed as "Unavailable: ubuntu".

If I try to boot from it regardless, the boot fails.

I have inspected the drive, since I can chroot into it from Ubuntu recovery mode, and the first partition is indeed an ESP/EFI partition with the UEFI boot files in the correct locations.
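For anyone curious, the inspection was along these lines from the recovery chroot (device names are just examples from my box, adjust to your own layout):

# assuming the NVMe drive shows up as /dev/nvme0n1
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE,MOUNTPOINT /dev/nvme0n1   # first partition should be FAT32 with the ESP partition type
mount /dev/nvme0n1p1 /mnt                                    # mount the ESP
ls /mnt/EFI/ubuntu                                           # shimx64.efi and grubx64.efi should be here
ls /mnt/EFI/BOOT                                             # fallback loader, BOOTX64.EFI
efibootmgr -v                                                # the "ubuntu" NVRAM entry should point at \EFI\ubuntu\shimx64.efi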

Why is my PowerEdge unable to boot from this drive? What are the possible reasons for the BIOS listing it as "Unavailable", and what the heck does this mean? Any workarounds?

I am going to try installing some other OS next and see if the issue persists... I dunno, FreeDOS or some other Linux distro or something, if I can get that to work...

7 Posts

March 21st, 2020 12:00

Alright, thanks for that, Dylan.

What I've concluded, basically:

Any ol' consumer M.2 drive is usable on the PowerEdge R630 platform regardless of adapters or whatever shenanigans you throw at it. You should just not expect to be able to boot from such a setup, nor should you count on being able to use any sort of hardware RAID.

I got the machine to boot the OS on my internal NVMe drive by sticking a SAS HDD in there. I simply did manual partitioning during the Ubuntu install and placed my ESP/EFI partition along with a /boot partition on the SAS drive, and the remainder (root "/" partition and swap) on the NVMe drive, and it all worked just fine.
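Roughly what the manual partitioning ended up as (device names and sizes here are just examples, not exact):

# SAS HDD (example: /dev/sda) holds everything the firmware needs to boot
/dev/sda1        ~512M   FAT32, EFI System Partition, mounted at /boot/efi
/dev/sda2        ~1G     ext4, mounted at /boot
# NVMe drive (example: /dev/nvme0n1) holds the rest of the OS
/dev/nvme0n1p1   rest    ext4, mounted at /
/dev/nvme0n1p2   ~32G    swap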

This setup is wonky as hell though and gives me zero redundancy. I was really just curious to see if I could see any major performance benefit by running the OS off an NVMe drive. I do not see - as of yet - any major benefit from doing this on this hardware. So what I've decided to do is get my hands on some sturdy SATA/SAS SSDs, stick a few of those in my empty bays, use that for my OS, and be done with it.

The 4x NVMe drive bays I have with the NVMe enablement kit I'll use as the storage backend for Linux containers, using ZFS (striped + mirrored). That seems to work nicely.
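For the curious, the pool is just a stripe of two mirrors across the four front-bay drives, created more or less like this (the pool name and the by-id paths are placeholders, not my exact ones):

zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B \
  mirror /dev/disk/by-id/nvme-DRIVE_C /dev/disk/by-id/nvme-DRIVE_D
zpool status tank   # should show two mirror vdevs striped together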

However, I'm seeing quite low performance numbers on my NVMe drives, all things considered. I may just need to tweak some things and hone in on my fio commands, but on the surface it seems like the system in general is not able to utilize the full performance of my NVMe drives (I'm getting only about 50% of rated IOPS/bandwidth in most tests). That may be due to the fundamental architecture of the system rather than the drives or my adapters - this is all "last generation" hardware, after all. I may update this post if I figure all this out at some point in the near term.
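For reference, the sort of fio runs I've been using to gauge things - the parameters are just what I happened to test with, and the test file path is a placeholder:

# 4k random read IOPS
fio --name=randread --filename=/path/to/testfile --size=10G \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting

# 1M sequential read bandwidth
fio --name=seqread --filename=/path/to/testfile --size=10G \
    --rw=read --bs=1M --iodepth=16 --numjobs=1 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting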

In any case, thank you for your feedback here, Dylan. Much appreciated.

2.9K Posts

March 18th, 2020 12:00

Hi Arnij,

 

I suspect it's trying to say that it can't find the boot manager on the drive, but knows it should find something.

 

I don't imagine this is going to be something that an alternate Linux release would fix, but don't let that discourage you from trying. I'd still *ABSOLUTELY* try it and see what happens. I'm more inclined to think this may be related to firmware. How does the drive show up in the BIOS and in the iDRAC? I checked and it doesn't look like we had any similar storage approaches (like the BOSS S-1 for the 14th generation), and that somewhat reinforces that idea to me. But, since you confirmed your firmware is up to date, I wouldn't have a recommendation for firmware.

 

As for the hardware you added in, is it something similar to this, with an M.2 installed?
https://dell.to/2xN455b

 

Finally, to answer your question about a workaround - there isn't one that I can think of. The R630 does have NVMe support, but it requires an NVMe backplane to take advantage of it.

7 Posts

March 18th, 2020 13:00

Hi Dylan!

Thank you so much for your reply. Much appreciated. Ok, so this is what I've tried so far:

- I've tried installing a variety of Linux distros, including Arch Linux, as that can be set up to use systemd-boot instead of GRUB: no joy (I don't know what I was talking about with regards to FreeDOS earlier; that of course doesn't support UEFI, so yeah)

- I've tried various ways of monkeying around with the bootloader, but to no avail. The current partition layout and bootloader setup should be 100% kosher, but the server simply refuses to boot from the drive.
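(For what it's worth, the monkeying mostly amounted to repeating the usual suspects from a live USB chroot, roughly like this - nothing exotic:)

# with the installed root mounted and chrooted into, ESP at /boot/efi
efibootmgr -v                               # check the "ubuntu" NVRAM entry and which disk it points at
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
update-grub                                 # regenerate grub.cfg
ls /boot/efi/EFI/ubuntu                     # shimx64.efi / grubx64.efi present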

The PCIe adapter I have is even more barebones than the one you linked to, but essentially the same:

https://www.av-cables.dk/m-2-pci-express-kort/delock-pci-express-x16-x4-x8-kort-til-1-x-nvme-m-2.html

We actually do have the NVMe enablement kit installed and the appropriate backplane. The concept of this server was that we want to utilize all 4 U.2 slots with consumer drives (via an M.2 to U.2 adapter) in order to get very fast, redundant, hot-pluggable I/O at a low cost. Now, this part of the plan seems to work: when I do have an OS up and running off a USB stick, I can read/write to the drives plugged into the front just fine, despite them being in a bit of a Frankenstein-ish setup (for those who are curious, we used a StarTech M.2 to U.2 adapter, part #U2M2E125, and that seems to totally work with our cheap-ish PNY XLR8 CS3030 M.2 drives).
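"Works fine" here just means the drives show up and pass basic sanity checks when booted from the USB stick, roughly along these lines (nvme-cli and smartmontools assumed installed; device names are examples):

nvme list                            # all four front-bay drives listed, e.g. /dev/nvme0n1 .. /dev/nvme3n1
lsblk -d -o NAME,MODEL,SIZE,TRAN     # models should show up as the PNY drives
smartctl -a /dev/nvme0               # health and error log for one of them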

Anyway, the last thing I tried just now was to remove the internal PCIe adapter and M.2 drive and install the OS to a drive plugged into the NVMe backplane instead, with identical results.

Now, this absolutely indicates to me that the M.2 SSD itself is the core of the issue here, and that the server does not like booting from this consumer-grade SSD - I am fairly certain the server would have booted fine if I had plugged in a "real" enterprise-grade U.2 drive. We've seen something similar when we tried to plug consumer-grade SATA SSDs into our R620 systems, which also made the server complain, and we had to upgrade to enterprise-grade drives in order to get things working.

As an aside: in the hardware inventory the PCIe SSD is listed, but under Storage in iDRAC no drive is listed as attached to the backplane.

So, my options are as far as I see them:

1. Try to get hold of an "enterprise grade" M.2 drive and see if that works: How do I know what drives might be compatible / work? I have not been able to find a list of any sort anywhere.

2. Just buy a good ol' enterprise SATA drive to hold the OS and boot from that - will work for sure, but I'd really love to have NVMe throughout on this system.

3. Stick an enterprise-grade U.2 drive into the NVMe backplane, but that would take up one of our slots and we would not be able to do the RAID we want (or would have to settle for a 2-drive instead of the 4-drive RAID setup we had in mind).

4. Do some hodge-podge where I stick in some compatible SATA drive to hold the bootloader and then have that spin up the OS on the NVMe (sounds cumbersome...)

5. DELL engineers, due to extreme boredom in these Corona-ridden times decide to do me a personal favor and patch the firmware in order to support my consumer-grade SSD drives (haha)

So, Dylan - any pointers here? If you could point me to an M.2 drive which you know should in theory work, then that would be awesome. I have heard from a friend of a friend that they used an EVO drive with great success... But that's really just hearsay

Thanks again

 

2.9K Posts

March 20th, 2020 13:00

Well, with all that in mind, there's not really an "official" recommendation I could make for you. I can say that I've heard the same thing about Samsung Evo drives. I would caution you that I saw a number of problems with SATA Evo drives showing up properly, so it could easily fall into "your mileage may vary" territory. I checked the other day, and there weren't any validated M.2 drives (enterprise grade or not) I could recommend to you. I don't know whether the Samsung Pro drives would fare any more reliably. Either could be worth trying, but I've heard mixed stories both ways.

7 Posts

July 7th, 2020 11:00

Much belated update here: it turns out the NVMe drives I was using were not as good as I thought. With some high-quality drives in there, performance was much better and exactly as expected/rated. So the system can indeed utilize PCIe 3.0 drives to their fullest!

1 Message

February 6th, 2021 10:00

I don't know if this helps, but they have UltraDIMMs now that are like SSDs that plug into DIMM slots. Newer servers support them. I would love to try one some day. Right now I'm just using an old Dell server as a workstation, running a SATA SSD directly from a SATA port on the motherboard. The RAID card and SAS drives were removed.

System Info:

Machine:  Dell PowerEdge R410
Operating System: Ubuntu 20.04.1 LTS x86_64
Kernel: 5.4.0-60-generic
Desktop Environment: MATE
Window Manager: Compiz
CPU: 2x Intel Xeon X5570 (4 cores each) @ 3.10GHz
GPU: NVIDIA GeForce GT 730
NVIDIA Driver Version 460.32.03
RAM: 32 GB - must use low-voltage RAM or the server will crash (PC3L, not PC3)
Internet connection: Ethernet Connection to Wireless Nano Station
Other Points:
-UEFI Boot setting in BIOS
-You must disable the onboard graphics chip in the BIOS to make the graphics card work
-Upgrading to higher-frequency CPUs is only limited by the wattage supported by the motherboard (for the Dell R410 it is 95 watts)
-OS installation using a bootable USB drive.
-Hyper-threading turned off in BIOS (Logical Cores setting turned Off) (because hyper-threading is just hype)
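(If you want to double-check from within Linux that hyper-threading really is off, something like this works:)

lscpu | grep -i 'thread(s) per core'      # should report 1 when hyper-threading is off
cat /sys/devices/system/cpu/smt/control   # "off" or "notsupported" when SMT is disabled in BIOS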
