
April 28th, 2017 12:00

Alienware Aurora R6 Booting Linux on PCIe M.2

Has anyone been successful booting Linux (Ubuntu, Debian, Fedora)? I'm running into problems and wondering if anyone has had any success.

9 Legend • 47K Posts

April 29th, 2017 16:00

Since you can live boot from USB or DVD, etc., I don't see this as an issue.

I have seen long waits from boot to a black screen and then video on Ubuntu 16.04.2.

If you aren't using Ubuntu or Red Hat 7, there may be issues with Secure Boot ON.

If Secure Boot OFF with CSM Legacy ON does not work, then there are likely other issues.

You have to build a special kernel for NVMe:

http://www.intel.com/content/dam/support/us/en/documents/ssdc/data-center-ssds/Intel_Linux_NVMe_Guide_330602-002.pdf 

Pick a starting distribution; it does not matter from the driver's perspective which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or has the tools required. The NVMe driver in kernel 3.19 integrates with new features in a way that makes it more serviceable and debug-capable. We recommend this kernel if you are starting a project. Kernel 3.19 brought in the new block-mq model as the host of the NVMe driver. This added several reliability, availability, and serviceability features to the driver. The driver is now instrumented in a way that makes the debugging process much simpler; for example, /sys/block/<device>/mq has more statistics, and blktraces will be fully instrumented with events if needed.
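As an aside, here is a minimal sketch of how you might poke at those block-mq statistics once the driver is loaded, assuming the namespace shows up as nvme0n1 (the device name and the blktrace package are my assumptions, not from the guide):

ls /sys/block/nvme0n1/mq/                            # one directory per hardware queue
cat /sys/block/nvme0n1/mq/0/cpu_list                 # CPUs mapped to hardware queue 0
sudo blktrace -d /dev/nvme0n1 -o nvme-trace -w 10    # capture 10 seconds of block-layer events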

1. Get the kernel and driver from the 3.x repository. Go to https://www.kernel.org/pub/linux/kernel/v3.x/

For a snapshot, download and unpack it:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.tar.xz
tar -xvf linux-3.19.tar.xz

2. Build and install.

a. Run menuconfig (which uses ncurses): make menuconfig

b. Confirm that the NVMe driver under Block is enabled: go to Device Drivers -> Block Devices -> NVM Express block device and make sure it is selected (built in or as a module). Saving creates the .config file in the same directory.
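If you want to double-check the generated .config rather than trusting the menu, the kernel option behind that menu entry is CONFIG_BLK_DEV_NVME (shown here as built in; =m would build it as a module):

grep BLK_DEV_NVME .config
# CONFIG_BLK_DEV_NVME=y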

c. Run the following make commands as root (set the -j flag to about half your core count to improve make time):

make -j10
make modules_install install

NOTE: Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary.

Once installation is successful, reboot the system to load the new kernel and drivers. Usually the new kernel becomes the default boot entry, which is the top line of menu.lst, the definition file used by the GRUB bootloader. Verify the kernel version with "uname -a" after booting. Use "dmesg | grep -i error" and resolve any kernel loading issues.
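A short sketch of those post-reboot checks, plus a couple of optional extras (nvme0n1 and the nvme-cli package are my assumptions):

uname -a                 # confirm the 3.19 kernel is the one running
dmesg | grep -i error    # resolve any kernel loading issues
dmesg | grep -i nvme     # confirm the NVMe driver bound to the controller
lsblk                    # the M.2 drive should show up as something like nvme0n1
sudo nvme list           # optional; requires the nvme-cli package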

May 1st, 2017 11:00

Thanks for the tip. We have both R5s and R6s with NVMe installed, and the R5 just boots normally. The R6 fails.

3 Posts

May 4th, 2017 18:00

System: R6, BIOS 1.0.4 (also tried 1.0.3).

I have attempted to install a number of different Linux distros (Ubuntu, Ubuntu GNOME) on the NVMe M.2 card without success. I should have given up after 3 days but ended up flogging myself for 5.

The install works if I install to the 2 TB spinning HDD, which is completely suboptimal.

I have ordered a much slower SSD (an 850 Pro) and will install to that until the appropriate firmware update is released.

When is this due to happen, Dell?   

8 Wizard • 17K Posts

May 4th, 2017 21:00

r6user wrote:

System: R6. Bios 1.0.4 (also tried 1.0.3)

Have ordered a much slower SSD (850 pro) and will install to that until the appropriate firmware update is released.

 

Good workaround ... SATA3 (600 MB/s) is pretty fast.

 

Hey, I just found this ... I wonder if it works?

https://www.dell.com/support/article/us/en/19/SLN299303/loading-ubuntu-on-systems-using-pcie-m2-drives?lang=EN

9 Legend • 47K Posts

May 5th, 2017 03:00

It works as long as you are using a developer kernel or a new enough kernel version.

Ubuntu Manpage: nvme - the dumb pci-e storage utility 

Ubuntu Manpage: nvme — NVM Express core driver 

Ubuntu manpage version:

NAME
     nvme — NVM Express core driver

SYNOPSIS
     To compile this driver into your kernel, place the following line in your kernel configuration file:

          device nvme

     Or, to load the driver as a module at boot, place the following line in loader.conf(5):

          nvme_load="YES"

     Most users will also want to enable nvd(4) to surface NVM Express namespaces as disk devices which can be partitioned. Note that in NVM Express terms, a namespace is roughly equivalent to a SCSI LUN.

DESCRIPTION
     The nvme driver provides support for NVM Express (NVMe) controllers, such as:

     ·   Hardware initialization
     ·   Per-CPU IO queue pairs
     ·   API for registering NVMe namespace consumers such as nvd(4)
     ·   API for submitting NVM commands to namespaces
     ·   Ioctls for controller and namespace configuration and management

     The nvme driver creates controller device nodes in the format /dev/nvmeX and namespace device nodes in the format /dev/nvmeXnsY. Note that the NVM Express specification starts numbering namespaces at 1, not 0, and this driver follows that convention.
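Purely as an illustration of that naming convention (the device numbers below are made up):

ls /dev/nvme*
# e.g.  /dev/nvme0      controller 0
#       /dev/nvme0ns1   namespace 1 on controller 0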

CONFIGURATION
     By default, nvme will create an I/O queue pair for each CPU, provided enough MSI-X vectors can be allocated. To force a single I/O queue pair shared by all CPUs, set the following tunable value in loader.conf(5):

          hw.nvme.per_cpu_io_queues=0

     To force legacy interrupts for all nvme driver instances, set the following tunable value in loader.conf(5):

          hw.nvme.force_intx=1

     Note that use of INTx implies disabling of per-CPU I/O queue pairs.
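Putting the SYNOPSIS and CONFIGURATION pieces together, a FreeBSD /boot/loader.conf might look like the sketch below; the nvd_load line and both tunables are optional and only shown as an example, not something the man page mandates:

# /boot/loader.conf (example only)
nvme_load="YES"                # load the NVMe core driver as a module
nvd_load="YES"                 # expose namespaces as disks via nvd(4)
hw.nvme.per_cpu_io_queues=0    # optional: single shared I/O queue pair
#hw.nvme.force_intx=1          # optional: force legacy INTx interrupts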

SYSCTL VARIABLES
     The following controller-level sysctls are currently implemented:

     dev.nvme.0.int_coal_time
          (R/W) Interrupt coalescing timer period in microseconds. Set to 0 to disable.

     dev.nvme.0.int_coal_threshold
          (R/W) Interrupt coalescing threshold in number of command completions. Set to 0 to disable.

     The following queue pair-level sysctls are currently implemented. Admin queue sysctls take the format of dev.nvme.0.adminq and I/O queue sysctls take the format of dev.nvme.0.ioq0.

     dev.nvme.0.ioq0.num_entries
          (R) Number of entries in this queue pair's command and completion queue.

     dev.nvme.0.ioq0.num_tr
          (R) Number of nvme_tracker structures currently allocated for this queue pair.

     dev.nvme.0.ioq0.num_prp_list
          (R) Number of nvme_prp_list structures currently allocated for this queue pair.

     dev.nvme.0.ioq0.sq_head
          (R) Current location of the submission queue head pointer as observed by the driver. The head pointer is incremented by the controller as it takes commands off of the submission queue.

     dev.nvme.0.ioq0.sq_tail
          (R) Current location of the submission queue tail pointer as observed by the driver. The driver increments the tail pointer after writing a command into the submission queue to signal that a new command is ready to be processed.

     dev.nvme.0.ioq0.cq_head
          (R) Current location of the completion queue head pointer as observed by the driver. The driver increments the head pointer after finishing with a completion entry that was posted by the controller.

     dev.nvme.0.ioq0.num_cmds
          (R) Number of commands that have been submitted on this queue pair.

     dev.nvme.0.ioq0.dump_debug
          (W) Writing 1 to this sysctl will dump the full contents of the submission and completion queues to the console.
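For what it's worth, these are read and written with sysctl(8) on FreeBSD, for example (controller and queue numbers depend on your system):

sysctl dev.nvme.0.int_coal_time        # read the coalescing timer period
sysctl dev.nvme.0.ioq0.num_cmds        # commands submitted on I/O queue 0
sysctl dev.nvme.0.int_coal_time=100    # set the timer to 100 microseconds (R/W)
sysctl dev.nvme.0.ioq0.dump_debug=1    # dump queue contents to the console (W)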

SEE ALSO

     nvd(4), pci(4), nvmecontrol(8), disk(9) 

HISTORY

     The nvme driver first appeared in FreeBSD 9.2. 

AUTHORS

     The nvme driver was developed by Intel and originally written by Jim Harris ⟨jimharris@FreeBSD.org⟩, with contributions from Joe Golio at EMC. This man page was written by Jim Harris ⟨jimharris@FreeBSD.org⟩.


3 Posts

May 7th, 2017 16:00

Thanks for the suggestion.

I had previously tried this and it didn't work. I'll now have a go at speedstep's suggestion below and report back.

1 Message

May 8th, 2017 21:00

I set Secure Boot to off and tried the legacy flag (the second is not needed). Then on rebooting it insisted on PXE boot. The F12/F2 keys did not work. I removed the coin-cell battery and it still didn't work. Customer support couldn't help and I had to send the new Aurora R6 back. Looks like you guys got farther. Did anyone succeed?

I never even got to reading the USB drive.

3 Posts

May 8th, 2017 23:00

I was not prepared to recompile the kernel, so I did not try speedstep's suggestion. Speedstep, have you successfully installed a Linux distro on an R6?

Loading the nvme and nvd modules at boot did not help either.

I'm going with the second SATA SSD until another solution presents itself.

1 Message

June 26th, 2017 14:00

I see Dell released a new BIOS 1.0.5 last week. Has anybody applied it and been successful?

Also, has anyone tried Fedora or any other non-Ubuntu distro?

5 Posts

October 11th, 2017 06:00

I can also attest to being unable to successfully boot Ubuntu (16.04 server) on the R6's NVMe drive. The installation itself seems to work fine, but it boots to a black screen. I highly doubt this has anything to do with NVIDIA drivers, because I was able to install them; I was just unable to boot the machine afterwards.

I have tried the latest BIOS 1.0.9 and all possible hardware settings and GRUB settings. They all end up in a black screen at various times. I thought I made progress after updating the BIOS because I got into my Ubuntu log-in screen. However, I got a black screen again after rebooting.

This is such a disappointment. Those of you who were hoping to use the R6 for machine learning/deep learning purposes may want to look elsewhere.

5 Posts

October 12th, 2017 07:00

Guys, I got Ubuntu 17.04 to work. Hibernate does not work, but all other systems are go. My NVMe drive is by LiteOn. The firmware was up to date, but you may want to check yours. The main steps were:

1. Update the BIOS to 1.0.9.

2. Install Ubuntu 17.04.

3. In GRUB, choose Advanced options for Ubuntu and boot with secure mode.

4. Install the NVIDIA drivers. Open blacklist.conf (in /etc/modprobe.d/) and blacklist the following modules:

blacklist vga16fb 
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
blacklist i2c-designware-platform
blacklist i2c-designware-core

I received 'kernel panic' until I blacklisted i2c-designware. 
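Not part of the post above, but standard Ubuntu practice: after editing /etc/modprobe.d/blacklist.conf you usually need to rebuild the initramfs and reboot for the blacklist to take effect:

sudo update-initramfs -u
sudo reboot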

I also disconnected my second HDD to make everything more simple. That may not be necessary.

Good luck!

5 Practitioner • 274.2K Posts

October 13th, 2017 15:00

Where can I get the 1.0.9 BIOS? Dell.com only supplies the 1.0.3 BIOS now. Thank you.

5 Posts

October 13th, 2017 16:00

That's bizarre. They seem to have taken both 1.0.9 and 1.0.7 down from the website. It's either a temporary glitch or they have discovered something alarming about them. In the meantime, you can still attempt installing Ubuntu 17.04. I think the main problem had to do with the kernels in 16.04.

5 Practitioner • 274.2K Posts

October 15th, 2017 22:00

Could you send me the 1.0.9 BIOS? My email is cvieri@163.com.

Thanks a lot :)

2 Posts

October 20th, 2017 22:00

Hi,

1. Did you change the SATA mode from RAID On to AHCI?

2. Can you enter the USB live mode without a black screen?

Thank you!
