[Starlingx-discuss] Using NVME Disks instead of SD
Saul Wold
sgw at linux.intel.com
Fri Jan 24 21:34:28 UTC 2020
Ok, so I keep trying different configurations. I went back to basics,
tried a simplex setup, and ran step 6 from the AIO simplex docs page [0].
Output of that step:
> + system host-disk-list controller-0
> +--------------------------------------+--------------+------------+-------------+----------+---------------+-----+------------------+-------------------------------------------+
> | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
> +--------------------------------------+--------------+------------+-------------+----------+---------------+-----+------------------+-------------------------------------------+
> | 9e5c01a9-d409-4df2-bc37-b88cbb80f403 | /dev/nvme0n1 | 66304 | NVME | 476.939 | 0.0 | N/A | BTNH907202MD512A | /dev/disk/by-path/pci-0000:72:00.0-nvme-1 |
> | c34500df-09d2-497c-a90b-dba1f976bfd0 | /dev/nvme1n1 | 66305 | NVME | 476.939 | 476.937 | N/A | BTNH9072074T512A | /dev/disk/by-path/pci-0000:73:00.0-nvme-1 |
> +--------------------------------------+--------------+------------+-------------+----------+---------------+-----+------------------+-------------------------------------------+
> + system host-disk-list controller-0
> + awk '/\/dev\/nvme0n1/{print $2}'
> + xargs -i system host-stor-add controller-0 '{}'
> Please install storage-0 or configure a Ceph monitor on a worker node before adding storage devices.
> + system host-stor-list controller-0
So it seems to require storage-0 or a Ceph monitor, neither of which is
mentioned in the docs. Is there something I am missing here?
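For reference, a quick way to check whether a Ceph monitor is already
configured (assuming the standard sysinv CLI plus a working ceph client;
these commands are illustrative, not from the docs page):

  system ceph-mon-list
  ceph -s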
[0]
https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/aio_simplex_install_kubernetes.html
Sau!
On 1/14/20 6:22 AM, Penney, Don wrote:
> For the second controller and other nodes installing from the active controller, you can set the rootfs_device and boot_device installation parameters via sysinv, either with Horizon or the "system host-update" command, such as:
>
> system host-update 2 personality=controller rootfs_device=/dev/nvme0n1 boot_device=/dev/nvme0n1
>
> You can see these settings via host-show:
> system host-show controller-1
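>
> To confirm the values took effect, you can also filter the host-show
> output (a trivial sketch; field names as printed by host-show):
>
> system host-show controller-1 | egrep 'boot_device|rootfs_device'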
>
>
> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Monday, January 13, 2020 4:03 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Using NVME Disks instead of SD
>
>
> Folks,
>
> I have been trying to get a NUC-based deployment working; with Erich's
> help we managed to start with a pair of Skull Canyon NUCs running as
> Duplex controllers. These NUCs each have a pair of Ethernet ports and
> can be configured with two NVMe-based disks.
>
> I was able to get the devices booted initially with changes to the ISO,
> first by hardcoding /dev/nvme0n1 in metal/bsp-files; later, Don's tool
> helped address setting an alternative boot_device and rootfs_device, but
> I am not sure it went far enough. Erich figured out that we needed to
> patch the sysinv SQL DB (by hand) in order to unlock the second
> controller and make it usable.
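>
> For the record, the by-hand patch amounted to something like the
> following (a sketch only; the sysinv table and column names here are
> assumptions, so double-check the schema before touching anything):
>
> sudo -u postgres psql -d sysinv -c "UPDATE i_host \
>     SET boot_device='/dev/nvme0n1', rootfs_device='/dev/nvme0n1' \
>     WHERE hostname='controller-1';"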
>
> I went a little further and tried to hook up both worker and storage
> nodes to create a 2+2+2 Standard lab. I had to hand-edit the PXE boot
> cmdline and also do the sysinv SQL magic to unlock the compute and
> storage nodes, but the storage configuration still failed. I am thinking
> there is still some hardcoded "sda" and/or "sdb" in places.
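>
> For illustration, the PXE hand-edit is the sort of thing shown below,
> pointing the installer at the NVMe device on the pxelinux APPEND line
> (the boot_device/rootfs_device parameter names are assumptions here):
>
> APPEND ... boot_device=/dev/nvme0n1 rootfs_device=/dev/nvme0n1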
>
> Any thoughts on how to enable the Storage nodes properly? I can provide
> logs, configs, ...
>
> I realize the drive type has to be selected at initial ISO boot/install,
> but from there it should be automagically detected on the target. That
> would mean setting the drive type of the host at the same time as
> setting the personality, so that anaconda at least knows where to start
> installing. It may be possible for anaconda to determine the drive type
> on its own as well.
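>
> As a rough sketch of what that automagic detection could look like in
> the install scripting (illustrative only, not existing code):
>
> # prefer the first NVMe disk if one is present, else fall back to sda
> if [ -b /dev/nvme0n1 ]; then
>     instdev=/dev/nvme0n1
> else
>     instdev=/dev/sda
> fi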
>
> Should this be a story, a Launchpad bug, or a spec to let us properly
> use NVMe devices with StarlingX? Or is this a solved problem that just
> lacks proper documentation? Yes, we discussed some of this last March,
> but I guess it's still a problem, and I think it's only going to become
> more common.
>
> Thoughts, direction welcome.
>
> Thanks
> Sau!
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss