[Starlingx-discuss] Distributed StarlingX 4.0 - Worker nodes not booting up
Hi,

I'm trying to bring up an edge cloud with 3 nodes (1 controller and 2 worker nodes) on StarlingX 4.0 distributed cloud. The central cloud is up and running in the All-in-one Duplex (2 controller) configuration.

I was able to bring up controller-0 in the edge cloud from the ISO (virtual CD/DVD mount) and to configure the personality of the other nodes as workers, but worker-0 and worker-1 have been stuck in PXE boot for more than 2 hours. Any suggestions?

Also, in the "Standard with Controller Storage" configuration, is having 2 controllers compulsory? https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/contro... This document says it supports 2 controllers and up to 10 worker nodes.

Regards, Sriram
Hi,

I'm using the rel-20.06 software of StarlingX 4.0. The initial connection to the PXE server on controller-0 does happen, and I see some packets exchanged between the worker node and controller-0. The tcpdump below shows those packets:

[root@controller-0 ~(keystone_admin)]# tcpdump -i any port 69 or port 53 or port 67 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:42:58.263992 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548
10:42:58.263992 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548
10:42:58.264290 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, length 305
10:42:58.264299 ethertype IPv4, IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, length 305
10:43:02.301357 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548
10:43:02.301357 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548
10:43:02.310717 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, length 305
10:43:02.310725 ethertype IPv4, IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, length 305
10:43:02.311555 ethertype IPv4, IP 169.254.202.138.2070 > 169.254.202.1.69: 27 RRQ "pxelinux.0" octet tsize 0
10:43:02.311555 IP 169.254.202.138.2070 > 169.254.202.1.69: 27 RRQ "pxelinux.0" octet tsize 0
10:43:02.311927 ethertype IPv4, IP 169.254.202.138.2071 > 169.254.202.1.69: 32 RRQ "pxelinux.0" octet blksize 1456
10:43:02.311927 IP 169.254.202.138.2071 > 169.254.202.1.69: 32 RRQ "pxelinux.0" octet blksize 1456
10:43:02.358861 ethertype IPv4, IP 169.254.202.138.49152 > 169.254.202.1.69: 79 RRQ "pxelinux.cfg/44454c4c-4800-104c-8034-cac04f473333" octet tsize 0 blksize 1408
..................
10:43:40.347690 ethertype IPv4, IP 169.254.202.138.49156 > 169.254.202.1.69: 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 blksize 1408
10:43:40.347690 IP 169.254.202.138.49156 > 169.254.202.1.69: 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 blksize 1408
10:43:41.035183 ethertype IPv4, IP 169.254.202.138.49157 > 169.254.202.1.69: 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 blksize 1408
10:43:41.035183 IP 169.254.202.138.49157 > 169.254.202.1.69: 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 blksize 1408
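Those last requests are for the installer images themselves, so one thing I still need to check on the controller is that the files are actually present and non-empty. The /pxeboot path below is my assumption about where StarlingX keeps its TFTP tree:

  ls -lh /pxeboot/rel-20.06/installer-bzImage /pxeboot/rel-20.06/installer-initrd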
169.254.202.138 is the worker node IP and 169.254.202.1 is the controller-0 IP. Those four RRQs for installer-bzImage and installer-initrd are the last packets exchanged; after that no communication is seen and the worker node does not proceed any further with the installation. The PXE network on the controller node is on VLAN 143, and I have enabled the same VLAN in the BIOS of the worker node.

Please let me know if any other info is required.
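If it helps narrow this down, I can also try pulling the same files by hand with a TFTP client from another machine on the pxeboot network, along these lines (this assumes the tftp-hpa client; the file names are the ones from the trace above):

  tftp 169.254.202.1 -c get rel-20.06/installer-bzImage
  tftp 169.254.202.1 -c get rel-20.06/installer-initrd

If that fetch works from elsewhere but not from the worker, it would point at the network path to the worker rather than at the TFTP server itself.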
Regards, Sriram
I was able to resolve the issue. The worker nodes are up and running.

The documentation below says that a port-based (untagged) VLAN should be used for PXE booting, whereas I had configured a tagged (trunk) VLAN for the pxeboot network. With the port-based VLAN everything is fine.

https://docs.starlingx.io/configuration/host_interface_network_config.html

  The pxeboot network is an optional network required in scenarios where the mgmt network cannot be used for PXE booting of hosts. For example, use the pxeboot network when the mgmt network needs to be IPv6 (not currently supported for PXE booting). In these scenarios, the PXE boot network uses a dedicated VLAN (port-based), and the mgmt network uses a separate dedicated VLAN (tagged) on the same port.
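Roughly, the kind of assignment that page describes looks like the sketch below on controller-0. The interface name enp0s8 and VLAN ID 143 are placeholders for my particular NIC and VLAN, and the exact system CLI options should be taken from the doc above rather than from here:

  # check what is currently assigned
  system host-if-list -a controller-0
  system interface-network-list controller-0

  # pxeboot on the untagged (port-based) physical interface
  system host-if-modify -c platform controller-0 enp0s8
  system interface-network-assign controller-0 enp0s8 pxeboot

  # mgmt on a tagged VLAN over the same port
  system host-if-add -V 143 -c platform controller-0 mgmt0 vlan enp0s8
  system interface-network-assign controller-0 mgmt0 mgmt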
Thanks, Sriram