[Starlingx-discuss] STX networking

Waines, Greg Greg.Waines at windriver.com
Thu May 12 11:49:32 UTC 2022


I doubt this is what's causing your issues, but shouldn't this:
     pxeboot_subnet: 10.16.48.1/24
be
    pxeboot_subnet: 10.16.48.0/24
?
( and there's a similar issue with external_oam_subnet )


ACTUALLY ... you have the pxeboot_subnet and the oam_subnet being the same ?
external_oam_subnet: 10.16.48.1/24
pxeboot_subnet: 10.16.48.1/24
That is wrong ... they have to be separate IP subnets.


You should also remove:
external_oam_node_2_address: 10.16.48.116
external_oam_node_3_address: 10.16.48.117
external_oam_node_4_address: 10.16.48.118
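
Putting both fixes together, something like this (example addresses only; i've arbitrarily picked 10.16.49.0/24 for pxeboot, use whatever free subnet you actually have):

external_oam_subnet: 10.16.48.0/24
external_oam_gateway_address: 10.16.48.1
external_oam_floating_address: 10.16.48.110
external_oam_node_0_address: 10.16.48.114
external_oam_node_1_address: 10.16.48.115

pxeboot_subnet: 10.16.49.0/24
pxeboot_start_address: 10.16.49.100
pxeboot_end_address: 10.16.49.151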



Other comments on your config commands
...
system host-if-modify controller-0 $MGMT_IF -c platform 
system interface-network-assign controller-0 $MGMT_IF mgmt 
system interface-network-assign controller-0 $MGMT_IF cluster-host    // you should remove this, as you assign 'cluster-host' network below to a separate interface
...
system host-if-modify controller-0 $CLUSTER_IF -c platform 
system interface-network-assign controller-0 $CLUSTER_IF cluster-host
...
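
i.e. with the duplicate 'cluster-host' assignment dropped, that section becomes:

system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
...
system host-if-modify controller-0 $CLUSTER_IF -c platform
system interface-network-assign controller-0 $CLUSTER_IF cluster-host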




I would redo the install using a unique IP subnet for each of the pxeboot and oam networks.
Greg.


-----Original Message-----
From: Outback Dingo <outbackdingo at gmail.com> 
Sent: Thursday, May 12, 2022 4:30 AM
To: Waines, Greg <Greg.Waines at windriver.com>
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX networking


ok, so getting closer; it's about the IP space and preset variables now

and ... after installing software on controller-0, here is what i used on the last run. note i set the pxeboot_ vars to put the right network and interfaces on bond0:
[sysadmin@controller-0 ~(keystone_admin)]$ cat localhost.yml
system_mode: duplex

dns_servers:
  - 8.8.8.8
  - 8.8.4.4

external_oam_subnet: 10.16.48.1/24
external_oam_gateway_address: 10.16.48.1
external_oam_floating_address: 10.16.48.110
external_oam_node_0_address: 10.16.48.114
external_oam_node_1_address: 10.16.48.115
external_oam_node_2_address: 10.16.48.116
external_oam_node_3_address: 10.16.48.117
external_oam_node_4_address: 10.16.48.118

admin_username: admin
admin_password: somepass
ansible_become_pass: somepass

# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
#   - 1.2.3.4
#
kubernetes_version: 1.21.3
pxeboot_subnet: 10.16.48.1/24
pxeboot_start_address: 10.16.48.100
pxeboot_end_address: 10.16.48.151

then ran ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
and ... it was successful... on to configuring:

source /etc/platform/openrc
system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
OAM_IF=bond0.1648
PXE_IF=bond0
MGMT_IF=bond0.1680
CLUSTER_IF=bond0.1664
ping 8.8.8.8

system host-if-modify controller-0 lo -c none 
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do     system interface-network-remove ${UUID}; done

system host-if-modify controller-0 $OAM_IF -c platform 
system interface-network-assign controller-0 $OAM_IF oam 

system host-if-modify controller-0 $MGMT_IF -c platform 
system interface-network-assign controller-0 $MGMT_IF mgmt 
system interface-network-assign controller-0 $MGMT_IF cluster-host 

system host-if-modify controller-0 $PXE_IF -c platform 
system interface-network-assign controller-0 $PXE_IF pxeboot 

system host-if-modify controller-0 $CLUSTER_IF -c platform 
system interface-network-assign controller-0 $CLUSTER_IF cluster-host 

system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
system host-label-assign controller-0 openstack-control-plane=enabled 
system host-label-assign controller-0 ceph-mon-placement=enabled 
system host-label-assign controller-0 ceph-mgr-placement=enabled 
system storage-backend-add ceph-rook --confirmed 
system host-unlock controller-0

controller-0 then reboots, runs through its boot sequence, comes up on the correct OAM_IF IP, and DOES also have the correct floating address assigned to bond0.1648

i can actually log in. i then waited some minutes and

source /etc/platform/openrc

[sysadmin@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

functionally working...  with the following network config

[sysadmin@controller-0 ~(keystone_admin)]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff
4: enp33s0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
5: enp49s0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr b8:59:9f:12:2c:fc
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
    inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.16.48.100/24 scope global secondary bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::ba59:9fff:fe12:3278/64 scope link
       valid_lft forever preferred_lft forever
8: vlan1648@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
    inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648
       valid_lft forever preferred_lft forever
    inet 10.16.48.110/24 scope global secondary vlan1648
       valid_lft forever preferred_lft forever
    inet6 fe80::ba59:9fff:fe12:3278/64 scope link
       valid_lft forever preferred_lft forever
9: vlan1664@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ba59:9fff:fe12:3278/64 scope link
       valid_lft forever preferred_lft forever
10: vlan1680@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
    inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680
       valid_lft forever preferred_lft forever
    inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12
       valid_lft forever preferred_lft forever
    inet 192.168.206.1/24 scope global secondary vlan1680
       valid_lft forever preferred_lft forever
    inet 192.168.204.1/24 scope global secondary vlan1680
       valid_lft forever preferred_lft forever
    inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1680
       valid_lft forever preferred_lft forever
    inet6 fe80::ba59:9fff:fe12:3278/64 scope link
       valid_lft forever preferred_lft forever
------------------snip----------------------

i pxe booted 2 more nodes; they pxe'd fine from the controller's bond0 with 10.16.48.x as specified in localhost.yml

they did show up in system host-list... where i set their personalities.

[sysadmin@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | offline      |
| 3  | worker-0     | worker      | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

they both then proceeded to boot.... but now appear hung...... 1+ hours

and i think the problem just might be the 192.168.204 and 192.168.206 addresses on bond0.1680... they need to be 10.16.64.x, which is what all our pods talk across on bond0.1664, or 10.16.80.x for bond0.1680

so now the question is which is what in the variables. i believe i have the proper pxeboot_ and oam settings with 10.16.48.x on bond0 and bond0.1648,
though you show oam and management_ as different. i don't think i completely grasp what mgmt_, cluster_host, cluster_pod, cluster_service and management_multicast each are, so what's what
in your IP space compared to what mine should be? (my guess at a mapping is sketched after the list below)

pxeboot_subnet
pxeboot_start_address
pxeboot_end_address

management_subnet
management_start_address
management_end_address
cluster_host_subnet
cluster_host_start_address
cluster_host_end_address
cluster_pod_subnet
cluster_pod_start_address
cluster_pod_end_address
cluster_service_subnet
cluster_service_start_address
cluster_service_end_address
management_multicast_subnet
management_multicast_start_address
management_multicast_end_address
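
my guess at mapping these onto our VLANs would be something like the following (example values only, and i may well have these backwards):

pxeboot_subnet: 10.16.48.0/24          # untagged bond0
management_subnet: 10.16.80.0/24       # bond0.1680 = MGMT_IF ?
management_start_address: 10.16.80.100
management_end_address: 10.16.80.150
cluster_host_subnet: 10.16.64.0/24     # bond0.1664 = CLUSTER_IF ?
cluster_host_start_address: 10.16.64.100
cluster_host_end_address: 10.16.64.150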



On Thu, May 12, 2022 at 7:44 AM Outback Dingo <outbackdingo at gmail.com> wrote:
>
> working through the configuration now based on findings, and yes i 
> only have to do the ip link commands once prior to bootstrap
>
> i did get bond0 and the vlans configured on a previous try after
> system host-unlock controller-0; they were just in the wrong order, so
> i'm rebuilding the primary node. if it works and puts the interfaces and
> networks on the proper interfaces, and i can bootstrap controller-1 and
> get past unlocking that also... i think it will be a win!
>
> On Thu, May 12, 2022 at 7:32 AM Waines, Greg <Greg.Waines at windriver.com> wrote:
> >
> > Were you successful ?
> >
> > ( One question ... you are only having to do the 'ip link ...' 
> > commands BEFORE bootstrap in order to have IP Connectivity to the 
> > outside world for bootstrapping .. correct ? )
> >
> > Greg.
> >
> > -----Original Message-----
> > From: Outback Dingo <outbackdingo at gmail.com>
> > Sent: Wednesday, May 11, 2022 8:21 PM
> > To: Waines, Greg <Greg.Waines at windriver.com>
> > Cc: starlingx-discuss at lists.starlingx.io
> > Subject: Re: [Starlingx-discuss] STX networking
> >
> >
> > I think, even allowing for the special condition of OAM_IF, MGMT_IF, CLUSTER_IF and PXE_IF all being set on the same bond0, and even dropping the vlans for corner cases, i would only need to set the install-time-only parameters:
> > Network Properties
> >
> > pxeboot_subnet: 10.16.48.1
> > pxeboot_start_address: 10.16.48.100
> > pxeboot_end_address: 10.16.48.125
> > management_subnet
> > management_start_address
> > management_end_address
> > cluster_host_subnet
> > cluster_host_start_address
> > cluster_host_end_address
> > cluster_pod_subnet
> > cluster_pod_start_address
> > cluster_pod_end_address
> > cluster_service_subnet
> > cluster_service_start_address
> > cluster_service_end_address
> > management_multicast_subnet
> > management_multicast_start_address
> > management_multicast_end_address
> >
> > ip link add bond0 type bond
> > ip link set bond0 type bond miimon 100 mode 802.3ad
> > ip link set enp33s0 down
> > ip link set enp33s0 master bond0
> > ip link set enp49s0 down
> > ip link set enp49s0 master bond0
> > ip link set bond0 up
> >
> > Set VLAN on the bond device:
> >
> > ip link add link bond0 name bond0.1648 type vlan id 1648
> > ip link set bond0.1648 up
> > ip link add link bond0 name bond0.1664 type vlan id 1664
> > ip link set bond0.1664 up
> >
> >
> > and modify the host details as per below:
> >
> > OAM_IF=bond0.1648
> > MGMT_IF=bond0.1664
> > CLUSTER_IF=bond0.1680
> > PXE_IF=bond0 <- this puts pxe on bond0
> >
> > On Thu, May 12, 2022 at 2:25 AM Waines, Greg <Greg.Waines at windriver.com> wrote:
> > >
> > > replying to answer your questions from email below, see in-lined 
> > > below, Greg.
> > >
> > > -----Original Message-----
> > > From: Outback Dingo <outbackdingo at gmail.com>
> > > Sent: Wednesday, May 11, 2022 12:00 AM
> > > To: starlingx-discuss at lists.starlingx.io
> > > Subject: [Starlingx-discuss] STX networking
> > >
> > >
> > > scenario...
> > >
> > > i have a host say controller-0
> > >
> > > prior to any ansible run
> > > [Greg] I assume you mean the bootstrap ansible playbook
> > >
> > > i need to create a bond, and bridges and vlans
> > > [Greg] You do need an interface to the outside world … e.g. in order to download container images from docker hub.
> > > [Greg] Why can you not simply create a single interface (one link of bond) with a vlan ?
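> > > [Greg] e.g. a rough sketch of that (untested; substitute your own port, vlan id and addresses):
> > >
> > > ip link set enp33s0 up
> > > ip link add link enp33s0 name enp33s0.1648 type vlan id 1648
> > > ip link set enp33s0.1648 up
> > > ip addr add 10.16.48.114/24 dev enp33s0.1648
> > > ip route add default via 10.16.48.1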
> > >
> > > sure....
> > > Add a bond device as root:
> > >
> > > ip link add bond0 type bond
> > > ip link set bond0 type bond miimon 100 mode 802.3ad
> > > ip link set enp33s0 down
> > > ip link set enp33s0 master bond0
> > > ip link set enp49s0 down
> > > ip link set enp49s0 master bond0
> > > ip link set bond0 up
> > >
> > > Set VLAN on the bond device:
> > >
> > > ip link add link bond0 name bond0.1648 type vlan id 1648
> > > ip link set bond0.1648 up
> > > ip link add link bond0 name bond0.1664 type vlan id 1664
> > > ip link set bond0.1664 up
> > > ip link add link bond0 name bond0.1680 type vlan id 1680
> > > ip link set bond0.1680 up
> > >
> > > Add the bridge device and attach VLAN to it:
> > > ip link add br0 type bridge
> > > ip link set bond0.1648 master br0
> > > ip link set bond0.1664 master br0
> > > ip link set bond0.1680 master br0
> > > ip link set br0 up
> > >
> > > so i see where in starlingx
> > > [Greg] the following commands are only possible AFTER bootstrap
> > >
> > > system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> > > system host-if-modify controller-0 $OAM_IF -c platform
> > > system interface-network-assign controller-0 $OAM_IF oam
> > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > >
> > > does allow me to create the bond0 and the vlans, but I don't see any documentation for bridges anywhere? Do I even need the bridge?
> > > [Greg] No … there is no requirement for a bridge with StarlingX.
> > >
> > > where i want to set, for example, since each needs its own interface: can i set OAM_IF=bond0, and say MGMT_IF=bond0.1664
> > > [Greg] Yes … i.e. OAM on the port-based/untagged vlan of the bond and MGMT on vlan-tag=1664 on the bond ( BUT here is where you need the pxeboot network, because your MGMT network is vlan-tagged … and you can't pxe boot over that )
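> > > [Greg] i.e. you'd assign the pxeboot network to the untagged bond itself, e.g.:
> > >
> > > PXE_IF=bond0
> > > system host-if-modify controller-0 $PXE_IF -c platform
> > > system interface-network-assign controller-0 $PXE_IF pxeboot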
> > >
> > > OAM_IF=bond0
> > > system host-if-modify controller-0 $OAM_IF -c platform
> > > system interface-network-assign controller-0 $OAM_IF oam
> > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > > system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
> > > MGMT_IF=bond0.1664
> > > system host-if-modify controller-0 lo -c none
> > > IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> > > for UUID in $IFNET_UUIDS; do
> > >     system interface-network-remove ${UUID}
> > > done
> > > system host-if-modify controller-0 $MGMT_IF -c platform   [Greg] don't think you actually need this command
> > > system interface-network-assign controller-0 $MGMT_IF mgmt
> > > system interface-network-assign controller-0 $MGMT_IF cluster-host
> > >
> > > the reason for this being our switches are
> > >
> > > # MGMT
> > > interface vlan1648
> > >   address 10.16.48.2/24
> > >   address-virtual 44:38:39:FF:00:02 10.16.48.1
> > >   vlan-id 1648
> > >   vlan-raw-device bridge
> > >
> > > interface vlan1672
> > >   address 10.16.72.2/24
> > >   address-virtual 44:38:39:FF:00:03 10.16.72.1
> > >   vlan-id 1672
> > >   vlan-raw-device bridge
> > >
> > > interface vlan1680
> > >   address 10.16.80.2/24
> > >   address-virtual 44:38:39:FF:00:03 10.16.80.1
> > >   vlan-id 1680
> > >   vlan-raw-device bridge
> > >
> > > interface vlan1696
> > >   address 10.16.96.2/24
> > >   address-virtual 44:38:39:FF:00:03 10.16.96.1
> > >   vlan-id 1696
> > >   vlan-raw-device bridge
> > >
> > > interface vlan1664
> > >   address 10.16.64.2/24
> > >   address-virtual 44:38:39:FF:00:07 10.16.64.1
> > >   vlan-id 1664
> > >   vlan-raw-device bridge
> > >
> > > and further down DATAIF_0=bond0.1680
> > >
> > > the reason being we are trying to have starlingx conform to our network topology. I also noted in
> > > https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6
> > > ...
> > >
> > > Network Properties I listed at the bottom
> > >
> > > can i modify these addresses to conform to our networks, as our switches won't pass the traffic you set as defaults, as seen in my first attempt at the bottom? Though i still don't believe dhcp/pxe will work on a vlan interface.
> > >
> > > Network Properties
> > > pxeboot_subnet
> > > pxeboot_start_address
> > > pxeboot_end_address
> > > management_subnet
> > > management_start_address
> > > management_end_address
> > > cluster_host_subnet
> > > cluster_host_start_address
> > > cluster_host_end_address
> > > cluster_pod_subnet
> > > cluster_pod_start_address
> > > cluster_pod_end_address
> > > cluster_service_subnet
> > > cluster_service_start_address
> > > cluster_service_end_address
> > > management_multicast_subnet
> > > management_multicast_start_address
> > > management_multicast_end_address
> > >
> > > 7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
> > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > >     inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0
> > >        valid_lft forever preferred_lft forever
> > >     inet 10.16.48.114/24 scope global secondary bond0
> > >        valid_lft forever preferred_lft forever
> > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 8: vlan1664@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
> > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > >     inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664
> > >        valid_lft forever preferred_lft forever
> > >     inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12
> > >        valid_lft forever preferred_lft forever
> > >     inet 169.254.202.1/24 scope global vlan1664
> > >        valid_lft forever preferred_lft forever
> > >     inet 192.168.206.1/24 scope global secondary vlan1664
> > >        valid_lft forever preferred_lft forever
> > >     inet 192.168.204.1/24 scope global secondary vlan1664
> > >        valid_lft forever preferred_lft forever
> > >     inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664
> > >        valid_lft forever preferred_lft forever
> > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 9: vlan1672@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 10: vlan1648@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 11: vlan1680@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > >        valid_lft forever preferred_lft forever
> > >
> > > _______________________________________________
> > > Starlingx-discuss mailing list
> > > Starlingx-discuss at lists.starlingx.io
> > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> > >

