[Starlingx-discuss] kube-system kube-sriov-device-plugin-amd64-547xh 0/1 CrashLoopBackOff
Pratik M.
pvmpublic at gmail.com
Wed Jan 13 08:46:46 UTC 2021
Hi,
You do not seem to have done the steps needed to use SR-IOV (see #2 in my earlier
mail below). Do you plan to use SR-IOV? If not, you can ignore the CrashLoopBackOff
of that container. Otherwise, you can try whether the following helps:
# system host-label-assign controller-0 sriov=disabled
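Another option, if you want the plugin gone entirely, is to remove the label that
schedules it in the first place (a sketch on my side, not tested on your build; the
label name is the one from step #2 below):
# kubectl -n kube-system get pods | grep sriov-device-plugin
# system host-label-remove controller-0 sriovdp
The first command just confirms the CrashLoopBackOff pod is the only one affected
before you touch the labels.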
Thanks
On Tue, Jan 12, 2021 at 7:50 PM <lists at optimcloud.com> wrote:
> On Tuesday, January 12, 2021 8:04:34 PM +07 Pratik M. wrote:
> > Hi,
> > 1. What does your /etc/pcidp/config.json look like?
> >
> > 2. Did you do all these steps?
> > # system host-label-assign controller-0 sriovdp=enabled
> > # system host-if-modify controller-0 <interface> -c pci-sriov -n sriov0 -N <num vfs>
> > # system interface-datanetwork-assign controller-0 <interface> <datanetwork>
> > # system host-unlock
> >
> > Thanks
> > Pratik
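
(On question 1 above: with SR-IOV working, /etc/pcidp/config.json should carry a
non-empty resourceList in the upstream sriov-network-device-plugin format. The
snippet below is purely illustrative; the resource name, vendor/device IDs and
driver are made-up examples and will differ on your hardware:
{
    "resourceList": [
        {
            "resourceName": "sriov_net0",
            "selectors": {
                "vendors": ["8086"],
                "devices": ["154c"],
                "drivers": ["iavf"]
            }
        }
    ]
}
An empty list, as in your log further down, is exactly what makes the plugin exit.)
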
> history start to finish
>
> 1 ps xa
> 2 ip addr
> 3 cat <<EOF > localhost.yml
> 4 system_mode: simplex
> 5 dns_servers:
> 6 - 8.8.8.8
> 7 - 8.8.4.4
> 8 external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
> 9 external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
> 10 external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
> 11 admin_username: admin
> 12 admin_password: <admin-password>
> 13 ansible_become_pass: <sysadmin-password>
> 14 EOF
> 15 vi localhost.yml
> 16 ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
> 17 source /etc/platform/openrc
> 18 ip addr
> 19 OAM_IF=eno1
> 20 system host-if-modify controller-0 $OAM_IF -c platform
> 21 system interface-network-assign controller-0 $OAM_IF oam
> 22 system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
> 23 system storage-backend-add ceph --confirmed
> 24 system host-label-assign controller-0 sriovdp=enabled
> 25 system host-memory-modify controller-0 0 -1G 100
> 26 DATA0IF=enp101s0f0
> 27 DATA1IF=enp101s0f1
> 28 export NODE=controller-0
> 29 PHYSNET0='physnet0'
> 30 PHYSNET1='physnet1'
> 31 SPL=/tmp/tmp-system-port-list
> 32 SPIL=/tmp/tmp-system-host-if-list
> 33 system host-port-list ${NODE} --nowrap > ${SPL}
> 34 system host-if-list -a ${NODE} --nowrap > ${SPIL}
> 35 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
> 36 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
> 37 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
> 38 DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
> 39 DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
> 40 DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
> 41 DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
> 42 DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
> 43 system datanetwork-add ${PHYSNET0} vlan
> 44 system datanetwork-add ${PHYSNET1} vlan
> 45 system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
> 46 system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
> 47 system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
> 48 system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
> 49 system host-disk-list controller-0
> 50 system host-disk-list controller-0 | awk '/\/dev\/nvme1n1/{print $2}' | xargs -i system host-stor-add controller-0 {}
> 51 system host-disk-list controller-0 | awk '/\/dev\/nvme2n1/{print $2}' | xargs -i system host-stor-add controller-0 {}
> 52 system host-label-assign controller-0 openstack-control-plane=enabled
> 53 system host-label-assign controller-0 openstack-compute-node=enabled
> 54 system host-label-assign controller-0 openvswitch=enabled
> 55 system host-label-assign controller-0 sriov=enabled
> 56 system modify --vswitch_type ovs-dpdk
> 57 system host-cpu-modify -f vswitch -p0 1 controller-0
> 58 export NODE=controller-0
> 59 echo ">>> Getting root disk info"
> 60 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
> 61 ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
> 62 echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
> 63 echo ">>>> Configuring nova-local"
> 64 NOVA_SIZE=34
> 65 NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
> 66 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
> 67 system host-lvg-add ${NODE} nova-local
> 68 system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
> 69 sleep 2
> 70 system host-memory-list controller-0
> 71 system host-unlock controller-0
> 72 system host-memory-modify controller-0 0 -1G 100
> 73 system host-unlock controller-0
> 74 system host-memory-modify controller-0 1 -1G 100
> 75 system host-label-assign controller-0 openvswitch=enabled
> 76 system host-memory-modify -f vswitch -1G 1 compute-0 0
> 77 system host-memory-modify controller-0 0 -1G 100
> 78 system host-unlock controller-0
> 79 system host-memory-modify -f vswitch -1G 1 compute-0 0
> 80 system host-memory-modify -f vswitch -1G 1 controller-0 0
> 81 system host-unlock controller-0
> 82 history > HIST
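
Looking at that history: the sriovdp=enabled label is there (step 24), but no
interface is ever changed to class pci-sriov and no datanetwork is assigned to an
SR-IOV interface, so the plugin has nothing to advertise. Roughly what is still
missing, mirroring my step #2 above (placeholders left in, and I believe the host
has to be locked for the interface change):
# system host-lock controller-0
# system host-if-modify controller-0 <interface> -c pci-sriov -n sriov0 -N <num vfs>
# system interface-datanetwork-assign controller-0 <interface> <datanetwork>
# system host-unlock controller-0
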
>
>
>
> >
> > On Tue, Jan 12, 2021 at 1:39 PM <lists at optimcloud.com> wrote:
> > > any idea how to resolve this? It was installed from a recent 01/08/2021 build,
> > > AIO Simplex on bare metal.
> > >
> > > kube-system kube-sriov-device-plugin-amd64-547xh 0/1 CrashLoopBackOff 4 2m49s
> > > [sysadmin at controller-0 ~(keystone_admin)]$ kubectl get logs kube-sriov-device-plugin-amd64-547xh --namespace kube-system
> > > error: the server doesn't have a resource type "logs"
> > > [sysadmin at controller-0 ~(keystone_admin)]$ kubectl logs kube-sriov-device-plugin-amd64-547xh --namespace kube-system
> > > I0112 08:05:19.714569 158061 manager.go:52] Using Kubelet Plugin Registry Mode
> > > I0112 08:05:19.714707 158061 main.go:44] resource manager reading configs
> > > I0112 08:05:19.714789 158061 manager.go:86] raw ResourceList: {
> > >     "resourceList": [
> > >     ]
> > > }
> > > I0112 08:05:19.714795 158061 manager.go:106] unmarshalled ResourceList: []
> > > E0112 08:05:19.714801 158061 main.go:51] no resource configuration; exiting
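
That last line is the whole story: the plugin read an empty resourceList from
/etc/pcidp/config.json and exited, and Kubernetes keeps restarting it, hence the
CrashLoopBackOff. As far as I know that file is generated by StarlingX from the
pci-sriov interface configuration, so once the steps above are done you can verify
with something like:
# cat /etc/pcidp/config.json
# kubectl -n kube-system get pods | grep sriov-device-plugin
# kubectl -n kube-system logs <sriov-device-plugin-pod-name>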
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>