<div dir="ltr"><div dir="ltr">Hey, I found the issue. <div>So I'm using NUC12 with a 12th Gen Intel(R) Core(TM) i5-1240P CPU. So I have disabled on my VM CPU nested virtualization and it seem started working (at least boostrap didn't fail).</div><div>So happy Easter!</div><div>Thanks for support</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 31, 2023 at 10:50 AM voipas <<a href="mailto:voipas@gmail.com">voipas@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Hey, any other recommendations?</div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On 2023-03-23, Thu at 16:25, voipas <<a href="mailto:voipas@gmail.com" target="_blank">voipas@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hey Douglas,<div><br></div><div> Thanks for your response. I recreated a new VM with 24 GB RAM - again after installation, Kubelet is still not launching... Also, attaching disk layout, just in case we have sufficient space</div><div>NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT<br>sda 8:0 0 520G 0 disk<br>|-sda1 8:1 0 1M 0 part<br>|-sda2 8:2 0 29.3G 0 part /var/rootdirs/opt/platform-backup<br>|-sda3 8:3 0 300M 0 part /boot/efi<br>|-sda4 8:4 0 2G 0 part /boot<br>`-sda5 8:5 0 488.4G 0 part<br> |-cgts--vg-root--lv 253:0 0 20G 0 lvm /sysroot<br> |-cgts--vg-var--lv 253:1 0 20G 0 lvm /var<br> |-cgts--vg-log--lv 253:2 0 7.8G 0 lvm /var/log<br> `-cgts--vg-scratch--lv 253:3 0 15.6G 0 lvm /var/rootdirs/scratch<br>sr0 11:0 1 1024M 0 rom<br>nvme0n1 259:0 0 50G 0 disk<br></div><div><br></div><div><br></div><div>I see these kind of errors in daemon log:</div><div><br></div><div>2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes <E2><86><92> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.<br>2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_cm'<br>2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_ucm'<br>2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring<br>2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. 
On Fri, Mar 31, 2023 at 10:50 AM voipas <voipas@gmail.com> wrote:

Hey, any other recommendations?

On 2023-03-23, Thu at 16:25, voipas <voipas@gmail.com> wrote:

Hey Douglas,

Thanks for your response. I recreated a new VM, this time with 24 GB RAM; after installation, kubelet is still not launching. I'm also attaching the disk layout, in case sufficient space is a concern:

NAME                     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                        8:0    0   520G  0 disk
|-sda1                     8:1    0     1M  0 part
|-sda2                     8:2    0  29.3G  0 part /var/rootdirs/opt/platform-backup
|-sda3                     8:3    0   300M  0 part /boot/efi
|-sda4                     8:4    0     2G  0 part /boot
`-sda5                     8:5    0 488.4G  0 part
  |-cgts--vg-root--lv    253:0    0    20G  0 lvm  /sysroot
  |-cgts--vg-var--lv     253:1    0    20G  0 lvm  /var
  |-cgts--vg-log--lv     253:2    0   7.8G  0 lvm  /var/log
  `-cgts--vg-scratch--lv 253:3    0  15.6G  0 lvm  /var/rootdirs/scratch
sr0                       11:0    1  1024M  0 rom
nvme0n1                  259:0    0    50G  0 disk

I see these kinds of errors in the daemon log:

2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes → /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_cm'
2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_ucm'
2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. Ignoring
2023-03-23T13:58:51.021 localhost systemd[1]: info Finished Create Static Device Nodes in /dev.
2023-03-23T13:58:51.021 localhost systemd[1]: info Starting Rule-based Manager for Device Events and Files...
2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'ib_uverbs'
2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'iw_cm'
2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'rdma_cm'
2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'rdma_ucm'
2023-03-23T13:58:51.021 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown user 'ceph', ignoring
2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown group 'ceph', ignoring
2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown user 'ceph', ignoring
2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown group 'ceph', ignoring
2023-03-23T13:58:51.022 localhost systemd[1]: info Started Rule-based Manager for Device Events and Files.
2023-03-23T13:58:51.022 localhost systemd[1]: info Starting Apply Kernel Variables...
2023-03-23T13:58:51.022 localhost systemd-sysctl[482]: info Couldn't write '20' to 'fs/negative-dentry-limit', ignoring: No such file or directory
2023-03-23T13:58:51.022 localhost systemd[1]: info Finished Apply Kernel Variables.

2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info Using interface naming scheme 'vSTX7_0'.
2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2023-03-23T13:58:51.022 localhost systemd-udevd[463]: info ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.

2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes → /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. Ignoring
2023-03-23T13:58:51.024 localhost systemd[1]: info Finished Create Volatile Files and Directories.

2023-03-23T13:58:51.025 localhost systemd[1]: info Started Kubernetes Kubelet Server.
2023-03-23T13:58:51.025 localhost polkitd[801]: info started daemon version 0.105 using authority implementation `local' version `0.105'
2023-03-23T13:58:51.025 localhost systemd[863]: info kubelet.service: Failed to locate executable /usr/bin/kubelet: No such file or directory
2023-03-23T13:58:51.025 localhost systemd[863]: err kubelet.service: Failed at step EXEC spawning /usr/bin/kubelet: No such file or directory
2023-03-23T13:58:51.025 localhost systemd[1]: info Starting Kubernetes Isolated CPU Plugin Daemon...
2023-03-23T13:59:10.551 localhost controller_config[1459]: info Pausing for 5 seconds...
2023-03-23T13:59:14.605 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe85:8445%2, but no knowledge of it
2023-03-23T13:59:14.842 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
2023-03-23T13:59:15.565 localhost systemd[1]: notice controllerconfig.service: Main process exited, code=exited, status=1/FAILURE
2023-03-23T13:59:15.565 localhost systemd[1]: warning controllerconfig.service: Failed with result 'exit-code'.
2023-03-23T13:59:15.579 localhost systemd[1]: info Finished General StarlingX config gate.
2023-03-23T13:59:15.581 localhost systemd[1]: info Starting StarlingX Maintenance Filesystem Monitor...
2023-03-23T13:59:15.583 localhost systemd[1]: info Started Getty on tty1.
2023-03-23T13:59:15.584 localhost systemd[1]: info Reached target Login Prompts.
2023-03-23T13:59:15.586 localhost systemd[1]: info Starting StarlingX Maintenance Worker Goenable Ready...
2023-03-23T13:59:15.587 localhost systemd[1]: info Starting StarlingX Maintenance Goenable Ready...
2023-03-23T13:59:15.588 localhost systemd[1]: info Starting StarlingX Maintenance Heartbeat Client...
2023-03-23T13:59:15.589 localhost systemd[1]: info Starting Starling-X Maintenance Link Monitor...
2023-03-23T13:59:15.590 localhost systemd[1]: info Starting StarlingX Maintenance Alarm Handler Client...
2023-03-23T13:59:15.591 localhost systemd[1]: info Starting StarlingX Maintenance Logger...
2023-03-23T13:59:15.593 localhost systemd[1]: info Starting StarlingX Pxeboot Feed Refresh...
2023-03-23T13:59:15.594 localhost systemd[1]: info Starting Service Management Unit...
2023-03-23T13:59:15.597 localhost systemd[1]: info Finished StarlingX Maintenance Worker Goenable Ready.
2023-03-23T13:59:15.610 localhost goenabled[1504]: info Goenabled Ready: [ OK ]
2023-03-23T13:59:15.610 localhost systemd[1]: info Finished StarlingX Maintenance Goenable Ready.
2023-03-23T13:59:15.630 localhost lmon[1507]: info Starting lmond: OK
2023-03-23T13:59:15.630 localhost systemd[1]: info lmon.service: Can't open PID file /run/lmond.pid (yet?) after start: Operation not permitted
2023-03-23T13:59:15.633 localhost mtclog[1509]: info Starting mtclogd: OK
2023-03-23T13:59:15.634 localhost hbsClient[1506]: info Starting hbsClient: OK
2023-03-23T13:59:15.635 localhost fsmon[1501]: info Starting fsmond: OK
2023-03-23T13:59:15.636 localhost systemd[1]: info mtclog.service: Can't open PID file /run/mtclogd.pid (yet?) after start: Operation not permitted
2023-03-23T13:59:15.636 localhost systemd[1]: info hbsClient.service: Can't open PID file /run/hbsClient.pid (yet?) after start: Operation not permitted
2023-03-23T13:59:15.637 localhost systemd[1]: info fsmon.service: Can't open PID file /run/fsmond.pid (yet?) after start: Operation not permitted
2023-03-23T13:59:15.637 localhost mtcalarm[1508]: info Starting mtcalarmd: OK
2023-03-23T13:59:15.639 localhost systemd[1]: info mtcalarm.service: Can't open PID file /run/mtcalarmd.pid (yet?) after start: Operation not permitted

2023-03-23T14:03:53.832 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 301 seconds. Reason: k8s-infra not configured
2023-03-23T14:08:00.073 localhost avahi-daemon[796]: info Joining mDNS multicast group on interface enp0s3.IPv4 with address 10.0.1.3.
2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info New relevant interface enp0s3.IPv4 for mDNS.
2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info Registering new address record for 10.0.1.3 on enp0s3.IPv4.
2023-03-23T14:08:01.560 localhost avahi-daemon[796]: info Joining mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fe85:8445.
2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info New relevant interface enp0s3.IPv6 for mDNS.
2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info Registering new address record for fe80::a00:27ff:fe85:8445 on enp0s3.*.
2023-03-23T14:08:54.436 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 602 seconds. Reason: k8s-infra not configured
2023-03-23T14:11:06.065 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
2023-03-23T14:13:37.869 localhost systemd[1]: info Starting Cleanup of Temporary Directories...
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes → /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring
2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. Ignoring
2023-03-23T14:13:37.912 localhost systemd[1]: info systemd-tmpfiles-clean.service: Succeeded.
2023-03-23T14:13:37.912 localhost systemd[1]: info Finished Cleanup of Temporary Directories.
2023-03-23T14:13:55.158 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 903 seconds. Reason: k8s-infra not configured
2023-03-23T14:18:55.627 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 1203 seconds. Reason: k8s-infra not configured
2023-03-23T14:22:57.658 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
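The telling entries above are the two "Failed to locate executable /usr/bin/kubelet" lines. A quick, generic way to compare what the unit wants to execute with what is actually on disk (plain systemd commands, nothing StarlingX-specific):

  # What does the unit file execute?
  systemctl cat kubelet.service | grep ExecStart
  # Is the binary (or a symlink to a versioned install) present?
  ls -l /usr/bin/kubelet
  # Recent start attempts and errors for the unit
  journalctl -u kubelet.service --no-pager | tail -n 20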
On Thu, Mar 23, 2023 at 3:06 PM Pereira, Douglas <Douglas.Pereira@windriver.com> wrote:
<div lang="EN-US">
<div>
<p class="MsoNormal">Hi Giedrius,<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Have you tried increasing the VM memory? The <a href="https://docs.starlingx.io/deploy_install_guides/release/virtual/install_virtualbox.html#os-type-and-memory-settings" target="_blank">
documentation</a> suggests 20480 MB for the AIO-SX configuration and you are using only 16GB.
<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
Regards,
Doug
From: voipas <voipas@gmail.com>
Sent: Wednesday, March 22, 2023 2:40 PM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox
<div style="border:1pt solid rgb(156,101,0);padding:2pt">
<p class="MsoNormal" style="background:0% 50% repeat rgb(252,252,3)"><b><span style="color:black">CAUTION: This email comes from a non Wind River email account!</span></b><span style="color:black"><br>
</span><span style="font-size:10pt;color:black">Do not click links or open attachments unless you recognize the sender and know the content is safe.</span><u></u><u></u></p>
</div>
Hello colleagues,

I need your support here. The Simplex StarlingX AIO installation fails at the first steps:
- After installation and reboot, I see that kubelet.service (Kubernetes Kubelet Server) failed. I am not sure whether that is normal at this phase; see more details below.
- Bootstrapping failed: "Failed to provision initial system configuration."
I'm trying to install StarlingX on my Intel NUC box (i5, 64 GB RAM, 2 TB disk) running Ubuntu Desktop, with VirtualBox 6.1.
VM configuration (a VBoxManage sketch reproducing this setup follows the list):

- 8 vCPU (VT-x/AMD-V, Nested Paging, PAE/NX, KVM paravirtualization)
- 16 GB RAM
- Storage:
  - SATA controller, 520 GB
  - NVMe controller, 20 GB
- Network:
  - Intel PRO/1000 MT Desktop - OAM network (internet accessible)
  - Intel PRO/1000 MT Desktop - Data network (internet accessible)
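As mentioned above, here is a VBoxManage sketch equivalent to this configuration. It is not the StarlingX-documented procedure; the VM name "stx-aio", the disk file names, and the NAT network attachment are placeholder assumptions:

  # Create and register the VM
  VBoxManage createvm --name "stx-aio" --ostype Linux_64 --register
  # 8 vCPUs, 16 GB RAM, PAE/NX, KVM paravirtualization
  VBoxManage modifyvm "stx-aio" --cpus 8 --memory 16384 --pae on --paravirtprovider kvm
  # Two Intel PRO/1000 MT Desktop (82540EM) NICs: OAM and data networks
  VBoxManage modifyvm "stx-aio" --nic1 nat --nictype1 82540EM
  VBoxManage modifyvm "stx-aio" --nic2 nat --nictype2 82540EM
  # 520 GB disk on a SATA controller (sizes are in MB)
  VBoxManage createmedium disk --filename stx-aio-sata.vdi --size 532480
  VBoxManage storagectl "stx-aio" --name "SATA" --add sata --controller IntelAhci
  VBoxManage storageattach "stx-aio" --storagectl "SATA" --port 0 --device 0 --type hdd --medium stx-aio-sata.vdi
  # 20 GB disk on an NVMe controller (NVMe emulation requires the Extension Pack)
  VBoxManage createmedium disk --filename stx-aio-nvme.vdi --size 20480
  VBoxManage storagectl "stx-aio" --name "NVMe" --add pcie --controller NVMe
  VBoxManage storageattach "stx-aio" --storagectl "NVMe" --port 0 --device 0 --type hdd --medium stx-aio-nvme.vdi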
I used the latest ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso
So I wonder what is wrong with this deployment; am I missing something?
Kubelet failure (/var/log/daemon.log):
2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX Affine Tasks.
2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon with pid=912 started!
2023-03-21T19:35:20.057 localhost affine-tasks.sh(1211): info : Starting.
2023-03-21T19:35:20.058 localhost affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 (0xff), isol=, nonisol=0-7 (0xff)
2023-03-21T19:35:20.128 localhost affine-tasks.sh(1211): info : Affined 58 processes to all cores.
2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: Scheduled restart job, restart counter is at 5.
2023-03-21T19:35:20.303 localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin Daemon...
2023-03-21T19:35:20.304 localhost systemd[1]: info isolcpu_plugin.service: Succeeded.
2023-03-21T19:35:20.305 localhost systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon.
2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes Kubelet Server.
2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Start request repeated too quickly.
2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Failed with result 'exit-code'.
2023-03-21T19:35:20.306 localhost systemd[1]: err Failed to start Kubernetes Kubelet Server.
2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for Kubernetes Isolated CPU Plugin Daemon.
2023-03-21T19:35:20.309 localhost systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start failed with result 'dependency'.
2023-03-21T19:35:20.514 localhost sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary operator expected
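For anyone triaging this, a few generic systemd commands help separate "the unit hit its restart limit" from the underlying failure (nothing StarlingX-specific; run as root on the controller):

  # Current state plus the most recent log lines for the unit
  systemctl status kubelet.service
  # Full history for the unit since boot; shows the underlying exec error
  journalctl -u kubelet.service -b --no-pager
  # Clear the rate-limit/failed state before another start attempt
  systemctl reset-failed kubelet.service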
Bootstrap failure:
TASK [bootstrap/persist-config : Fail if populate config script throws an exception] ****************************
Wednesday 22 March 2023 17:29:05 +0000 (0:00:00.024) 0:01:40.002 *******
fatal: [localhost]: FAILED! => changed=false
  msg: Failed to provision initial system configuration.

PLAY RECAP ****************************
localhost : ok=180 changed=45 unreachable=0 failed=1 skipped=235 rescued=0 ignored=0
2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK [bootstrap/persist-config : debug] ****************************
2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023 17:29:05 +0000 (0:00:06.932) 0:01:39.978 *******
2023-03-22 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] =>
  populate_result:
    changed: true
    failed: false
    failed_when_result: false
    msg: non-zero return code
    rc: 1
    stderr: |-
      Traceback (most recent call last):
        File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in <module>
          populate_service_parameter_config(client)
        File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config
          populate_docker_kube_config(client)
        File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config
          client.sysinv.service_parameter.delete(parameter.uuid)
        File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete
          return self._delete(self._path(parameter_id))
        File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete
          self.api.raw_request('DELETE', url)
        File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request
          return self._http_request(url, method, **kwargs)
        File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request
          raise exceptions.from_response(
      cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable
    stdout: |-
      Updating system config...
      System config completed.
      Deleting network, routes, addresses, and address pool for network mgmt...
      Updating management network...
      Deleting network, routes, addresses, and address pool for network pxeboot...
      Updating pxeboot network...
      Deleting network, routes, addresses, and address pool for network oam...
      Updating oam network...
      Deleting network, routes, addresses, and address pool for network multicast...
      Updating multicast network...
      Deleting network, routes, addresses, and address pool for network cluster-host...
      Updating cluster host network...
      Deleting network, routes, addresses, and address pool for network cluster-pod...
      Updating cluster pod network...
      Deleting network, routes, addresses, and address pool for network cluster-service...
      Updating cluster service network...
      Network config completed.
      Populating/Updating DNS config...
      DNS config completed.
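The traceback shows the script dying while deleting an existing docker/Kubernetes service parameter. Before re-running the bootstrap, the parameters it trips over can be inspected with the sysinv CLI; a sketch, assuming the platform credentials file already exists (which may not hold on a half-bootstrapped node):

  # Load platform admin credentials, then list the stored service parameters
  source /etc/platform/openrc
  system service-parameter-list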
Thanks in advance

-- 
Best Regards,
Giedrius
-- 
Best Regards,
Giedrius