[Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox

voipas voipas at gmail.com
Fri Apr 7 07:34:21 UTC 2023


Hey, I found the issue.
I'm using a NUC12 with a 12th Gen Intel(R) Core(TM) i5-1240P CPU. I
disabled nested virtualization for the VM's CPU, and it seems to have
started working (at least bootstrap didn't fail).
So happy Easter!
Thanks for the support
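
(For anyone hitting the same thing: nested virtualization can be toggled
from the host with VBoxManage while the VM is powered off. A minimal
sketch - "stx-aio" stands in for your VM name:

  # Turn off nested VT-x/AMD-V exposure to the guest CPU
  VBoxManage modifyvm "stx-aio" --nested-hw-virt off

  # Confirm the setting
  VBoxManage showvminfo "stx-aio" | grep -i nested
)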

On Fri, Mar 31, 2023 at 10:50 AM voipas <voipas at gmail.com> wrote:

> Hey, any other recommendations?
>
> On 2023-03-23, Thu at 16:25, voipas <voipas at gmail.com> wrote:
>
>> Hey Douglas,
>>
>>   Thanks for your response. I recreated the VM with 24 GB RAM - again,
>> after installation, kubelet is still not launching... Also attaching the
>> disk layout, in case disk space is a factor:
>> NAME                     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>> sda                        8:0    0   520G  0 disk
>> |-sda1                     8:1    0     1M  0 part
>> |-sda2                     8:2    0  29.3G  0 part
>> /var/rootdirs/opt/platform-backup
>> |-sda3                     8:3    0   300M  0 part /boot/efi
>> |-sda4                     8:4    0     2G  0 part /boot
>> `-sda5                     8:5    0 488.4G  0 part
>>   |-cgts--vg-root--lv    253:0    0    20G  0 lvm  /sysroot
>>   |-cgts--vg-var--lv     253:1    0    20G  0 lvm  /var
>>   |-cgts--vg-log--lv     253:2    0   7.8G  0 lvm  /var/log
>>   `-cgts--vg-scratch--lv 253:3    0  15.6G  0 lvm  /var/rootdirs/scratch
>> sr0                       11:0    1  1024M  0 rom
>> nvme0n1                  259:0    0    50G  0 disk
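>>
>> (For completeness, a quick way to double-check free space on the
>> platform volume group and the key filesystems - a sketch only:
>>
>>   vgs cgts-vg            # free space in the cgts-vg volume group
>>   df -h /var /var/log    # usage of the filesystems kubelet writes to
>> )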
>>
>>
>> I see these kinds of errors in the daemon log:
>>
>> 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: notice
>> /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy
>> directory /var/run/, updating /var/run/kubernetes →
>> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
>> 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info
>> Inserted module 'ib_cm'
>> 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info
>> Inserted module 'ib_ucm'
>> 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed
>> to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.021 localhost systemd[1]: info Finished Create Static
>> Device Nodes in /dev.
>> 2023-03-23T13:58:51.021 localhost systemd[1]: info Starting Rule-based
>> Manager for Device Events and Files...
>> 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info
>> Inserted module 'ib_uverbs'
>> 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info
>> Inserted module 'iw_cm'
>> 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info
>> Inserted module 'rdma_cm'
>> 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info
>> Inserted module 'rdma_ucm'
>> 2023-03-23T13:58:51.021 localhost systemd-udevd[459]: err
>> /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown user 'ceph', ignoring
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err
>> /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown group 'ceph', ignoring
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err
>> /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown user 'ceph', ignoring
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err
>> /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown group 'ceph',
>> ignoring
>> 2023-03-23T13:58:51.022 localhost systemd[1]: info Started Rule-based
>> Manager for Device Events and Files.
>>
>> 2023-03-23T13:58:51.022 localhost systemd[1]: info Starting Apply Kernel
>> Variables...
>> 2023-03-23T13:58:51.022 localhost systemd-sysctl[482]: info Couldn't
>> write '20' to 'fs/negative-dentry-limit', ignoring: No such file or
>> directory
>> 2023-03-23T13:58:51.022 localhost systemd[1]: info Finished Apply Kernel
>> Variables.
>>
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info Using
>> interface naming scheme 'vSTX7_0'.
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info ethtool:
>> autonegotiation is unset or enabled, the speed and duplex are not writable.
>> 2023-03-23T13:58:51.022 localhost systemd-udevd[463]: info ethtool:
>> autonegotiation is unset or enabled, the speed and duplex are not writable.
>>
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: notice
>> /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy
>> directory /var/run/, updating /var/run/kubernetes →
>> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
>>
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed
>> to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or
>> directory. Ignoring
>> 2023-03-23T13:58:51.024 localhost systemd[1]: info Finished Create
>> Volatile Files and Directories.
>>
>> 2023-03-23T13:58:51.025 localhost systemd[1]: info Started Kubernetes
>> Kubelet Server.
>> 2023-03-23T13:58:51.025 localhost polkitd[801]: info started daemon
>> version 0.105 using authority implementation `local' version `0.105'
>> 2023-03-23T13:58:51.025 localhost systemd[863]: info kubelet.service:
>> Failed to locate executable /usr/bin/kubelet: No such file or directory
>> 2023-03-23T13:58:51.025 localhost systemd[863]: err kubelet.service:
>> Failed at step EXEC spawning /usr/bin/kubelet: No such file or directory
>> 2023-03-23T13:58:51.025 localhost systemd[1]: info Starting Kubernetes
>> Isolated CPU Plugin Daemon...
>>
>> 2023-03-23T13:59:10.551 localhost controller_config[1459]: info Pausing
>> for 5 seconds...
>> 2023-03-23T13:59:14.605 localhost lldpd[998]: info removal request for
>> address of fe80::a00:27ff:fe85:8445%2, but no knowledge of it
>> 2023-03-23T13:59:14.842 localhost lldpd[998]: info removal request for
>> address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
>> 2023-03-23T13:59:15.565 localhost systemd[1]: notice
>> controllerconfig.service: Main process exited, code=exited, status=1/FAILURE
>> 2023-03-23T13:59:15.565 localhost systemd[1]: warning
>> controllerconfig.service: Failed with result 'exit-code'.
>> 2023-03-23T13:59:15.579 localhost systemd[1]: info Finished General
>> StarlingX config gate.
>> 2023-03-23T13:59:15.581 localhost systemd[1]: info Starting StarlingX
>> Maintenance Filesystem Monitor...
>> 2023-03-23T13:59:15.583 localhost systemd[1]: info Started Getty on tty1.
>> 2023-03-23T13:59:15.584 localhost systemd[1]: info Reached target Login
>> Prompts.
>> 2023-03-23T13:59:15.586 localhost systemd[1]: info Starting StarlingX
>> Maintenance Worker Goenable Ready...
>> 2023-03-23T13:59:15.587 localhost systemd[1]: info Starting StarlingX
>> Maintenance Goenable Ready...
>> 2023-03-23T13:59:15.588 localhost systemd[1]: info Starting StarlingX
>> Maintenance Heartbeat Client...
>> 2023-03-23T13:59:15.589 localhost systemd[1]: info Starting Starling-X
>> Maintenance Link Monitor...
>> 2023-03-23T13:59:15.590 localhost systemd[1]: info Starting StarlingX
>> Maintenance Alarm Handler Client...
>> 2023-03-23T13:59:15.591 localhost systemd[1]: info Starting StarlingX
>> Maintenance Logger...
>> 2023-03-23T13:59:15.593 localhost systemd[1]: info Starting StarlingX
>> Pxeboot Feed Refresh...
>> 2023-03-23T13:59:15.594 localhost systemd[1]: info Starting Service
>> Management Unit...
>> 2023-03-23T13:59:15.597 localhost systemd[1]: info Finished StarlingX
>> Maintenance Worker Goenable Ready.
>> 2023-03-23T13:59:15.610 localhost goenabled[1504]: info Goenabled Ready:
>> [  OK  ]
>> 2023-03-23T13:59:15.610 localhost systemd[1]: info Finished StarlingX
>> Maintenance Goenable Ready.
>> 2023-03-23T13:59:15.630 localhost lmon[1507]: info Starting lmond: OK
>> 2023-03-23T13:59:15.630 localhost systemd[1]: info lmon.service: Can't
>> open PID file /run/lmond.pid (yet?) after start: Operation not permitted
>> 2023-03-23T13:59:15.633 localhost mtclog[1509]: info Starting mtclogd: OK
>> 2023-03-23T13:59:15.634 localhost hbsClient[1506]: info Starting
>> hbsClient: OK
>> 2023-03-23T13:59:15.635 localhost fsmon[1501]: info Starting fsmond: OK
>> 2023-03-23T13:59:15.636 localhost systemd[1]: info mtclog.service: Can't
>> open PID file /run/mtclogd.pid (yet?) after start: Operation not permitted
>> 2023-03-23T13:59:15.636 localhost systemd[1]: info hbsClient.service:
>> Can't open PID file /run/hbsClient.pid (yet?) after start: Operation not
>> permitted
>> 2023-03-23T13:59:15.637 localhost systemd[1]: info fsmon.service: Can't
>> open PID file /run/fsmond.pid (yet?) after start: Operation not permitted
>> 2023-03-23T13:59:15.637 localhost mtcalarm[1508]: info Starting
>> mtcalarmd: OK
>> 2023-03-23T13:59:15.639 localhost systemd[1]: info mtcalarm.service:
>> Can't open PID file /run/mtcalarmd.pid (yet?) after start: Operation not
>> permitted
>>
>>
>> 2023-03-23T14:03:53.832 localhost affine-tasks.sh(1218): info : Recovery
>> wait, elapsed 301 seconds. Reason: k8s-infra not configured
>> 2023-03-23T14:08:00.073 localhost avahi-daemon[796]: info Joining mDNS
>> multicast group on interface enp0s3.IPv4 with address 10.0.1.3.
>> 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info New relevant
>> interface enp0s3.IPv4 for mDNS.
>> 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info Registering new
>> address record for 10.0.1.3 on enp0s3.IPv4.
>> 2023-03-23T14:08:01.560 localhost avahi-daemon[796]: info Joining mDNS
>> multicast group on interface enp0s3.IPv6 with address
>> fe80::a00:27ff:fe85:8445.
>> 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info New relevant
>> interface enp0s3.IPv6 for mDNS.
>> 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info Registering new
>> address record for fe80::a00:27ff:fe85:8445 on enp0s3.*.
>> 2023-03-23T14:08:54.436 localhost affine-tasks.sh(1218): info : Recovery
>> wait, elapsed 602 seconds. Reason: k8s-infra not configured
>> 2023-03-23T14:11:06.065 localhost lldpd[998]: info removal request for
>> address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
>> 2023-03-23T14:13:37.869 localhost systemd[1]: info Starting Cleanup of
>> Temporary Directories...
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: notice
>> /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy
>> directory /var/run/, updating /var/run/kubernetes →
>> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly.
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed
>> to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or
>> directory. Ignoring
>> 2023-03-23T14:13:37.912 localhost systemd[1]: info
>> systemd-tmpfiles-clean.service: Succeeded.
>> 2023-03-23T14:13:37.912 localhost systemd[1]: info Finished Cleanup of
>> Temporary Directories.
>> 2023-03-23T14:13:55.158 localhost affine-tasks.sh(1218): info : Recovery
>> wait, elapsed 903 seconds. Reason: k8s-infra not configured
>> 2023-03-23T14:18:55.627 localhost affine-tasks.sh(1218): info : Recovery
>> wait, elapsed 1203 seconds. Reason: k8s-infra not configured
>> 2023-03-23T14:22:57.658 localhost lldpd[998]: info removal request for
>> address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it
>>
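>> (The telling line above appears to be "Failed to locate executable
>> /usr/bin/kubelet". A quick way to confirm whether the binary actually
>> landed on disk - a sketch:
>>
>>   ls -l /usr/bin/kubelet
>>   journalctl -u kubelet --no-pager | tail -n 20
>> )
>>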
>> On Thu, Mar 23, 2023 at 3:06 PM Pereira, Douglas <
>> Douglas.Pereira at windriver.com> wrote:
>>
>>> Hi Giedrius,
>>>
>>>
>>>
>>> Have you tried increasing the VM memory? The documentation
>>> <https://docs.starlingx.io/deploy_install_guides/release/virtual/install_virtualbox.html#os-type-and-memory-settings>
>>> suggests 20480 MB for the AIO-SX configuration, and you are using only 16 GB.
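>>>
>>> (With the VM powered off, the memory can be raised from the host; a
>>> minimal sketch, "stx-aio" being a placeholder VM name:
>>>
>>>   VBoxManage modifyvm "stx-aio" --memory 20480
>>> )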
>>>
>>>
>>>
>>> Regards,
>>>
>>> Doug
>>>
>>>
>>>
>>> From: voipas <voipas at gmail.com>
>>> Sent: Wednesday, March 22, 2023 2:40 PM
>>> To: starlingx-discuss at lists.starlingx.io
>>> Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox
>>>
>>> Hello colleagues,
>>>
>>>
>>>
>>>   I need your support here. The Simplex StarlingX AIO installation fails
>>> at the first steps...
>>>
>>>    - After installation and reboot, kubelet.service (Kubernetes Kubelet
>>>      Server) has failed. I'm not sure whether that is normal at this
>>>      phase... See more details below.
>>>    - Bootstrapping failed: "Failed to provision initial system
>>>      configuration."
>>>
>>>
>>>
>>> I'm trying to install StarlingX on my Intel NUC box (i5, 64 GB RAM, 2 TB
>>> disk) running Ubuntu Desktop. VirtualBox version is 6.1.
>>>
>>>
>>>
>>> VM configuration:
>>>
>>>    - 8 vCPU (VT-x/AMD-V, Nested Paging, PAE/NX, KVM Paravirtualization)
>>>    - 16 GB RAM
>>>    - Storage:
>>>       - Controller SATA: 520 GB
>>>       - Controller NVMe: 20 GB
>>>    - Network:
>>>       - Intel Pro/1000 MT Desktop - OAM network (internet accessible)
>>>       - Intel Pro/1000 MT Desktop - Data network (internet accessible)
>>>
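>>> (Roughly the equivalent VBoxManage settings, assuming a VM named
>>> "stx-aio" - the name is a placeholder, and the flags below are the
>>> stock VirtualBox 6.1 ones rather than anything StarlingX-specific:
>>>
>>>   VBoxManage modifyvm "stx-aio" --cpus 8 --memory 16384 \
>>>       --hwvirtex on --nestedpaging on --pae on --paravirtprovider kvm
>>> )
>>>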
>>> I used the latest ISO image:
>>> http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso
>>>
>>>
>>>
>>> So I wonder what is wrong with this deployment - am I missing something?
>>>
>>> Kubelet failure (/var/log/daemon.log):
>>>
>>> 2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX Affine Tasks.
>>> 2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon with pid=912 started!
>>> 2023-03-21T19:35:20.057 localhost affine-tasks.sh(1211): info : Starting.
>>> 2023-03-21T19:35:20.058 localhost affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 (0xff), isol=, nonisol=0-7 (0xff)
>>> 2023-03-21T19:35:20.128 localhost affine-tasks.sh(1211): info : Affined 58 processes to all cores.
>>> 2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: Scheduled restart job, restart counter is at 5.
>>> 2023-03-21T19:35:20.303 localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin Daemon...
>>> 2023-03-21T19:35:20.304 localhost systemd[1]: info isolcpu_plugin.service: Succeeded.
>>> 2023-03-21T19:35:20.305 localhost systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon.
>>> 2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes Kubelet Server.
>>> 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Start request repeated too quickly.
>>> 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Failed with result 'exit-code'.
>>> 2023-03-21T19:35:20.306 localhost systemd[1]: err Failed to start Kubernetes Kubelet Server.
>>> 2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for Kubernetes Isolated CPU Plugin Daemon.
>>> 2023-03-21T19:35:20.309 localhost systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start failed with result 'dependency'.
>>> 2023-03-21T19:35:20.514 localhost sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary operator expected
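>>>
>>> ("Start request repeated too quickly" means systemd hit the unit's
>>> restart limit and gave up. To retry by hand after fixing the root
>>> cause - a sketch:
>>>
>>>   systemctl reset-failed kubelet.service
>>>   systemctl start kubelet.service
>>> )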
>>>
>>> Bootstrap failure:
>>>
>>> TASK [bootstrap/persist-config : Fail if populate config script throws an exception] ***
>>> Wednesday 22 March 2023  17:29:05 +0000 (0:00:00.024)       0:01:40.002 *******
>>> fatal: [localhost]: FAILED! => changed=false
>>>   msg: Failed to provision initial system configuration.
>>>
>>> PLAY RECAP ***
>>> localhost                  : ok=180  changed=45   unreachable=0    failed=1    skipped=235  rescued=0    ignored=0
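>>>
>>> (For what it's worth, once the underlying error is addressed the
>>> bootstrap can simply be re-run; per the StarlingX install guide the
>>> playbook is invoked roughly as:
>>>
>>>   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
>>> )
>>>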
>>> 2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK [bootstrap/persist-config : debug] ***
>>> 2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023  17:29:05 +0000 (0:00:06.932)       0:01:39.978 *******
>>> 2023-03-22 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] =>
>>>   populate_result:
>>>     changed: true
>>>     failed: false
>>>     failed_when_result: false
>>>     msg: non-zero return code
>>>     rc: 1
>>>     stderr: |-
>>>       Traceback (most recent call last):
>>>         File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in <module>
>>>           populate_service_parameter_config(client)
>>>         File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config
>>>           populate_docker_kube_config(client)
>>>         File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config
>>>           client.sysinv.service_parameter.delete(parameter.uuid)
>>>         File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete
>>>           return self._delete(self._path(parameter_id))
>>>         File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete
>>>           self.api.raw_request('DELETE', url)
>>>         File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request
>>>           return self._http_request(url, method, **kwargs)
>>>         File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request
>>>           raise exceptions.from_response(
>>>       cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable
>>>     stdout: |-
>>>       Updating system config...
>>>       System config completed.
>>>       Deleting network, routes, addresses, and address pool for network mgmt...
>>>       Updating management network...
>>>       Deleting network, routes, addresses, and address pool for network pxeboot...
>>>       Updating pxeboot network...
>>>       Deleting network, routes, addresses, and address pool for network oam...
>>>       Updating oam network...
>>>       Deleting network, routes, addresses, and address pool for network multicast...
>>>       Updating multicast network...
>>>       Deleting network, routes, addresses, and address pool for network cluster-host...
>>>       Updating cluster host network...
>>>       Deleting network, routes, addresses, and address pool for network cluster-pod...
>>>       Updating cluster pod network...
>>>       Deleting network, routes, addresses, and address pool for network cluster-service...
>>>       Updating cluster service network...
>>>       Network config completed.
>>>       Populating/Updating DNS config...
>>>       DNS config completed.
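>>>
>>> (The traceback shows populate_docker_kube_config deleting existing
>>> docker/kubernetes service parameters through the sysinv client and the
>>> API answering with an internal error. After a failed run, those
>>> parameters can be inspected from the controller shell - a sketch,
>>> assuming the platform credentials are in the usual place:
>>>
>>>   source /etc/platform/openrc
>>>   system service-parameter-list | grep -Ei 'docker|kube'
>>> )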
>>>
>>>
>>>
>>> Thanks in advance
>>>
>>>
>>>
>>> --
>>>
>>> Best Regards,
>>> Giedrius
>>>
>>
>>
>> --
>> Best Regards,
>> Giedrius
>>
> --
> Best Regards,
> Giedrius
>


-- 
Best Regards,
Giedrius